| Literature DB >> 28425947 |
Maryam Rahnemoonfar, Clay Sheppard.
Abstract
Recent years have witnessed significant advances in computer vision research based on deep learning. The success of these methods largely depends on the availability of a large number of training samples, and labeling training samples is an expensive process. In this paper, we present a simulated deep convolutional neural network for yield estimation. Knowing the exact number of fruits, flowers, and trees helps farmers make better decisions on cultivation practices, plant disease prevention, and the size of the harvest labor force. The current practice of yield estimation, based on manual counting of fruits or flowers by workers, is a time-consuming and expensive process, and it is not practical for large fields. Automatic yield estimation based on robotic agriculture provides a viable solution in this regard. Our network is trained entirely on synthetic data and tested on real data. To capture features at multiple scales, we used a modified version of the Inception-ResNet architecture. Our algorithm counts efficiently even if fruits are in shadow, occluded by foliage or branches, or overlap one another to some degree. Experimental results show a 91% average test accuracy on real images and 93% on synthetic images.
Keywords: agricultural sensors; deep learning; simulated learning; yield estimation
Year: 2017 PMID: 28425947 PMCID: PMC5426829 DOI: 10.3390/s17040905
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1The framework of our research.
Figure 2Synthetic image generation.
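The synthetic training images of Figure 2 pair each generated image with an exact fruit count, which is what makes labeling free. A minimal stdlib-only sketch of the idea, where "fruits" are filled circles on a blank background (the function name, image size, and drawing scheme are illustrative assumptions, not the authors' pipeline):

```python
import random

def synthetic_image(n_fruits, size=64, radius=4, seed=None):
    """Generate a toy count-labeled image: n_fruits filled circles
    (pixel value 1) on a zero background. Returns (image, count)."""
    rng = random.Random(seed)
    img = [[0] * size for _ in range(size)]
    for _ in range(n_fruits):
        # Keep each circle fully inside the frame.
        cx = rng.randint(radius, size - radius - 1)
        cy = rng.randint(radius, size - radius - 1)
        for y in range(cy - radius, cy + radius + 1):
            for x in range(cx - radius, cx + radius + 1):
                if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                    img[y][x] = 1
    return img, n_fruits
```

Because the generator knows how many circles it drew, the count label is exact by construction, even when circles overlap, which mirrors why simulated data sidesteps manual labeling.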
Figure 3The architecture of our network.
Figure 4Modified Inception-ResNet-A module.
Figure 5Modified reduction module.
Figure 6Mean square error for training at a dropout value of 65%.
Real tomato images with predicted (P) and actual count (GT). Each row lists four image pairs; the image thumbnails themselves are not reproduced here.

| P | GT | P | GT | P | GT | P | GT |
|---|---|---|---|---|---|---|---|
| 36 | 38 | 27 | 24 | 18 | 17 | 27 | 28 |
| 22 | 25 | 21 | 23 | 15 | 14 | 12 | 12 |
| 22 | 22 | 13 | 12 | 14 | 14 | 14 | 13 |
| 20 | 25 | 19 | 19 | 38 | 39 | 16 | 16 |
| 22 | 22 | 16 | 17 | 16 | 19 | 24 | 24 |
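A common per-image accuracy for counting tasks is 100 × (1 − |P − GT| / GT), floored at zero; this definition is an illustrative assumption, not necessarily the paper's exact formula. Applied to the twenty pairs above it gives an average in the low-to-mid nineties, consistent with the 91% reported over all 100 images:

```python
def count_accuracy(pred, truth):
    """Per-image accuracy: 100 * (1 - relative count error), floored
    at 0. An illustrative metric, not the authors' stated formula."""
    return max(0.0, 1.0 - abs(pred - truth) / truth) * 100.0

# Predicted / ground-truth pairs from the table above.
PAIRS = [(36, 38), (27, 24), (18, 17), (27, 28),
         (22, 25), (21, 23), (15, 14), (12, 12),
         (22, 22), (13, 12), (14, 14), (14, 13),
         (20, 25), (19, 19), (38, 39), (16, 16),
         (22, 22), (16, 17), (16, 19), (24, 24)]

average = sum(count_accuracy(p, g) for p, g in PAIRS) / len(PAIRS)
```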
Figure 7The accuracy for all 100 images.
Figure 8A linear regression between computed and actual counts for 100 real tomato images.
Figure 9A linear regression between computed counts by the area-based method and the actual count for 100 real tomato images.
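The linear regressions of Figures 8 and 9 fit a straight line between computed and actual counts; the closer the slope is to 1 and R² to 1, the better the agreement. An ordinary least-squares fit with R² can be sketched in plain Python (the function name is illustrative):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a*x + b, plus the coefficient
    of determination R^2."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx                 # slope
    b = my - a * mx               # intercept
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1.0 - ss_res / ss_tot
    return a, b, r2
```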
Average accuracy over 100 images.
| Method | Average Accuracy (%) |
|---|---|
| Proposed method | 91.03 |
| Area-based counting | 66.16 |
| Shallow network | 11.60 |
| Our network with the original Inception-ResNet-A module | 76.00 |
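The area-based baseline in the table is typically computed by dividing the total fruit-colored pixel area by the average area of a single fruit; this interpretation, and the function below, are assumptions for illustration rather than the authors' exact baseline:

```python
def area_based_count(mask, avg_fruit_area):
    """Estimate fruit count from a binary mask (2-D list of 0/1
    fruit pixels) as total fruit area / average single-fruit area.
    Overlapping fruits merge into one region, which is why this
    baseline undercounts in cluttered scenes."""
    total_area = sum(sum(row) for row in mask)
    return round(total_area / avg_fruit_area)
```

Occlusion and overlap shrink the visible area per fruit, which is consistent with this baseline's much lower 66.16% accuracy compared with the proposed network.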
Average time for counting.
| Method | Average Time per Test Image (s) |
|---|---|
| Proposed method | 0.006 |
| Area-based method | 0.05 |
| Manual counting | 6.5 |