| Literature DB >> 32858803 |
Wellington Castro1, José Marcato Junior2, Caio Polidoro1, Lucas Prado Osco3, Wesley Gonçalves2, Lucas Rodrigues1, Mateus Santos4, Liana Jank4, Sanzio Barrios4, Cacilda Valle4, Rosangela Simeão4, Camilo Carromeu4, Eloise Silveira2, Lúcio André de Castro Jorge5, Edson Matsubara1.
Abstract
Monitoring the biomass of forages in experimental plots and on livestock farms is a time-consuming, expensive, and biased task. Non-destructive, accurate, precise, and fast phenotyping strategies for biomass yield are therefore needed. To promote high-throughput phenotyping in forages, we propose and evaluate the use of deep learning-based methods and UAV (Unmanned Aerial Vehicle)-based RGB images to estimate the biomass yield of different genotypes of the forage grass species Panicum maximum Jacq. Experiments were conducted in the Brazilian Cerrado with 110 genotypes and three replications, totaling 330 plots. Two regression models based on Convolutional Neural Networks (CNNs), AlexNet and ResNet18, were evaluated and compared to VGGNet, adopted in previous work on the same topic for other grass species. The predictions returned by the models reached a correlation of 0.88 and a mean absolute error of 12.98% using AlexNet with pre-training and data augmentation. This proposal may contribute to forage biomass estimation in breeding populations and livestock areas, as well as reduce labor in the field.
Keywords: Convolutional Neural Network; biomass yield; data augmentation; phenotyping
Year: 2020 PMID: 32858803 PMCID: PMC7506807 DOI: 10.3390/s20174802
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Dataset location considering the WGS-84 reference system: (a) South America/Brazil; (b) Mato Grosso do Sul State; (c) Campo Grande municipality; and (d) study area.
Figure 2. Plot identification procedure: (a) orthomosaic; (b) user-defined experiment field in red; and (c) plots defined using our Python script.
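The plot-identification step in Figure 2c can be sketched as follows. This is an illustrative snippet, not the authors' script: it assumes the user-defined field is already a rectangular image crop split into a regular, axis-aligned grid, whereas the real script may also handle georeferencing.

```python
import numpy as np

def split_plots(field, n_rows, n_cols):
    """Split the user-defined experiment field (an H×W×3 orthomosaic crop)
    into an n_rows × n_cols grid of equally sized plot images.

    A regular, axis-aligned grid is an assumption of this sketch."""
    h = field.shape[0] // n_rows
    w = field.shape[1] // n_cols
    return [field[i * h:(i + 1) * h, j * w:(j + 1) * w]
            for i in range(n_rows) for j in range(n_cols)]
```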
Figure 3. Class attribute y distribution: biomass in kg·ha⁻¹.
Experimental setup.
| #Experiment | Model | Batch Size | Data-Set | Epochs |
|---|---|---|---|---|
| 1 | AlexNet | 256 | original | 400 |
| 2 | AlexNet | 256 | h | 400 |
| 3 | AlexNet | 256 | hv | 500 |
| 4 | ResNet18 | 128 | original | 500 |
| 5 | ResNet18 | 128 | h | 500 |
| 6 | ResNet18 | 128 | hv | 500 |
| 7 | AlexNet Pre-Trained | 256 | original | 200 |
| 8 | AlexNet Pre-Trained | 256 | h | 200 |
| 9 | AlexNet Pre-Trained | 256 | hv | 200 |
| 10 | ResNet18 Pre-Trained | 128 | original | 500 |
| 11 | ResNet18 Pre-Trained | 128 | h | 400 |
| 12 | ResNet18 Pre-Trained | 128 | hv | 400 |
| 13 | VGGNet11 Pre-Trained | 64 | hv | 400 |
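The h and hv dataset variants denote flip-based data augmentation: h adds horizontally mirrored copies of the plot images, and hv adds both horizontal and vertical flips. A minimal numpy sketch of these two variants:

```python
import numpy as np

def augment_h(img):
    """h variant: the original plot image plus a left-right mirrored copy."""
    return [img, np.fliplr(img)]

def augment_hv(img):
    """hv variant: original, left-right, up-down, and doubly flipped copies."""
    return [img, np.fliplr(img), np.flipud(img), np.flipud(np.fliplr(img))]
```

These multiplicities (2× and 4× the original dataset) are consistent with the roughly 2–4× longer training times reported for the h and hv experiments in the timing table.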
Experimental results. Experiment #9 presents the lowest MAE (mean absolute error) and the highest correlation. Note that y ranges from 1556.00 kg·ha⁻¹ to 15,333.00 kg·ha⁻¹, so an MAE of 730 represents a variation of 730 kg·ha⁻¹ within this range of values.
| #Experiment | Model | Mean Absolute Error (kg·ha⁻¹) | Mean Absolute Error (%) | Correlation (r) |
|---|---|---|---|---|
| 1 | AlexNet | 837 ± 106 | 14.58 ± 2.52 | 0.84 ± 0.03 |
| 2 | AlexNet | 880 ± 202 | 15.11 ± 3.24 | 0.83 ± 0.06 |
| 3 | AlexNet | 924 ± 143 | 15.48 ± 2.30 | 0.82 ± 0.05 |
| 4 | ResNet18 | 1086 ± 219 | 17.70 ± 3.41 | 0.74 ± 0.06 |
| 5 | ResNet18 | 1046 ± 107 | 19.01 ± 2.77 | 0.74 ± 0.06 |
| 6 | ResNet18 | 1031 ± 153 | 18.76 ± 4.28 | 0.75 ± 0.06 |
| 7 | AlexNet Pre-Trained | 759 ± 102 | 13.23 ± 2.23 | 0.87 ± 0.05 |
| 8 | AlexNet Pre-Trained | 768 ± 123 | 13.54 ± 2.88 | 0.87 ± 0.03 |
| 9 | AlexNet Pre-Trained | 730 ± 59 | 12.98 ± 2.18 | 0.88 ± 0.04 |
| 10 | ResNet18 Pre-Trained | 1206 ± 233 | 19.46 ± 5.15 | 0.73 ± 0.04 |
| 11 | ResNet18 Pre-Trained | 1205 ± 194 | 23.16 ± 4.80 | 0.71 ± 0.07 |
| 12 | ResNet18 Pre-Trained | 1012 ± 128 | 18.58 ± 2.34 | 0.77 ± 0.05 |
| 13 | VGGNet11 Pre-Trained | 825 ± 152 | 13.89 ± 3.09 | 0.84 ± 0.04 |
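The three reported metrics can be computed from predictions as in the sketch below. The normalization of the percentage MAE by the mean observed yield is an assumption of this snippet (the paper's exact normalization is not stated in this record); the correlation is the Pearson coefficient.

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Return MAE in kg·ha⁻¹, MAE as a percentage of the mean observed
    yield (an assumed normalization), and the Pearson correlation r."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mae = np.mean(np.abs(y_true - y_pred))
    mae_pct = 100.0 * mae / y_true.mean()
    r = np.corrcoef(y_true, y_pred)[0, 1]
    return mae, mae_pct, r
```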
Figure 4. Mean and 95% confidence interval of a post-hoc Tukey HSD test performed on the MAE results.
Figure 5. Predicted vs. real plots.
Figure 6. ROC (Receiver Operating Characteristic) curve for regression; points closer to (0,0) indicate better results.
Figure 7. Comparison of the predicted vs. real y data distributions; larger intersection areas between histograms indicate better predictions.
Intersection areas of the histograms shown in Figure 7.
| #Experiment | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Intersection area | 0.83 | 0.83 | 0.91 | 0.78 | 0.72 | 0.78 | 0.89 | 0.89 | 0.92 | 0.68 | 0.62 | 0.76 | 0.90 |
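The intersection area of two such histograms can be computed as below, where 1.0 means identical distributions and 0.0 means no overlap. The bin count and the shared bin range are assumptions of this sketch.

```python
import numpy as np

def hist_intersection(y_true, y_pred, bins=20):
    """Intersection area of the normalized histograms of real vs predicted y.

    Both histograms share the same bin edges; the bin count is assumed."""
    lo = min(np.min(y_true), np.min(y_pred))
    hi = max(np.max(y_true), np.max(y_pred))
    h_t, _ = np.histogram(y_true, bins=bins, range=(lo, hi))
    h_p, _ = np.histogram(y_pred, bins=bins, range=(lo, hi))
    h_t = h_t / h_t.sum()
    h_p = h_p / h_p.sum()
    # Sum the overlapping mass in each bin.
    return float(np.minimum(h_t, h_p).sum())
```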
Figure 8. Validation loss over epochs for all experiments.
Figure 9. Heatmaps of the top three best predictions made by Experiment #9 (pre-trained AlexNet with hv data augmentation): (a) first best prediction; (b) second best prediction; (c) third best prediction.
Figure 10. Heatmaps of the top three worst predictions made by Experiment #9 (pre-trained AlexNet with hv data augmentation): (a) first worst prediction; (b) second worst prediction; (c) third worst prediction.
Training and test time for one fold of the cross-validation procedure.
| #Experiment | Model | Training Time (min) | Test Time (s) |
|---|---|---|---|
| 1 | AlexNet | 35.8 | 0.39 |
| 2 | AlexNet h | 95.6 | 0.46 |
| 3 | AlexNet hv | 122.2 | 0.40 |
| 4 | ResNet18 | 60.2 | 0.47 |
| 5 | ResNet18 h | 131.3 | 0.54 |
| 6 | ResNet18 hv | 133.7 | 0.53 |
| 7 | AlexNet Pre-Trained | 15.8 | 0.37 |
| 8 | AlexNet Pre-Trained h | 36.2 | 0.44 |
| 9 | AlexNet Pre-Trained hv | 43.7 | 0.50 |
| 10 | ResNet18 Pre-Trained | 58.1 | 0.47 |
| 11 | ResNet18 Pre-Trained h | 102.2 | 0.67 |
| 12 | ResNet18 Pre-Trained hv | 104.4 | 0.43 |
| 13 | VGGNet11 Pre-Trained hv | 372.2 | 0.81 |