Rafael Faria Caldeira1,2, Wesley Esdras Santiago2, Barbara Teruel1.
Abstract
This article proposes the use of deep learning models to identify lesions on cotton leaves from images of the crop in the field. Cultivated in most of the world, cotton is one of the most economically important agricultural crops. Its cultivation in tropical regions has made it the target of a wide spectrum of agricultural pests and diseases, and efficient solutions are required. Moreover, the symptoms of the main pests and diseases cannot be differentiated in the initial stages, so the correct identification of a lesion can be difficult for the producer. To help resolve the problem, the present research provides a deep-learning-based solution for the screening of cotton leaves, which makes it possible to monitor the health of the cotton crop and make better management decisions. With the convolutional neural network models GoogleNet and ResNet50, precisions of 86.6% and 89.2%, respectively, were obtained. Compared with traditional image-processing approaches such as support vector machines (SVM), k-nearest neighbors (KNN), artificial neural networks (ANN) and neuro-fuzzy classifiers (NFC), the convolutional neural networks proved to be up to 25% more precise, suggesting that this method can contribute to a more rapid and reliable inspection of the plants growing in the field.
Keywords: artificial intelligence; convolutional neural networks; image processing; precision agriculture
Year: 2021 PMID: 34063578 PMCID: PMC8124293 DOI: 10.3390/s21093169
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1 Examples of images of (a) background (soil), (b) lesioned leaf, and (c) healthy leaf.
Figure 2 Pipeline adopted. In this paper, Steps III and IV were replaced by deep learning models.
Statistical texture attributes (computed from the normalized gray-level histogram $p(z_i)$, $i = 0, \ldots, L-1$).
| Characteristic | Description | Equation |
|---|---|---|
| I1 | Average | $m = \sum_{i=0}^{L-1} z_i \, p(z_i)$ |
| I2 | Standard deviation | $\sigma = \sqrt{\sum_{i=0}^{L-1} (z_i - m)^2 \, p(z_i)}$ |
| I3 | Smoothness | $R = 1 - \dfrac{1}{1 + \sigma^2}$ |
| I4 | Third moment | $\mu_3 = \sum_{i=0}^{L-1} (z_i - m)^3 \, p(z_i)$ |
| I5 | Uniformity | $U = \sum_{i=0}^{L-1} p^2(z_i)$ |
| I6 | Entropy | $e = -\sum_{i=0}^{L-1} p(z_i) \log_2 p(z_i)$ |
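The six statistical texture attributes above can be computed directly from a normalized gray-level histogram. The following is a minimal sketch (the function name `texture_features` and the use of the bin index as the gray level $z_i$ are illustrative assumptions, not the authors' implementation):

```python
import math

def texture_features(hist):
    """Statistical texture descriptors I1-I6 from a normalized
    gray-level histogram p(z_i); the bin index i is used as z_i."""
    m = sum(i * p for i, p in enumerate(hist))               # I1 average
    var = sum((i - m) ** 2 * p for i, p in enumerate(hist))
    sigma = math.sqrt(var)                                   # I2 standard deviation
    R = 1 - 1 / (1 + var)                                    # I3 smoothness
    mu3 = sum((i - m) ** 3 * p for i, p in enumerate(hist))  # I4 third moment
    U = sum(p ** 2 for p in hist)                            # I5 uniformity
    e = -sum(p * math.log2(p) for p in hist if p > 0)        # I6 entropy
    return m, sigma, R, mu3, U, e
```

For a flat two-level histogram `[0.5, 0.5]`, for example, the uniformity is 0.5 and the entropy is exactly 1 bit.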
Figure 3 Diagram of a Convolutional Neural Network. (Reprinted with permission from ref. [42]).
Performance measurements derived from the confusion matrix.
| Performance Metric | Equation |
|---|---|
| Sensitivity (Recall) | TP/(TP + FN) |
| Specificity | TN/(TN + FP) |
| Overall Accuracy | (TP + TN)/(TP + FP + TN + FN) |
| Precision | TP/(TP + FP) |
| F-Score | (2 × Precision × Recall)/(Precision + Recall) |
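The five metrics in the table follow directly from the confusion-matrix counts. A minimal sketch (the function name and the sample counts are hypothetical, chosen only for illustration):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Performance metrics derived from confusion-matrix counts,
    matching the equations in the table above."""
    recall = tp / (tp + fn)                       # sensitivity
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # overall accuracy
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "specificity": specificity,
            "accuracy": accuracy, "precision": precision,
            "f_score": f_score}

# Hypothetical counts for one class (e.g., "lesioned leaf"):
m = confusion_metrics(tp=90, fp=10, tn=85, fn=15)
```

With these counts the precision is 0.90 and the overall accuracy 0.875; in a multi-class setting the metrics are computed one-vs-rest per class.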
Performance of classifiers using the original testing data set.
| Algorithm | Overall Accuracy |
|---|---|
| SVM | 80.30% |
| NFC | 71.10% |
| ANN | 76.60% |
| KNN | 78.80% |
Figure 4 Frequency distribution of the manually labeled classes in the training and testing data sets.
Figure 5 Confusion matrices of the test data for (a) GoogleNet and (b) ResNet50.
Figure 6 Per-class performance report of the two CNN models: (a) GoogleNet and (b) ResNet50.
Figure 7 Comparison of Receiver Operating Characteristic (ROC) curves for each class for the CNN models of (a) GoogleNet and (b) ResNet50.
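The per-class ROC curves in Figure 7 are built one-vs-rest: the decision threshold is swept over the class scores, and each threshold yields one (false positive rate, true positive rate) point. A minimal sketch of this construction (function names are illustrative, not the authors' code; tied scores are handled naively):

```python
def roc_points(scores, labels):
    """One-vs-rest ROC points (FPR, TPR) obtained by sweeping the
    decision threshold down through the predicted scores."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```

A classifier that ranks every positive above every negative reaches an AUC of 1.0, the upper-left corner behavior visible in the paper's curves.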