Using Deep Learning for Image-Based Plant Disease Detection
Sharada P. Mohanty, David P. Hughes, Marcel Salathé
Abstract
Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale.
Keywords: crop diseases; deep learning; digital epidemiology; machine learning
Year: 2016 PMID: 27713752 PMCID: PMC5032846 DOI: 10.3389/fpls.2016.01419
Source DB: PubMed Journal: Front Plant Sci ISSN: 1664-462X Impact factor: 5.753
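The training pipeline the abstract describes (fine-tuning a deep convolutional network on leaf images labeled with one of 38 crop–disease classes) can be sketched as follows. This is a minimal sketch rather than the authors' setup: the paper trained AlexNet and GoogLeNet in Caffe, whereas this example assumes PyTorch/torchvision and a hypothetical `plantvillage/<class_name>/*.jpg` directory layout.

```python
# Minimal transfer-learning sketch (PyTorch, not the authors' Caffe setup).
# Assumes images are organized as plantvillage/<class_name>/*.jpg,
# one folder per crop-disease pair (38 classes in the PlantVillage dataset).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

NUM_CLASSES = 38  # 14 crop species, 26 diseases or "healthy"

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),                     # network's expected input size
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder("plantvillage", transform=preprocess)
n_train = int(0.8 * len(dataset))                      # 80-20 train-test split
train_set, test_set = random_split(dataset, [n_train, len(dataset) - n_train])
train_loader = DataLoader(train_set, batch_size=64, shuffle=True)

# Start from ImageNet weights (transfer learning) and replace the classifier
# head so it predicts the 38 crop-disease classes instead of 1000.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

model.train()
for epoch in range(30):                                # the paper trained for 30 epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```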
Figure 1. Example of leaf images from the PlantVillage dataset, representing every crop–disease pair used. (1) Apple Scab, Venturia inaequalis; (2) Apple Black Rot, Botryosphaeria obtusa; (3) Apple Cedar Rust, Gymnosporangium juniperi-virginianae; (4) Apple healthy; (5) Blueberry healthy; (6) Cherry healthy; (7) Cherry Powdery Mildew, Podosphaera clandestina; (8) Corn Gray Leaf Spot, Cercospora zeae-maydis; (9) Corn Common Rust, Puccinia sorghi; (10) Corn healthy; (11) Corn Northern Leaf Blight, Exserohilum turcicum; (12) Grape Black Rot, Guignardia bidwellii; (13) Grape Black Measles (Esca), Phaeoacremonium aleophilum and Phaeomoniella chlamydospora; (14) Grape healthy; (15) Grape Leaf Blight, Pseudocercospora vitis; (16) Orange Huanglongbing (Citrus Greening), Candidatus Liberibacter spp.; (17) Peach Bacterial Spot, Xanthomonas campestris; (18) Peach healthy; (19) Bell Pepper Bacterial Spot, Xanthomonas campestris; (20) Bell Pepper healthy; (21) Potato Early Blight, Alternaria solani; (22) Potato healthy; (23) Potato Late Blight, Phytophthora infestans; (24) Raspberry healthy; (25) Soybean healthy; (26) Squash Powdery Mildew, Erysiphe cichoracearum; (27) Strawberry healthy; (28) Strawberry Leaf Scorch, Diplocarpon earlianum; (29) Tomato Bacterial Spot, Xanthomonas campestris pv. vesicatoria; (30) Tomato Early Blight, Alternaria solani; (31) Tomato Late Blight, Phytophthora infestans; (32) Tomato Leaf Mold, Passalora fulva; (33) Tomato Septoria Leaf Spot, Septoria lycopersici; (34) Tomato Two-Spotted Spider Mite, Tetranychus urticae; (35) Tomato Target Spot, Corynespora cassiicola; (36) Tomato Mosaic Virus; (37) Tomato Yellow Leaf Curl Virus; (38) Tomato healthy.
Figure 2. Sample images from the three different versions of the PlantVillage dataset used in various experimental configurations. (A) Leaf 1, color; (B) Leaf 1, grayscale; (C) Leaf 1, segmented; (D) Leaf 2, color; (E) Leaf 2, grayscale; (F) Leaf 2, segmented.
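The three dataset versions differ only in preprocessing. The sketch below derives a grayscale and a naively segmented variant from a color leaf image; the paper's actual segmentation pipeline is not reproduced here, so the HSV threshold and the file names are illustrative assumptions.

```python
# Rough sketch of deriving the grayscale and segmented dataset versions
# from a color leaf image. The paper's segmentation was more careful;
# the HSV threshold below is an illustrative assumption.
import numpy as np
from PIL import Image

color = Image.open("leaf.jpg").convert("RGB")       # hypothetical file name

# Grayscale version: plain luminance conversion.
grayscale = color.convert("L")

# Naive segmented version: keep saturated, non-dark (leaf-like) pixels,
# zero out everything else (the background).
hsv = np.asarray(color.convert("HSV"))
mask = (hsv[..., 1] > 40) & (hsv[..., 2] > 40)
segmented = np.asarray(color).copy()
segmented[~mask] = 0

grayscale.save("leaf_grayscale.png")
Image.fromarray(segmented).save("leaf_segmented.png")
```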
Figure 3. Progression of mean F1 score and loss through training. The intensity of a particular class at any point is proportional to the corresponding uncertainty across all experiments with the particular configuration. (A) Comparison of progression of mean F1 score across all experiments, grouped by deep learning architecture; (B) comparison of progression of mean F1 score across all experiments, grouped by training mechanism; (C) comparison of progression of train loss and test loss across all experiments; (D) comparison of progression of mean F1 score across all experiments, grouped by train–test split; (E) comparison of progression of mean F1 score across all experiments, grouped by dataset type. A plot of all the individual observations across all experimental configurations can be found in the Supplementary Material.
Table 1. Mean F1 score {mean precision, mean recall, overall accuracy} across experimental configurations.

| Train–test split | Dataset type | Configuration 1 | Configuration 2 | Configuration 3 | Configuration 4 |
|---|---|---|---|---|---|
| Train 20%, Test 80% | Color | 0.9736 {0.9742, 0.9737, 0.9738} | 0.9118 {0.9137, 0.9132, 0.9130} | 0.9430 {0.9440, 0.9431, 0.9429} | |
| | Grayscale | 0.9361 {0.9368, 0.9369, 0.9371} | 0.8524 {0.8539, 0.8555, 0.8553} | 0.8828 {0.8842, 0.8835, 0.8841} | |
| | Segmented | 0.9724 {0.9727, 0.9727, 0.9726} | 0.8945 {0.8956, 0.8963, 0.8969} | 0.9377 {0.9388, 0.9380, 0.9380} | |
| Train 40%, Test 60% | Color | 0.9860 {0.9861, 0.9861, 0.9860} | 0.9555 {0.9557, 0.9558, 0.9558} | 0.9729 {0.9731, 0.9729, 0.9729} | |
| | Grayscale | 0.9584 {0.9588, 0.9589, 0.9588} | 0.9088 {0.9090, 0.9101, 0.9100} | 0.9361 {0.9364, 0.9363, 0.9364} | |
| | Segmented | 0.9812 {0.9814, 0.9813, 0.9813} | 0.9404 {0.9409, 0.9408, 0.9408} | 0.9643 {0.9647, 0.9642, 0.9642} | |
| Train 50%, Test 50% | Color | 0.9896 {0.9897, 0.9896, 0.9897} | 0.9644 {0.9647, 0.9647, 0.9647} | 0.9772 {0.9774, 0.9773, 0.9773} | |
| | Grayscale | 0.9661 {0.9663, 0.9663, 0.9663} | 0.9312 {0.9315, 0.9318, 0.9319} | 0.9507 {0.9510, 0.9507, 0.9509} | |
| | Segmented | 0.9867 {0.9868, 0.9868, 0.9869} | 0.9551 {0.9552, 0.9555, 0.9556} | 0.9720 {0.9721, 0.9721, 0.9722} | |
| Train 60%, Test 40% | Color | 0.9907 {0.9908, 0.9908, 0.9907} | 0.9724 {0.9725, 0.9725, 0.9725} | 0.9824 {0.9825, 0.9824, 0.9824} | |
| | Grayscale | 0.9686 {0.9689, 0.9688, 0.9688} | 0.9388 {0.9396, 0.9395, 0.9391} | 0.9547 {0.9554, 0.9548, 0.9551} | |
| | Segmented | 0.9855 {0.9856, 0.9856, 0.9856} | 0.9595 {0.9597, 0.9597, 0.9596} | 0.9740 {0.9743, 0.9740, 0.9745} | |
| Train 80%, Test 20% | Color | | | | |
| | Grayscale | | | | |
| | Segmented | | | | |

Each cell in the table represents the mean F1 score {mean precision, mean recall, overall accuracy} for the corresponding experimental configuration. The four configuration columns are the architecture and training-mechanism combinations (AlexNet and GoogLeNet, each trained from scratch and via transfer learning). In the original table, bold values mark the F1 scores of the best-performing models in the respective row/column.
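Each cell of the table combines four quantities computed on the held-out test set. The sketch below computes them from true and predicted labels, assuming macro-averaging of the per-class scores over the 38 classes; scikit-learn is used here for convenience and is not prescribed by the paper.

```python
# Computing the four per-cell quantities from test-set predictions:
# mean (macro-averaged) F1, mean precision, mean recall, overall accuracy.
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)

def cell_metrics(y_true, y_pred):
    return {
        "mean_f1": f1_score(y_true, y_pred, average="macro"),
        "mean_precision": precision_score(y_true, y_pred, average="macro"),
        "mean_recall": recall_score(y_true, y_pred, average="macro"),
        "overall_accuracy": accuracy_score(y_true, y_pred),
    }

# Example with toy labels for a 3-class problem:
print(cell_metrics([0, 1, 2, 2, 1], [0, 1, 2, 1, 1]))
```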
Figure 4. Visualization of activations in the initial layers of an AlexNet architecture, demonstrating that the model has learned to efficiently activate against the diseased spots on the example leaf. (A) Example image of a leaf suffering from Apple Cedar Rust, selected from the top 20 images returned by a Bing image search for the keywords "Apple Cedar Rust Leaves" on April 4, 2016. Image reference: Clemson University, USDA Cooperative Extension Slide Series, Bugwood.org. (B) Visualization of activations in the first convolution layer (conv1) of an AlexNet architecture trained using the AlexNet:Color:TrainFromScratch:80–20 configuration when doing a forward pass on the image shown in panel (A).
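The visualization in panel (B), the feature maps produced by the first convolution layer for a single input image, can be approximated with a forward hook. A minimal sketch, assuming a torchvision AlexNet (ImageNet weights standing in for a trained model, since the authors' Caffe models are not used here) and matplotlib:

```python
# Sketch: visualize conv1 activations of an AlexNet for one leaf image
# (PyTorch + matplotlib analogue of the paper's Caffe-based visualization).
import matplotlib.pyplot as plt
import torch
from PIL import Image
from torchvision import models, transforms

model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1).eval()

activations = {}
def save_activation(module, inputs, output):
    activations["conv1"] = output.detach()

# features[0] is the first convolution layer in torchvision's AlexNet.
model.features[0].register_forward_hook(save_activation)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
image = preprocess(Image.open("apple_cedar_rust.jpg").convert("RGB"))  # hypothetical path
with torch.no_grad():
    model(image.unsqueeze(0))        # the forward pass triggers the hook

# Tile the 64 conv1 feature maps into an 8x8 grid.
maps = activations["conv1"][0]
fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for ax, fmap in zip(axes.flat, maps):
    ax.imshow(fmap, cmap="viridis")
    ax.axis("off")
plt.show()
```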