Abdelmalek Bouguettaya, Hafed Zarzour, Ahmed Kechida, Amine Mohammed Taberkit.
Abstract
Agricultural crop productivity can be reduced by many factors, such as weeds, pests, and diseases. Traditional methods, based on ground machinery, handheld devices, and farmers' naked-eye inspection, face many limitations in terms of accuracy and the time required to cover large fields. Precision agriculture based on deep learning algorithms and Unmanned Aerial Vehicles (UAVs) currently provides an effective solution for agricultural applications, including plant disease identification and treatment. In the last few years, plant disease monitoring using UAV platforms has become one of the agricultural applications attracting the most interest from researchers. Accurate detection and treatment of plant diseases at early stages is crucial to improving agricultural production. To this end, this review analyzes recent advances in the use of computer vision techniques based on deep learning algorithms and UAV technologies to identify and treat crop diseases.
Keywords: Computer vision; Convolutional neural network; Deep learning; Plant disease; Precision agriculture; Unmanned Aerial Vehicles
Year: 2022 PMID: 35968221 PMCID: PMC9362359 DOI: 10.1007/s10586-022-03627-x
Source DB: PubMed Journal: Cluster Comput ISSN: 1386-7857 Impact factor: 2.303
Fig. 1Search strategy flowchart
Fig. 2Different agricultural UAV types
Most used UAVs in the literature for crop disease monitoring
| UAV brand | UAV category | Product name | References |
|---|---|---|---|
| DJI | Quadcopter | Phantom 3 Pro | Tetila et al. |
| | | Phantom 4 | Huang et al. |
| | | Sentinel 2 | Pan et al. |
| | | Phantom 4 RTK | Wu et al. |
| | | Phantom 4 Pro | Gomez Selvaraj et al.; Dang et al. |
| | | Matrice 100 | Su et al. |
| | Hexacopter | Matrice 600 | Wiesner-Hanks et al.; Görlich et al. |
| | | Matrice 600 Pro | Abdulridha et al. |
| | Octocopter | S1000 | Zhang et al. |
| 3D Robotics | Quadcopter | 3DR SOLO | Gomez Selvaraj et al. |
| | | 3DR IRIS+ | Duarte-Carvajalino et al. |
| ING Robotic Aviation | Helicopter | Responder | Théau et al. |
| Italdron | Quadcopter | 4HSE EVO | Di Nisio et al. |
| Feima Robotics | Quadcopter | Feima D200 | Qin et al. |
| Delair-Tech | Fixed-wing | DT-18 | Albetis et al. |
Pros and cons of agricultural UAV types
| UAV type | Pros | Cons |
|---|---|---|
| Rotary-wing | Easy to control and maneuver | Limited flying time |
| | Ability to hover | Low area coverage |
| | Vertical take-off and landing | Small payload capacity |
| | Very stable | High energy consumption |
| | Better spatial/temporal resolutions | |
| | Low cost | |
| | Accessibility to difficult areas | |
| Fixed-wing | High payload capacity | Expensive |
| | Long flying time | Requires a launcher for take-off |
| | Large area coverage | Difficult to land |
| | High speed | No hovering ability |
| | Lower energy consumption | |
| Hybrid VTOL | Long flying time | Expensive |
| | Large area coverage | No hovering ability |
| | High speed | |
| | Vertical take-off and landing ability | |
| | Relatively low energy consumption | |
Deep learning-based crop and plant diseases identification methods from UAV imagery
| References | Crop | Disease | Altitude/GSD (m) | Data type | Model | Data size | Input size | Performance |
|---|---|---|---|---|---|---|---|---|
| Tetila et al. | Soybean | Leaf Diseases | 2/– | RGB | Inception-v3 (FT 75%) | 3000 | 256 | 99.04% (accuracy) |
| | | | | | ResNet-50 (FT 75%) | | | 99.02% (accuracy) |
| | | | | | VGG-19 (FT 100%) | | | 99.02% (accuracy) |
| | | | | | Xception (FT 100%) | | | 98.56% (accuracy) |
| Gomez Selvaraj et al. | Banana | BBTD, BXW | 50 to 100/1 to 3 | RGB | VGG-16 | 3300 | 64 | 85% (accuracy) |
| | | | | | Custom CNN | | | 92% (accuracy) |
| Görlich et al. | Sugar beet | Cercospora Leaf Spot | – | RGB | FCN | – | 320 | 76% (precision), 83.87% (recall), 75.74% (F1) |
| Dang et al. | Radish | Fusarium Wilt | 3, 7, and 15/– | RGB | RadRGB | 1700 | 64 | 96.4% (accuracy), 0.043 s/image (testing time) |
| | | | | | Inception-v3 | | | 95.7% (accuracy), 0.1 s/image (testing time) |
| | | | | | VGG-16 | | | 93.1% (accuracy), 0.22 s/image (testing time) |
| Wu et al. | Pine | Pine Wilt Disease | 120/0.04 | RGB | Faster R-CNN (ResNet-50) | 476 | – | 60.2% (mAP), 134 MB (model size), 0.191 (FPS) |
| | | | | | Faster R-CNN (ResNet-101) | | | 62.2% (mAP), 208 MB (model size), 0.18 (FPS) |
| | | | | | YOLOv3 (DarkNet-53) | | | 64% (mAP), 241 MB (model size), 1.066 (FPS) |
| | | | | | YOLOv3 (MobileNet) | | | 63.2% (mAP), 95 MB (model size), 1.393 (FPS) |
| Hu et al. | Pine | – | – | RGB | Proposed approach (with Augmentor) | – | – | 46.4% (precision), 92.9% (recall), 61.9% (F1) |
| | | | | | Proposed approach (with DCGAN) | | | 56.5% (precision), 92.9% (recall), 70.3% (F1) |
| Hu et al. | Pine | – | – | RGB | AlexNet | 1486 | 64 | 39.1% (recall) |
| | | | | | VGGNet | | | 91.3% (recall) |
| | | | | | Inception-v3 | | | 73.7% (F1), 61.8% (precision), 91.3% (recall) |
| | | | | | AlexNet + AdaBoost | | | 71.2% (F1), 58.3% (precision), 91.3% (recall) |
| | | | | | VGGNet + AdaBoost | | | 76.9% (F1), 69% (precision), 87% (recall) |
| | | | | | Proposed method | | | 86.3% (F1), 78.6% (precision), 95.7% (recall) |
| Qin et al. | Pine | Pine Wood Nematode Disease | 150 to 200/0.1 to 0.125 | Multispectral | SCANet | 4862 | – | 79.33% (OA), 86% (precision), 91% (recall), 88.43% (F1) |
| | | | | | DeepLabV3+ | | | 56.62% (OA), 68% (precision), 77% (recall), 72.22% (F1) |
| | | | | | HRNet | | | 56.9% (OA), 75.66% (precision), 68.66% (recall), 72% (F1) |
| | | | | | DenseNet | | | 54.7% (OA), 64.33% (precision), 76.66% (recall), 70% (F1) |
| Yu et al. | Pine | Pine Wilt Disease | 100/12 | Multispectral | Faster R-CNN | 1905 | 800 | 60.98% (mAP), 113.43 MB (model size), 10.51 (FPS) |
| | | | | | YOLOv4 | | | 57.07% (mAP), 243.96 MB (model size), 25.55 (FPS) |
| Shi et al. | Potato | Late Blight Disease | 30/2.5 | Hyperspectral | CropdocNet | – | – | 98.2% (OA), 0.812 (Kappa), 721 ms (computing time) |
| | | | | | SVM | | | 82.7% (OA), 0.571 (Kappa), 162 ms (computing time) |
| | | | | | RF | | | 78.8% (OA), 0.615 (Kappa), 117 ms (computing time) |
| | | | | | 3D-CNN | | | 88.8% (OA), 0.771 (Kappa), 956 ms (computing time) |
| Abdulridha et al. | Tomato | Target Spot | 30/0.1 | Hyperspectral | MLP | – | – | TS: 97% (accuracy) |
| | | Bacterial Spot | | | | | | BS: 98% (accuracy) |
| Duarte-Carvajalino et al. | Potato | Late Blight | 30/0.008 | Multispectral | MLP (NIR-G-B) | 748,071 | 50 | 16.37 (MAE), 23.25 (RMSE), 0.47 (R2) |
| | | | | | MLP (NDVI) | | | 18.71 (MAE), 21.98 (RMSE), 0.44 (R2) |
| | | | | | MLP (Band Differences) | | | 13.23 (MAE), 16.28 (RMSE), 0.75 (R2) |
| | | | | | MLP (PCA) | | | 16.60 (MAE), 21.87 (RMSE), 0.48 (R2) |
| | | | | | SVR (Band Differences) | | | 17.34 (MAE), 21.06 (RMSE), 0.45 (R2) |
| | | | | | RF (Band Differences) | | | 12.96 (MAE), 16.15 (RMSE), 0.75 (R2) |
| | | | | | CNN (NIR-G-B) | | | 11.72 (MAE), 15.09 (RMSE), 0.74 (R2) |
| Kerkech et al. | Vineyard | Esca | 25/0.01 | RGB | CNN + YUV + ExGR | 70,560 | 16 | 95.92% (accuracy) |
| Wiesner-Hanks et al. | Vineyard | Mildew disease | 25/– | Multispectral | VddNet | – | 256 | 93.72% (accuracy) |
| | | | | | SegNet | | | 92.75% (accuracy) |
| | | | | | U-Net | | | 90.69% (accuracy) |
| | | | | | DeepLabv3+ | | | 88.58% (accuracy) |
| | | | | | PSPNet | | | 84.63% (accuracy) |
| Raj et al. | Vineyard | Mildew disease (leaf-level) | 25/0.01 | RGB | SegNet | 105,515 (RGB) | 360 | 85.13% (accuracy) |
| | | | | Infrared | | 98,895 (IR) | | 78.72% (accuracy) |
| | | | | Fusion AND | | | | 82.20% (accuracy) |
| | | | | Fusion OR | | | | 90.23% (accuracy) |
| | | Mildew disease (grapevine-level) | | RGB | | | | 94.41% (accuracy) |
| | | | | Infrared | | | | 89.16% (accuracy) |
| | | | | Fusion AND | | | | 88.14% (accuracy) |
| | | | | Fusion OR | | | | 95.02% (accuracy) |
| Wu et al. | Maize (Corn) | Northern Leaf Blight | 6/– | RGB | CNN (ResNet-34) | 6267 | 224 | 97.76% (accuracy), 97.85% (recall), 98.42% (precision) |
| Wiesner-Hanks et al. | Maize (Corn) | Northern Leaf Blight | 6/– | RGB | CNN (ResNet-34) + CRF | 18,222 | 224 | 99.79% (accuracy), 71.53 (F1) |
| Stewart et al. | Maize (Corn) | Northern Leaf Blight | 6/– | RGB | Mask R-CNN | 3000 | 512 | 96% at IoU = 0.5 (average precision) |
| Huang et al. | Wheat | Helminthosporium Leaf Blotch | 80/0.034 | RGB | Color Histogram + SVM | 246 | 100 | 85.92% (OA) |
| | | | | | LBPH + SVM | | | 65.10% (OA) |
| | | | | | VI + SVM | | | 87.65% (OA) |
| | | | | | Color Histogram + LBPH + VI + SVM | | | 90% (OA) |
| | | | | | CNN (LeNet) | | | 91.43% (OA) |
| Liu et al. | Wheat | Fusarium Head Blight | 60/0.04 | Hyperspectral | SVM | – | – | 95% (OA), 0.9 (Kappa) |
| | | | | | RF | | | 95% (OA), 0.9 (Kappa) |
| | | | | | BPNN | | | 98% (OA), 0.96 (Kappa) |
| Su et al. | Wheat | Yellow Rust | 20/0.013 | Multispectral | U-Net | 1600 | 256 | 91.3% (precision), 92.6% (recall), 92% (F1) |
| Zhang et al. | Wheat | Yellow Rust | 20/0.013 | Multispectral | Ir-UNet | – | 256 | 94.63% (precision), 95.15% (recall), 94.89% (F1), 97.13% (OA) |
| Pan et al. | Wheat | Yellow Rust | 30/0.007 | RGB | BPNN | 5580 | 256 | 86% (accuracy), 0.65 (Kappa) |
| | | | | | FCN | | | 90% (accuracy), 0.81 (Kappa) |
| | | | | | U-Net | | | 94% (accuracy), 0.89 (Kappa) |
| | | | | | PSPNet | | | 98% (accuracy), 0.96 (Kappa) |
Fig. 3Deep learning-based plant disease identification workflow
List of the most used metrics to evaluate deep learning-based computer vision models
| Metric | Formula | Description |
|---|---|---|
| Accuracy | (TP + TN) / (TP + TN + FP + FN) | The most commonly used evaluation metric; it measures how often the model makes the correct prediction in a classification task. |
| Recall (sensitivity) | TP / (TP + FN) | The recall rate indicates how many of the actual instances of a disease class the model correctly identifies. A higher recall rate means fewer false-negative predictions. |
| Precision | TP / (TP + FP) | The precision rate indicates how many of the samples predicted as a given disease class actually belong to it. A higher precision rate means fewer false-positive predictions. |
| F1-score (F-measure) | 2 × (Precision × Recall) / (Precision + Recall) | The harmonic mean of the precision and recall rates. |
| Average Precision (AP) | Area under the precision-recall curve | Evaluates the performance of an object detection model; it is calculated separately for each class. |
| Mean Average Precision (mAP) | (1/N) × Σ APᵢ over all N classes | The average of AP over all classes. With a single class, mAP and AP are identical. |
| Kappa coefficient | (p₀ − pₑ) / (1 − pₑ) | Measures agreement between predictions and ground truth beyond chance (p₀: observed agreement, pₑ: expected agreement by chance). |
| Intersection over Union (IoU) | Area of overlap / Area of union | Measures the overlap between the ground-truth region and the predicted region. |
| FPS | Frames processed per second | Measures the detection speed of the model. |
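As a concrete illustration, the classification metrics and IoU above can be computed in a few lines of Python. This is a minimal sketch for the binary (diseased vs. healthy) case; the function names are illustrative and not taken from any of the reviewed papers:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return accuracy, precision, recall, f1

def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Example: 1 TP, 1 TN, 1 FP, 1 FN
print(binary_metrics([1, 1, 0, 0], [1, 0, 0, 1]))  # → (0.5, 0.5, 0.5, 0.5)
```

In practice the papers surveyed here rely on library implementations (e.g. scikit-learn's `precision_score`, `recall_score`, and `cohen_kappa_score`), but the definitions reduce to these counts of true/false positives and negatives.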