| Literature DB >> 35693164 |
Yan Zhang, Shiyun Wa, Longxiang Zhang, Chunli Lv.
Abstract
The detection of plant disease is of vital importance in practical agricultural production. It scrutinizes the plant's growth and health condition and guarantees the regular operation and harvest of the agricultural planting to proceed successfully. In recent decades, the maturation of computer vision technology has provided more possibilities for implementing plant disease detection. Nonetheless, detecting plant diseases is typically hindered by factors such as variations in the illuminance and weather when capturing images and the number of leaves or organs containing diseases in one image. Meanwhile, traditional deep learning-based algorithms attain multiple deficiencies in the area of this research: (1) Training models necessitate a significant investment in hardware and a large amount of data. (2) Due to their slow inference speed, models are tough to acclimate to practical production. (3) Models are unable to generalize well enough. Provided these impediments, this study suggested a Tranvolution detection network with GAN modules for plant disease detection. Foremost, a generative model was added ahead of the backbone, and GAN models were added to the attention extraction module to construct GAN modules. Afterward, the Transformer was modified and incorporated with the CNN, and then we suggested the Tranvolution architecture. Eventually, we validated the performance of different generative models' combinations. Experimental outcomes demonstrated that the proposed method satisfyingly achieved 51.7% (Precision), 48.1% (Recall), and 50.3% (mAP), respectively. Furthermore, the SAGAN model was the best in the attention extraction module, while WGAN performed best in image augmentation. Additionally, we deployed the proposed model on Hbird E203 and devised an intelligent agricultural robot to put the model into practical agricultural use.Entities:
Keywords: Generative Adversarial Networks; deep learning; detection network; leaf images; plant disease detection; transformer
Year: 2022 PMID: 35693164 PMCID: PMC9178295 DOI: 10.3389/fpls.2022.875693
Source DB: PubMed Journal: Front Plant Sci ISSN: 1664-462X Impact factor: 6.627
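The abstract's "Tranvolution" idea of passing CNN features through Transformer-style self-attention can be illustrated with a toy, single-channel sketch. Everything below (the 3×3 valid convolution, one-dimensional tokens per spatial position, single-head attention, all parameter shapes) is an illustrative assumption for exposition, not the paper's actual architecture.

```python
import numpy as np

def conv2d(x, w):
    # Naive valid convolution, single channel in and out.
    H, W = x.shape
    k = w.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def self_attention(tokens, Wq, Wk, Wv):
    # Single-head scaled dot-product attention over all tokens.
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ V

def tranvolution_block(img, kernel, Wq, Wk, Wv):
    feat = conv2d(img, kernel)          # CNN feature map
    tokens = feat.reshape(-1, 1)        # each spatial position -> a token
    att = self_attention(tokens, Wq, Wk, Wv)
    return att.reshape(feat.shape)      # attention-refined feature map
```

A 6 × 6 input with a 3 × 3 kernel yields a 4 × 4 feature map, which attention refines without changing its shape.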
Figure 1 Dataset visualization. (A,D,I) Images on solid color backgrounds. (B,C,G) Images via color channel variation. (E,F) Images from practical production scenes. (H) Electronic document image.
Figure 2 Process of removing interferential leaf details.
Figure 3 Demonstration of the data augmentation methods. (A) Mixup; (B) Mosaic; (C) CutMix; (D) CutOut.
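Two of these augmentations are simple enough to sketch directly. The implementations below are generic illustrations (images as 2-D lists of pixel values, with λ and the patch coordinates chosen arbitrarily by the caller), not code from the paper.

```python
def mixup(img_a, img_b, lam):
    # Mixup: pixel-wise convex combination of two images with weight lam.
    return [[lam * a + (1 - lam) * b for a, b in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

def cutout(img, top, left, size, fill=0):
    # CutOut: overwrite a size x size patch with a constant fill value.
    out = [row[:] for row in img]
    for i in range(top, min(top + size, len(out))):
        for j in range(left, min(left + size, len(out[0]))):
            out[i][j] = fill
    return out
```

In practice Mixup also blends the two images' labels with the same λ; the sketch shows only the pixel operation.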
Figure 4 Structure of the Tranvolution detection network with GAN modules.
Algorithm of WGAN. α = 0.00005, c = 0.01, m = 64, n = 5.
| 1: Require: α, the learning rate; c, the clipping parameter; m, the batch size. |
| 2: Require: n, the number of critic iterations per generator iteration. |
| 3: Require: ω0, initial critic parameters; θ0, initial generator parameters. |
| 4: while θ has not converged do |
| 5: for t = 1, …, n do |
| 6: // critic update |
| 7: Sample a batch {x(i)}, i = 1, …, m, ∼ Pr from the real data |
| 8: Sample a batch {z(i)}, i = 1, …, m, ∼ p(z) of prior samples |
| 9: gω ← ∇ω [ (1/m) Σi fω(x(i)) − (1/m) Σi fω(gθ(z(i))) ] |
| 10: ω ← ω + α · RMSProp(ω, gω) |
| 11: ω ← clip(ω, −c, c) |
| 12: end for |
| 13: Sample a batch {z(i)}, i = 1, …, m, ∼ p(z) of prior samples |
| 14: gθ ← −∇θ (1/m) Σi fω(gθ(z(i))) |
| 15: θ ← θ − α · RMSProp(θ, gθ) |
| 16: end while |
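The loop above can be sketched numerically in one dimension. This toy assumes a linear critic f_ω(x) = ωx and a shift generator g_θ(z) = θ + z (neither is the paper's network), replaces RMSProp with plain gradient steps, and loosens the learning rates and clip c relative to the paper's values so the example converges in a few hundred iterations; the critic update drives θ toward the real data mean of 3.

```python
import numpy as np

def train_wgan_1d(iters=600, alpha_w=0.05, alpha_g=0.1, c=0.1,
                  m=64, n_critic=5, seed=0):
    rng = np.random.default_rng(seed)
    w, theta = 0.0, 0.0          # critic and generator parameters
    real_mean = 3.0              # real data distribution: N(3, 1)
    for _ in range(iters):
        for _ in range(n_critic):
            x = real_mean + rng.standard_normal(m)   # real batch
            z = rng.standard_normal(m)               # prior batch
            fake = theta + z
            # gradient of [mean f_w(x) - mean f_w(fake)] w.r.t. w
            g_w = x.mean() - fake.mean()
            w = w + alpha_w * g_w                    # ascend the critic
            w = float(np.clip(w, -c, c))             # weight clipping
        # generator step: ascend mean f_w(g_theta(z)); d/dtheta = w
        theta = theta + alpha_g * w
    return theta
```

After training, θ oscillates near the real mean, showing the critic's clipped gradient steering the generator.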
Figure 5 Flow chart of SAGAN.
Comparisons of different detection networks' performance (in %).
| Model | Input size | Pretraining dataset | Precision | Recall | mAP | FPS |
|---|---|---|---|---|---|---|
| MobileNet | 416 × 416 | COCO | - | - | 32.8 (Singh et al.) | - |
| MobileNet | 416 × 416 | COCO + PVD | - | - | 22.4 (Singh et al.) | - |
| Faster-RCNN-Inception-ResNet | 416 × 416 | iNaturalist | - | - | 36.1 (Singh et al.) | - |
| Faster-RCNN-Inception-ResNet | 416 × 416 | COCO | - | - | 38.9 (Singh et al.) | - |
| SSD | 300 × 300 | COCO | 37.9 | 39.4 | 38.3 | 44 |
| FSSD | 300 × 300 | COCO | 39.7 | 36.3 | 37.6 | 39 |
| RefineDet | 300 × 300 | COCO | 34.4 | 38.3 | 35.9 | 43 |
| EfficientDet | 416 × 416 | COCO | 42.1 | 39.2 | 39.7 | 35 |
| YOLO v3 | 608 × 608 | COCO | 39.7 | 39.4 | 39.5 | 88 |
| YOLO v4 | 608 × 608 | COCO | 41.4 | 39.5 | 38.1 | 87 |
| YOLO v5 | 608 × 608 | COCO | 45.0 | 38.6 | 41.7 | |
| Ours | 416 × 416 | COCO | 51.7 | 48.1 | 50.3 | 37 |
The values in red annotate the evaluation metrics of our model and indicate that they are the best among all of the models.
Figure 6 The ground truth in the dataset.
Figure 10 The detection results of our model on the dataset.
Results of different implementations of the GAN modules (in %).

| GAN modules (image augmentation + attention extraction) | Precision | Recall | mAP | FPS |
|---|---|---|---|---|
| No GAN (baseline) | 39.3 | 37.8 | 38.5 | 63 |
| WGAN + SAGAN | 51.7 | 48.1 | 50.3 | 37 |
| BAGAN + SAGAN | 49.8 | 49.1 | 49.3 | 37 |
| WGAN + SPA-GAN | 51.9 | 47.6 | 49.7 | 37 |
| BAGAN + SPA-GAN | 48.1 | 46.3 | 46.6 | 37 |
Figure 11 Illustration of the noise masks generated by different GAN models. (A) Feature maps generated by WGAN. (B) Feature maps generated by SAGAN.
Ablation experiment results for different pre-processing methods (in %). A ✓ marks a pre-processing method that was enabled.

| Pre-processing 1 | Pre-processing 2 | Pre-processing 3 | Pre-processing 4 | Precision | Recall | mAP |
|---|---|---|---|---|---|---|
| ✓ | ✓ | ✓ | ✓ | 51.7 | 48.1 | 50.3 |
| ✓ | ✓ | ✓ | | 51.2 | 48.9 | 50.5 |
| ✓ | ✓ | | ✓ | 50.4 | 48.3 | 49.8 |
| ✓ | | ✓ | ✓ | 50.4 | 48.4 | 49.8 |
| | ✓ | ✓ | ✓ | 51.7 | 48.2 | 50.3 |
Matrix multiplication algorithm.
| 1: Input: n × n matrices A and B |
| 2: Output: the product matrix C = A · B |
| 3: n = A.rows |
| 4: create a new n × n matrix C |
| 5: for i = 1 to n do |
| 6: for j = 1 to n do |
| 7: cij = 0 |
| 8: for k = 1 to n do |
| 9: cij = cij + aik · bkj |
| 10: return C |
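The triple-loop pseudocode above translates directly into Python. This is a plain illustrative implementation (not code from the paper), representing matrices as nested lists.

```python
def matmul(A, B):
    # Naive O(n^3) square-matrix multiplication, following the pseudocode.
    n = len(A)                          # n = A.rows
    C = [[0] * n for _ in range(n)]     # create a new n x n matrix C
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```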
Strassen algorithm.
| 1: Input: n × n matrices A and B, with n a power of 2 |
| 2: Output: the product matrix C = A · B |
| 3: n = A.rows |
| 4: create a new n × n matrix C |
| 5: if n == 1 then |
| 6: c11 = a11 · b11 |
| 7: else |
| 8: divide matrix A into 4 sub-matrices A11, A12, A21, A22 |
| 9: divide matrix B into 4 sub-matrices B11, B12, B21, B22 |
| 10: divide matrix C into 4 sub-matrices C11, C12, C21, C22 |
| 11: recursively compute the 7 Strassen products P1, …, P7 of (n/2) × (n/2) matrices |
| 12: C11 = P5 + P4 − P2 + P6; C12 = P1 + P2 |
| 13: C21 = P3 + P4; C22 = P5 + P1 − P3 − P7 |
| 14: combine the 4 sub-matrices into C |
| 15: return C |
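A compact Python sketch of the Strassen recursion for n a power of 2, again illustrative rather than the paper's deployment code. The seven products P1 through P7 are the standard Strassen combinations that reduce eight sub-matrix multiplications to seven.

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    n = len(A)
    if n == 1:                          # base case: scalar product
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # divide A and B into four h x h sub-matrices
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # the seven recursive products
    P1 = strassen(A11, sub(B12, B22))
    P2 = strassen(add(A11, A12), B22)
    P3 = strassen(add(A21, A22), B11)
    P4 = strassen(A22, sub(B21, B11))
    P5 = strassen(add(A11, A22), add(B11, B22))
    P6 = strassen(sub(A12, A22), add(B21, B22))
    P7 = strassen(sub(A11, A21), add(B11, B12))
    # recombine into the four quadrants of C
    C11 = add(sub(add(P5, P4), P2), P6)
    C12 = add(P1, P2)
    C21 = add(P3, P4)
    C22 = sub(sub(add(P5, P1), P3), P7)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

On resource-constrained hardware such as the Hbird E203, trading multiplications for additions in this way is the usual motivation for Strassen's method.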
Figure 12 Intelligent agricultural robot, with infrared distance measurement and multiple cameras deployed on the bottom.
Comparisons of our model's performance on different leaf-size sub-datasets (in %).

| Leaf size | Precision | Recall | mAP |
|---|---|---|---|
| Small | 38.9 | 32.7 | 47.8 |
| Medium | 73.8 | 67.5 | 70.6 |
| Large | 95.1 | 88.3 | 89.4 |