Jia Yao1,2, Yubo Wang1,2, Ying Xiang1,2, Jia Yang3, Yuhang Zhu4, Xin Li1,2, Shuangshuang Li1,2, Jie Zhang1,2, Guoshu Gong4.
Abstract
The prevention and management of crop diseases play an important role in agricultural production, but crop diseases are numerous and their causes complex, which makes their prevention and identification difficult. Traditional methods of identifying diseases rely mostly on human visual inspection, which requires a certain amount of expert knowledge and experience and suffers from strong subjectivity and low accuracy. This paper takes the common diseases of kiwifruit as its research object. Based on deep learning and computer vision models, and considering the influence of complex backgrounds in real scenes on disease detection, as well as the shape and size characteristics of the diseases, we propose a method combining target detection and semantic segmentation to identify diseases accurately. The main contributions of this research are as follows. We produced the world's first high-quality dataset of kiwifruit diseases. Using the target detection algorithm YOLOX, we stripped the kiwifruit leaves from the natural background, removing the interfering factors present in complex backgrounds. Building on the mainstream semantic segmentation networks UNet and DeepLabv3+, our experiments showed that the network with a ResNet101 backbone achieved the best results in identifying kiwifruit diseases, with an accuracy of 96.6%. We adopted learning rate decay during training to further improve the training effect without increasing the training cost. Experimental verification showed that our two-stage disease detection algorithm has the advantages of high accuracy, strong robustness, and a wide detection range, providing a more efficient solution for the precise monitoring of crop growth environment parameters.
Keywords: computer vision; deep learning; kiwifruit disease detection; smart agriculture
Year: 2022 PMID: 35336650 PMCID: PMC8949144 DOI: 10.3390/plants11060768
Source DB: PubMed Journal: Plants (Basel) ISSN: 2223-7747
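The two-stage pipeline described in the abstract first localizes leaves with YOLOX and then runs semantic segmentation on the cropped leaf regions. The intermediate cropping step can be sketched as follows; this is a minimal illustration assuming detections arrive as `(x1, y1, x2, y2)` pixel boxes, and `crop_detections` is a hypothetical helper, not a function from the paper:

```python
import numpy as np

def crop_detections(image, boxes):
    """Crop each detected leaf region out of the full image.

    image: H x W x C array; boxes: iterable of (x1, y1, x2, y2) pixel
    coordinates, as a detector such as YOLOX would produce.
    Returns one sub-image per box, clipped to the image bounds.
    """
    h, w = image.shape[:2]
    crops = []
    for x1, y1, x2, y2 in boxes:
        x1, x2 = max(0, int(x1)), min(w, int(x2))
        y1, y2 = max(0, int(y1)), min(h, int(y2))
        crops.append(image[y1:y2, x1:x2])
    return crops

# Stage 1: detector output (hypothetical boxes, for illustration only)
image = np.zeros((480, 640, 3), dtype=np.uint8)
boxes = [(10, 20, 110, 220), (300, 100, 640, 480)]
leaves = crop_detections(image, boxes)
# Stage 2 would run the segmentation network on each crop in `leaves`,
# so the segmenter never sees the complex natural background.
```

Cropping before segmentation is what removes the background clutter that otherwise degrades lesion segmentation.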
Figure 1. The original pictures in the dataset.
Figure 2. YOLOX network structure diagram.
Figure 3. DeepLabv3+ network structure diagram; ".." and "…" denote omitted repeated layers.
Figure 4. The axial attention module.
Figure 5. Overall processing flow of the network.
Figure 6. The mAP during training.
Figure 7. The loss on the validation dataset.
Figure 8. The difference between predicted values and ground-truth values.
Figure 9. Comparison of loss before and after improvement. (a) Loss of the original DeepLabv3+. (b) Loss of our improved DeepLabv3+.
Figure 10. Comparison of performance.
UNet series model comparison (loss function and learning rate decay strategy).

| Method | Dice Loss | Focal Loss | Cross Entropy Loss | Learning Rate Decay | Test Accuracy |
|---|---|---|---|---|---|
| UNet | √ | | | | 0.951 |
| Attention UNet | √ | | | | 0.950 |
| UNet++ | √ | | | | 0.953 |
| UNet++ | | √ | | | 0.952 |
| UNet++ | | | √ | | 0.953 |
| UNet++ | √ | | | Cosine Decay | 0.954 |
| UNet++ | √ | | | Noisy linear cosine decay | 0.956 |

√ indicates the strategy was used; a blank cell indicates it was not.
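The learning-rate decay schedules compared in the tables can be sketched as follows; these follow the commonly used definitions (half-cosine annealing, and the noisy linear cosine decay popularized by TensorFlow), with illustrative hyperparameter values rather than the paper's:

```python
import math
import random

def cosine_decay(initial_lr, step, decay_steps, alpha=0.0):
    """Anneal the learning rate along a half cosine, from initial_lr
    down to alpha * initial_lr over decay_steps steps."""
    t = min(step, decay_steps) / decay_steps
    cosine = 0.5 * (1.0 + math.cos(math.pi * t))
    return initial_lr * ((1.0 - alpha) * cosine + alpha)

def noisy_linear_cosine_decay(initial_lr, step, decay_steps,
                              num_periods=0.5, alpha=0.0, beta=0.001,
                              noise_std=0.05, rng=random):
    """Linear decay modulated by a cosine, with Gaussian noise added to
    the linear term so training can jitter out of shallow minima."""
    t = min(step, decay_steps) / decay_steps
    linear = (1.0 - t) + rng.gauss(0.0, noise_std)
    cosine = 0.5 * (1.0 + math.cos(math.pi * 2.0 * num_periods * t))
    return initial_lr * ((alpha + linear) * cosine + beta)

# Example schedule over 1000 steps starting at lr = 0.01:
# cosine_decay(0.01, 0, 1000) returns the full 0.01 and decays to 0.
```

Because the schedule only changes the step size, it improves the final result without adding any extra training cost, which matches the motivation stated in the abstract.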
Comparison of training strategies for the DeepLab series models.

| Method | Dice Loss | Focal Loss | Cross Entropy Loss | Learning Rate Decay | Test Accuracy |
|---|---|---|---|---|---|
| DeepLabV1 | √ | | | | 0.943 |
| DeepLabV2 | √ | | | | 0.951 |
| DeepLabV3 | √ | | | | 0.955 |
| DeepLabV3+ | √ | | | | 0.956 |
| DeepLabV3+ | | √ | | | 0.957 |
| DeepLabV3+ | | | √ | | 0.956 |
| DeepLabV3+ | | √ | | Noisy linear cosine decay | 0.959 |

√ indicates the strategy was used; a blank cell indicates it was not.
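The three loss functions compared above can be sketched for binary lesion masks as follows; these are the standard textbook formulations in a numpy sketch, with illustrative defaults for γ, the smoothing term, and ε (not values from the paper):

```python
import numpy as np

def dice_loss(pred, target, smooth=1.0):
    """1 - Dice coefficient; pred holds probabilities, target a 0/1 mask."""
    inter = np.sum(pred * target)
    return 1.0 - (2.0 * inter + smooth) / (np.sum(pred) + np.sum(target) + smooth)

def cross_entropy_loss(pred, target, eps=1e-7):
    """Mean binary cross entropy over all pixels."""
    p = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(p) + (1 - target) * np.log(1 - p))

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Cross entropy down-weighted for well-classified (easy) pixels."""
    p = np.clip(pred, eps, 1.0 - eps)
    pt = np.where(target == 1, p, 1.0 - p)  # probability of the true class
    return -np.mean((1.0 - pt) ** gamma * np.log(pt))

target = np.array([1.0, 1.0, 0.0, 0.0])
good = np.array([0.9, 0.8, 0.1, 0.2])  # mostly confident and correct
# The (1 - pt)^gamma factor shrinks the contribution of these easy
# pixels, so focal loss focuses the gradient on hard lesion boundaries.
```

Dice loss directly optimizes region overlap, which is why it is a common default for segmentation masks with class imbalance.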
Comparison of DeepLabv3+ model optimizations (backbone choice and attention gates).

| Base Model | Xception | MobileNet | ResNet101 | Attention Gates | Test Accuracy |
|---|---|---|---|---|---|
| DeepLabV3+ | √ | | | | 0.954 |
| DeepLabV3+ | | √ | | | 0.959 |
| DeepLabV3+ | | | √ | | 0.961 |
| DeepLabV3+ | | | √ | √ | 0.966 |

√ indicates the strategy was used; a blank cell indicates it was not.
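The "Attention Gates" optimization above refers to additive attention gating of skip connections in the style of Attention U-Net: a gating signal from the coarser decoder path reweights encoder features before they are merged. A minimal per-pixel numpy sketch, with random illustrative weights standing in for the learned 1×1 convolutions of a real network:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate: alpha = sigmoid(psi(relu(W_x x + W_g g))),
    out = alpha * x.  x: skip features (H, W, C); g: gating signal (H, W, Cg)."""
    q = np.maximum(x @ W_x + g @ W_g, 0.0)  # ReLU of the summed projections
    alpha = sigmoid(q @ psi)                # (H, W, 1) attention coefficients
    return alpha * x, alpha

# Illustrative shapes: an 8x8 map with 16 skip and 32 gating channels
x = rng.standard_normal((8, 8, 16))
g = rng.standard_normal((8, 8, 32))
W_x = rng.standard_normal((16, 8)) * 0.1
W_g = rng.standard_normal((32, 8)) * 0.1
psi = rng.standard_normal((8, 1)) * 0.1
gated, alpha = attention_gate(x, g, W_x, W_g, psi)
# `alpha` suppresses irrelevant spatial positions before the skip
# features are concatenated into the decoder.
```

Gating the skip features is what lets the decoder concentrate on lesion regions, which is consistent with the accuracy gain from 0.961 to 0.966 reported in the table.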