Jun Liu, Xuewei Wang, Wenqing Miao, Guoxu Liu.
Abstract
Tomato plants are affected by diseases and insect pests during growth, which reduces tomato yield and growers' economic returns. At present, tomato pests are detected mainly through manual collection and classification of field samples by professionals; this manual approach is expensive and time-consuming. Existing computer-based automatic pest detection methods require a simple background behind the pests and cannot localize them. To solve these problems, a tomato pest identification algorithm based on an improved YOLOv4 fused with a triplet attention mechanism (YOLOv4-TAM) was proposed within a deep learning framework, and the imbalance between positive and negative samples in the images was addressed by introducing a focal loss function. The K-means++ clustering algorithm is used to obtain a set of anchor boxes matched to the pest dataset. At the same time, a labeled dataset of tomato pests was established. The proposed algorithm was tested on this dataset, and the average recognition accuracy reached 95.2%. The experimental results show that the proposed method effectively improves the accuracy of tomato pest detection and outperforms previous methods. Algorithmic performance on practical images of healthy and unhealthy objects shows that the proposed method is feasible for the detection of tomato pests.
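The focal loss mentioned in the abstract down-weights easy examples so that the abundant easy negatives do not dominate training. A minimal NumPy sketch of the standard binary focal loss (Lin et al.'s formulation, not necessarily the authors' exact implementation; the `alpha` and `gamma` defaults are the commonly used values, chosen here as assumptions):

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p : predicted probabilities of the positive class, shape (N,)
    y : ground-truth labels in {0, 1}, shape (N,)
    Easy examples (p_t near 1) are scaled down by (1 - p_t)^gamma,
    so the many easy negatives contribute little to the total loss.
    """
    p = np.clip(p, 1e-7, 1 - 1e-7)            # avoid log(0)
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return float(np.mean(-alpha_t * (1 - p_t) ** gamma * np.log(p_t)))
```

With `gamma=0` and `alpha=1` this reduces to ordinary cross-entropy; increasing `gamma` shrinks the loss of well-classified examples.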
Keywords: YOLO; image processing; object detection; pests identification; tomato
Year: 2022 PMID: 35909759 PMCID: PMC9326248 DOI: 10.3389/fpls.2022.814681
Source DB: PubMed Journal: Front Plant Sci ISSN: 1664-462X Impact factor: 6.627
FIGURE 1 Network structure diagram of triplet attention (Song et al., 2018).
FIGURE 2 Network structure diagram of the proposed model.
FIGURE 3 The experimental step flow of the study.
FIGURE 4 The experimental image acquisition site.
Information on tomato pest dataset.
| Class | Pest class | Labeling quantity |
| 1 | Whitefly | 6327 |
| 2 | Aphid | 5687 |
| 3 | Leafminer | 6912 |
| 4 | Other | 6679 |
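Dataset-specific anchor boxes like those the abstract describes are typically obtained by clustering the labeled boxes' (width, height) pairs with k-means++ seeding and a 1 − IoU distance (the YOLO convention). A minimal sketch under those assumptions; the function names and parameters are illustrative, not from the paper:

```python
import numpy as np

def iou_wh(box, anchors):
    """IoU between one (w, h) box and an array of (w, h) anchors,
    with all boxes aligned at a common corner (YOLO convention)."""
    inter = np.minimum(box[0], anchors[:, 0]) * np.minimum(box[1], anchors[:, 1])
    union = box[0] * box[1] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeanspp_anchors(boxes, k, iters=50, seed=0):
    """Cluster (w, h) pairs: k-means++ seeding, then Lloyd iterations,
    using 1 - IoU as the distance between a box and an anchor."""
    rng = np.random.default_rng(seed)
    anchors = [boxes[rng.integers(len(boxes))]]
    for _ in range(k - 1):
        # k-means++: sample the next seed proportional to squared distance
        d = np.array([min(1 - iou_wh(b, np.array(anchors))) for b in boxes])
        w = d ** 2
        anchors.append(boxes[rng.choice(len(boxes), p=w / w.sum())])
    anchors = np.array(anchors, dtype=float)
    for _ in range(iters):
        # assign each box to the anchor it overlaps most, then re-center
        assign = np.array([np.argmax(iou_wh(b, anchors)) for b in boxes])
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors
```

Run on the labeled boxes of a dataset such as the one above, this yields k anchors whose shapes track the dominant pest sizes.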
FIGURE 5 Examples of input images used in this study.
Configuration of the experimental platform.
| Server | CPU: Intel i7-9800X |
| | GPU: GeForce GTX 1080 Ti |
| | Memory: Kingston 32 GB DDR4 |
| Software | Operating system: Ubuntu 18.04 |
| | Language: Python |
| | GCC 7.3.0 |
| | CUDA 10.0.130 |
| | OpenCV 3.4.5 |
Among these, CUDA was used for GPU acceleration, and OpenCV was used mainly to display images during testing.
FIGURE 6 Process of model training.
Comparison of training results of six models.
| Object detection algorithm | mAP (%) | FPS |
| Faster R-CNN | 68.7 | 9 |
| SSD | 72.3 | 43 |
| YOLOv3 | 73.6 | 71 |
| YOLOv4 | 87.1 | 82 |
| The proposed algorithm | 93.4 | 83 |
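The mAP figures in the table above are means of per-class average precision (AP). A minimal sketch of the standard all-point-interpolated AP computation, assuming detections have already been matched to ground truth at some IoU threshold (the exact matching protocol used by the paper is not given in this record):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP for one class: area under the interpolated precision-recall curve.

    scores : confidence score of each detection
    is_tp  : 1 if the detection matched a ground-truth box, else 0
    n_gt   : total number of ground-truth boxes of this class
    """
    order = np.argsort(-np.asarray(scores, dtype=float))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    recall = cum_tp / n_gt
    precision = cum_tp / (np.arange(len(tp)) + 1)
    # make precision monotonically non-increasing, then integrate over recall
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * precision))
```

mAP is then the mean of `average_precision` over all classes.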
Proportion of detection errors (%) for the six algorithms.
| Algorithm | False detections | Misdetection rate (%) |
| Faster R-CNN | 190 | 1.27 |
| SSD | 65 | 0.43 |
| YOLOv3 | 71 | 0.47 |
| YOLOv4 | 63 | 0.42 |
| The proposed algorithm | 54 | 0.36 |
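The misdetection rate is simply false detections divided by the total number of detections. The record does not state that total; the figures are roughly consistent with a denominator of about 15,000, but that value is an inference used here only as a hypothetical:

```python
def misdetection_rate(n_false, n_total):
    """Misdetection rate in percent: false detections / total detections."""
    return round(100.0 * n_false / n_total, 2)

# e.g. with a hypothetical total of 15,000 detections:
# misdetection_rate(54, 15000) -> 0.36
```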
Algorithmic performance on practical images of healthy and unhealthy objects.
| Pest class | AP (%) |
| Whitefly | 84.7 |
| Aphid | 83.9 |
| Leafminer | 62.7 |
| Other | 89.6 |
| mAP (%) | 78.1 |
FIGURE 7 Detection effect of practical images of healthy and unhealthy objects.