| Literature DB >> 35408304 |
Elżbieta Kubera, Agnieszka Kubik-Komar, Paweł Kurasiński, Krystyna Piotrowska-Weryszko, Magdalena Skrzypiec.
Abstract
Analysis of pollen material obtained from a Hirst-type apparatus is a tedious and labor-intensive process, usually performed by hand under a microscope by specialists in palynology. This research evaluated automatic analysis of pollen material based on digital microscopic photos. A deep neural network called YOLO was used to analyze microscopic images containing reference grains of three taxa typical of Central and Eastern Europe. YOLO networks perform recognition and detection jointly; hence, there is no need to segment the image before classification. The obtained results were compared to other deep learning object detection methods, i.e., Faster R-CNN and RetinaNet. YOLO outperformed the other methods, achieving a mean average precision (mAP@.5:.95) between 86.8% and 92.4% on the test sets included in the study. Among the difficulties related to correct classification of the research material, the following should be noted: the significant similarity of grains of the analyzed taxa, the possibility of their simultaneous occurrence in one image, and mutual overlapping of objects.
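The mAP@.5:.95 metric reported above is the COCO-style mean average precision: average precision is computed at ten IoU (intersection over union) thresholds from 0.50 to 0.95 in steps of 0.05 and then averaged. A minimal sketch of the two building blocks (not the paper's code; the full AP computation over precision–recall curves is omitted):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def map_50_95(ap_per_threshold):
    """mAP@.5:.95: mean of the AP values at IoU = 0.50, 0.55, ..., 0.95."""
    assert len(ap_per_threshold) == 10
    return sum(ap_per_threshold) / len(ap_per_threshold)
```

A predicted box counts as a true positive at a given threshold only if its IoU with a ground-truth box of the same class reaches that threshold, which is why the 0.95 end of the range is far stricter than the 0.5 end.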
Keywords: deep neural networks; object detection; pollen monitoring
Year: 2022 PMID: 35408304 PMCID: PMC9002382 DOI: 10.3390/s22072690
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Examples of microscopic images of (A) Alnus, (B) Betula, and (C) Corylus pollen grains.
Figure 2. An object detector is composed of input, backbone, neck, and head parts.
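The decomposition in Figure 2 can be sketched as a simple pipeline (an illustrative schematic, not the paper's implementation): the backbone extracts features from the input image, the neck fuses multi-scale feature maps, and the head emits class scores and bounding boxes.

```python
# Schematic one-stage detector pipeline. The three stages are passed in as
# callables; in YOLOv5 the backbone is a CSP network, the neck a PANet-style
# fusion module, and the head the box/class prediction layers.
def detect(image, backbone, neck, head):
    features = backbone(image)  # multi-scale feature maps from the input
    fused = neck(features)      # feature fusion across scales
    return head(fused)          # predicted boxes + class scores
```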
Figure 3. Mean and standard deviation of mAP@.5:.95 for three runs of each model evaluated on four different test datasets.
Values of mAP@.5:.95 for all investigated models.
| Network | Model | TestAll | TestVisible | TestMix | TestDiff |
|---|---|---|---|---|---|
| YOLOv5l | ModelVis | 90.6% | 90.0% | 88.4% | 75.5% |
| | | 91.3% | 91.3% | 89.7% | 74.0% |
| | | 90.8% | 90.8% | 87.7% | 76.0% |
| | ModelAll | 92.4% | 91.4% | 88.8% | 77.9% |
| | | 90.7% | 90.7% | 91.2% | 82.5% |
| | | 91.4% | 91.4% | 90.8% | 81.0% |
| | ModelAllVis | 90.3% | 89.4% | 89.0% | 80.1% |
| | | 90.2% | 90.2% | 88.8% | 82.7% |
| | | 92.2% | 92.2% | 89.8% | 79.6% |
| | ModelVisAll | 91.6% | 90.8% | 90.6% | 77.7% |
| | | 89.7% | 89.7% | 86.8% | 75.1% |
| | | 89.6% | 89.6% | 88.3% | 76.6% |
| YOLOv5s | ModelVis | 90.3% | 90.3% | 89.0% | 75.7% |
| | | 90.8% | 90.7% | 88.2% | 74.4% |
| | | 89.6% | 89.6% | 88.0% | 74.0% |
| | ModelAll | 90.0% | 88.7% | 88.8% | 81.2% |
| | | 90.5% | 89.5% | 88.7% | 81.6% |
| | | 91.6% | 90.5% | 91.5% | 78.9% |
| | ModelAllVis | 91.7% | 91.0% | 89.6% | 81.1% |
| | | 91.4% | 90.7% | 90.3% | 78.7% |
| | | 91.9% | 91.0% | 89.6% | 81.4% |
| | ModelVisAll | 91.5% | 91.4% | 89.6% | 73.5% |
| | | 90.1% | 90.0% | 86.9% | 73.8% |
| | | 90.5% | 90.4% | 88.8% | 75.6% |
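Each model above has three rows, one per training run, and Figure 3 plots the mean and standard deviation of these triples. A quick reproduction of those summary statistics from two of the table's TestAll triples (whether the paper uses the sample or population standard deviation is an assumption; the sample form is shown):

```python
import statistics

# mAP@.5:.95 over three runs on TestAll, taken from the table above.
runs = {
    "YOLOv5l ModelVis": [90.6, 91.3, 90.8],
    "YOLOv5l ModelAll": [92.4, 90.7, 91.4],
}
for model, values in runs.items():
    mean = statistics.mean(values)
    std = statistics.stdev(values)  # sample standard deviation (n - 1)
    print(f"{model}: mean = {mean:.2f}%, std = {std:.2f}%")
```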
Figure 4. Sample image from the testMix dataset with predicted bounding boxes: (A) ModelAll (YOLOv5s), with Betula grains incorrectly detected as a Corylus grain; (B) ModelVisAll (YOLOv5s), with a correctly detected Betula grain.
Figure 5. Sample image from the testDiff dataset containing only Betula grains, with predicted bounding boxes: (A) ModelAll (YOLOv5s); (B) ModelVisAll (YOLOv5s).
Precision and recall averaged over three repetitions for the testAll, testVisible, and testMix datasets.
| Network | Model | TestAll Precision | TestAll Recall | TestVisible Precision | TestVisible Recall | TestMix Precision | TestMix Recall |
|---|---|---|---|---|---|---|---|
| YOLOv5l | ModelVis | 94.7% | 97.2% | 94.1% | 97.6% | 94.4% | 90.0% |
| | ModelAll | 96.0% | 97.7% | 95.4% | 97.4% | 93.9% | 91.2% |
| | ModelAllVis | 96.4% | 98.7% | 95.6% | 98.9% | 94.3% | 92.9% |
| | ModelVisAll | 94.7% | 95.9% | 93.9% | 96.0% | 91.7% | 91.7% |
| YOLOv5s | ModelVis | 96.7% | 96.5% | 95.6% | 97.4% | 92.4% | 92.2% |
| | ModelAll | 97.4% | 98.1% | 94.9% | 98.1% | 92.6% | 92.5% |
| | ModelAllVis | 97.8% | 97.0% | 96.0% | 98.1% | 92.8% | 90.5% |
| | ModelVisAll | 97.5% | 97.8% | 96.8% | 97.1% | 92.2% | 89.7% |
| Minimum | | 94.7% | 95.9% | 93.9% | 96.0% | 91.7% | 89.7% |
| Maximum | | 97.8% | 98.7% | 96.8% | 98.9% | 94.4% | 92.9% |
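The Minimum and Maximum rows of the precision/recall table are column-wise extremes over the eight model rows, which can be checked directly (TestAll columns shown, with values copied from the table):

```python
# TestAll precision and recall for the eight models, in table order:
# YOLOv5l ModelVis/ModelAll/ModelAllVis/ModelVisAll, then the same for YOLOv5s.
testall_precision = [94.7, 96.0, 96.4, 94.7, 96.7, 97.4, 97.8, 97.5]
testall_recall = [97.2, 97.7, 98.7, 95.9, 96.5, 98.1, 97.0, 97.8]

print(min(testall_precision), max(testall_precision))  # matches 94.7% / 97.8%
print(min(testall_recall), max(testall_recall))        # matches 95.9% / 98.7%
```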
Confusion matrix of predictions for the testMix dataset averaged for all investigated models.
| Predicted \ True | Alnus | Betula | Corylus |
|---|---|---|---|
| Alnus | 99.0% | 9.4% | 0.5% |
| Betula | 0.0% | 80.4% | 2.7% |
| Corylus | 0.9% | 10.2% | 96.1% |
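The confusion matrix is normalized over true labels, so each column should sum to roughly 100% (rounding aside) and the diagonal gives per-taxon recall. A quick check using the values above (the taxa order Alnus, Betula, Corylus is assumed from Figure 1):

```python
# Rows: predicted class; columns: true class (Alnus, Betula, Corylus assumed).
matrix = [
    [99.0,  9.4,  0.5],
    [0.0,  80.4,  2.7],
    [0.9,  10.2, 96.1],
]

# Each true-label column sums to ~100%:
for j in range(3):
    print(f"column {j}: {sum(row[j] for row in matrix):.1f}%")

# The diagonal is per-taxon recall; Betula (80.4%) is the hardest class,
# consistent with the Betula-as-Corylus misdetections shown in Figure 4.
```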
Comparison of mAP@.5:.95 values for pollen grain detection by selected YOLOv5, RetinaNet, and Faster R-CNN models on the testAll dataset.
| Run | YOLOv5s | YOLOv5l | RetinaNet | Faster R-CNN |
|---|---|---|---|---|
| Run 1 | 90.0% | 90.7% | 75.5% | 55.7% |
| Run 2 | 90.5% | 91.4% | 81.1% | 44.2% |
| Run 3 | 91.6% | 92.4% | 82.0% | 53.0% |
| Average | 90.7% | 91.5% | 79.5% | 51.0% |
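The Average row of the comparison table is the plain mean of the three runs, which can be reproduced from the per-run values above:

```python
# Per-run mAP@.5:.95 on testAll, copied from the comparison table.
runs = {
    "YOLOv5s": [90.0, 90.5, 91.6],
    "YOLOv5l": [90.7, 91.4, 92.4],
    "RetinaNet": [75.5, 81.1, 82.0],
    "Faster R-CNN": [55.7, 44.2, 53.0],
}
for model, values in runs.items():
    print(f"{model}: {sum(values) / len(values):.1f}%")
```

The roughly 12-point gap to RetinaNet and 40-point gap to Faster R-CNN in these averages is what the abstract summarizes as YOLO outperforming the other detectors.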