Elżbieta Kubera, Agnieszka Kubik-Komar, Krystyna Piotrowska-Weryszko, Magdalena Skrzypiec.
Abstract
The risk of pollen-induced allergies can be determined and predicted based on data derived from pollen monitoring. Hirst-type samplers are sensors that detect airborne pollen grains and allow their number to be determined. Airborne pollen grains are deposited on adhesive-coated tape, from which slides are prepared that require further analysis by specialized personnel. Deep learning can be used to recognize pollen taxa from microscopic images. This paper presents a method for recognizing a taxon based on microscopic images of pollen grains, allowing the pollen monitoring process to be automated. In this research, a deep CNN (convolutional neural network) model was built from scratch. Publicly available deep neural network models, pre-trained on image data (not including microscopic images), were also used. The results show that even a simple deep learning model produces quite good results when pollen grain taxa are classified directly from the images. The best deep learning model achieved 97.88% accuracy in the difficult task of distinguishing three types of pollen grains (birch, alder, and hazel) with similar structures. The derived models can be used to build a system that supports pollen monitoring experts in their work.
Keywords: classification; deep neural networks; pollen monitoring
Year: 2021 PMID: 34069411 PMCID: PMC8159113 DOI: 10.3390/s21103526
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
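The abstract mentions a deep CNN built from scratch ("SimpleModel"), and Figure 3 indicates three convolutional layers with 4 × 4 filters. The sketch below is a hypothetical minimal architecture in that spirit, not the paper's actual SimpleModel; the channel counts, input size, and pooling choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    """Hypothetical from-scratch CNN in the spirit of the paper's SimpleModel:
    three convolutional layers with 4 x 4 filters (cf. Figure 3), followed by
    a linear classifier over the three taxa (Alnus, Betula, Corylus)."""

    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=4), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global average pooling -> 64 features
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = SimpleCNN()
logits = model(torch.randn(2, 3, 64, 64))  # batch of two RGB pollen-grain crops
```

A real training run would pair this with a cross-entropy loss over the three taxon labels.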
Figure 1. Examples of microscope images of pollen grains of the studied taxa: (a) Alnus; (b) Betula; (c) Corylus.
Figure 2. SimpleModel architecture.
Figure 3. The sets of 4 × 4 convolutional filters after 100 epochs of training SimpleModel: (a) first layer; (b) second layer; (c) third layer.
Figure 4. SimpleModel accuracy and loss over the training time.
Classification results for the transfer learning models.
| Type | Model | Feature Extraction | Fine-Tuning |
|---|---|---|---|
| Type1 | Orig_AlexNet | 90.62% | 96.09% |
| Type1 | Orig_ResNet | 90.62% | 98.44% |
| Type1 | Orig_VGG | NA 1 | 97.66% |
| Type1 | Orig_SqueezeNet | NA 1 | 99.22% |
| Type1 | Orig_DenseNet | NA 1 | 99.22% |
| Type1 | Orig_InceptionV3 | NA 1 | 99.22% |
| Type2 | ScratchPollen13K_AlexNet | 51.56% | 85.94% |
| Type2 | ScratchPollen13K_ResNet | 69.53% | 92.19% |
| Type3 | FinetunedPollen13K_AlexNet | 88.28% | 94.53% |
| Type3 | FinetunedPollen13K_ResNet | 89.84% | 98.44% |
1 No experiment was conducted.
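The table contrasts two transfer-learning regimes: feature extraction (the pre-trained backbone is frozen and only a new classification head is trained) and fine-tuning (all weights remain trainable). A minimal sketch of that distinction, using a toy stand-in backbone rather than the actual pre-trained networks from the paper:

```python
import torch.nn as nn

def make_transfer_model(backbone: nn.Module, num_classes: int = 3,
                        finetune: bool = False) -> nn.Module:
    """Sketch of the two regimes compared in the table: with finetune=False
    (feature extraction) the backbone weights are frozen; with finetune=True
    all weights stay trainable. A new head is attached in both cases."""
    for p in backbone.parameters():
        p.requires_grad = finetune
    return nn.Sequential(backbone, nn.Flatten(), nn.LazyLinear(num_classes))

# Toy stand-in for a pre-trained backbone; a real run would load e.g. AlexNet.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
fx_model = make_transfer_model(backbone, finetune=False)  # feature extraction
```

Feature extraction trains far fewer parameters and is therefore faster, which is consistent with the generally lower accuracies it shows in the table.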
Figure 5. Training time for the individual models.
Results of 3 × 10 CV (three repetitions of 10-fold cross-validation) of the models pre-trained on ImageNet images and fine-tuned on ABCPollen microscopic images.
| Model | Average Accuracy | Std Dev of Accuracy |
|---|---|---|
| Orig_AlexNet | 91.78% | 0.00653 |
| Orig_ResNet | 97.61% | 0.00260 |
| Orig_VGG | 97.88% | 0.00173 |
| Orig_SqueezeNet | 97.21% | 0.00201 |
| Orig_DenseNet | 97.71% | 0.00357 |
| Orig_InceptionV3 | 97.49% | 0.00387 |
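The average accuracy and standard deviation in the table summarize the per-fold scores of the repeated cross-validation (3 repetitions of 10-fold CV give 30 fold scores per model). A minimal sketch of that aggregation step; the fold scores below are illustrative, not the paper's data:

```python
import statistics

def summarize_cv(fold_accuracies):
    """Aggregate per-fold accuracies from repeated cross-validation into the
    mean and (sample) standard deviation reported in the table."""
    return statistics.mean(fold_accuracies), statistics.stdev(fold_accuracies)

# Illustrative scores only; a real 3 x 10 CV run yields 30 values per model.
fold_scores = [0.97, 0.98, 0.99]
mean_acc, sd_acc = summarize_cv(fold_scores)
```

Reporting the standard deviation alongside the mean, as the table does, shows how stable each model's accuracy is across folds.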