Adedotun Akintayo, Gregory L. Tylka, Asheesh K. Singh, Baskar Ganapathysubramanian, Arti Singh, Soumik Sarkar.
Abstract
In order to identify and control the menace of destructive pests via microscopic image-based identification, a state-of-the-art deep learning architecture is demonstrated on a parasitic worm, the soybean cyst nematode (SCN), Heterodera glycines. Soybean yield loss is negatively correlated with the density of SCN eggs present in the soil. While there has been progress in automating the extraction of egg-filled cysts and eggs from soil samples, counting SCN eggs obtained from soil samples using computer vision techniques has proven to be an extremely difficult challenge. Here we show that a deep learning architecture developed for rare object identification in clutter-filled images can identify and count SCN eggs. The architecture is trained with expert-labeled data to build an effective machine learning model for quantifying SCN eggs via microscopic image analysis. We show dramatic improvements in the time needed to quantify eggs while maintaining human-level accuracy and avoiding inter-rater and intra-rater variability. The nematode eggs are correctly identified even in complex, debris-filled images that experts often find difficult to assess quickly. Our results illustrate the remarkable promise of applying deep learning approaches to phenotyping for pest assessment and management.
Year: 2018 | PMID: 29904135 | PMCID: PMC6002363 | DOI: 10.1038/s41598-018-27272-w
Source DB: PubMed | Journal: Sci Rep | ISSN: 2045-2322 | Impact factor: 4.379
Figure 1. Approach overview showing the workflow that leads to the automated quantification process as an alternative to human expert quantification, which suffers from intra-rater and inter-rater variabilities.
Figure 2. Deep convolutional selective autoencoder architecture for rare object detection from images, with application to SCN egg detection in cluttered microscopic images.
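The paper's exact layer configuration is not reproduced in this record; the following is a minimal sketch of a convolutional autoencoder of the kind Figure 2 depicts, assuming 32x32 grayscale input patches. Patch size, channel counts, and layer depths are illustrative choices, not the paper's values.

```python
import torch
import torch.nn as nn

class ConvSelectiveAutoencoder(nn.Module):
    """Sketch of a convolutional autoencoder for patch reconstruction."""
    def __init__(self):
        super().__init__()
        # Encoder: convolution + max-pooling compresses each patch.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        # Decoder: upsampling + convolution reconstructs the patch.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),          # 8x8 -> 16x16
            nn.Conv2d(32, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2),          # 16x16 -> 32x32
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),                         # reconstruction in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```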
Figure 3. Sample detection results with images from diverse environments and different background staining for: (I) a set B1 example, (II) a set B2 example, (III) a set B13 example, and (IV) a set B9 example. The dark purple boxes indicate highly confident detections, the light purple boxes indicate low-confidence detections, the red box indicates occluded eggs, and the green box marks a false alarm; low intensity in the reconstruction suggests low detection confidence of the model.
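The caption's mapping from reconstruction intensity to detection confidence could be realized as a simple thresholding step, sketched below; the threshold values and function name are illustrative placeholders, not values from the paper.

```python
import numpy as np

def confidence_band(reconstruction, hi=0.7, lo=0.3):
    """Map the mean reconstruction intensity of one patch to a
    confidence band; thresholds hi/lo are assumed, not published."""
    m = float(np.mean(reconstruction))
    if m >= hi:
        return "high-confidence detection"   # dark purple box
    if m >= lo:
        return "low-confidence detection"    # light purple box
    return "no detection"
```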
Figure 4. Statistics of transfer learning: testing results on spring 2016 image frames (2400 in total) with a model trained on fall 2015 frames (644 in total) for (a) the ‘less-cluttered’ group containing 24 sets labeled {A1, …, A24} of 50 frames each and (b) the ‘high-cluttered’ group containing 24 sets labeled {B1, …, B24} of 50 frames each. The error bars are drawn as +/−5% around the human counts for each image set, and (c) shows the distributions of the machine and human counts for the ‘high-cluttered’ and ‘less-cluttered’ results.
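A minimal sketch of the per-set error-bar check these plots imply, assuming machine and human counts are supplied as arrays; the function name and the per-set application of the 5% margin are this sketch's reading of the caption.

```python
import numpy as np

def within_error_bar(machine_counts, human_counts, margin=0.05):
    """Return a boolean mask of image sets whose machine count falls
    within +/- margin (here 5%) of the corresponding human count."""
    machine = np.asarray(machine_counts, dtype=float)
    human = np.asarray(human_counts, dtype=float)
    return np.abs(machine - human) <= margin * human
```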
Table 1. Performance metrics of the algorithm on the spring 2016 test sets: the ‘less-cluttered’ group of 24 sets labeled {A1, …, A24} with 50 frames each, the ‘high-cluttered’ group of 24 sets labeled {B1, …, B24} with 50 frames each, and the aggregate performance over all 2400 test images.
| Adapted metric | Formulae | ‘Less-cluttered’ group | ‘High-cluttered’ group | Aggregate |
|---|---|---|---|---|
| Average detection accuracy | | | | |
| Average alarm-to-egg ratio | | | | |
| Average miss-to-egg ratio | | | | |
| Average precision | | | | |
| F1-score | | 0.943 | 0.949 | 0.944 |
The error margin is found by taking 5% (i.e., the upper bound of the error bar) of the total human count for all image sets in each category.
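The adapted formulae in the table were rendered as images and did not survive extraction. For orientation only, the standard definitions of the named quantities, in terms of true positives (TP), false alarms (FP), and misses (FN), are given below; the paper's adapted versions may differ.

```latex
\begin{aligned}
\text{detection accuracy (recall)} &= \frac{TP}{TP + FN}, &
\text{alarm-to-egg ratio} &= \frac{FP}{TP + FN}, \\
\text{miss-to-egg ratio} &= \frac{FN}{TP + FN}, &
\text{precision} &= \frac{TP}{TP + FP}, \\
F_1 &= \frac{2 \cdot \text{precision} \cdot \text{recall}}
             {\text{precision} + \text{recall}}
\end{aligned}
```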
Figure 5. MATLAB GUI development of the mobile app, which annotates the highlighted image in (a) based on expert information and produces the result in (b).
Figure 6. Convolutional selective autoencoder training architecture, with a key describing the abbreviations in the architecture and the image transformations implemented by the named core layers of the convolutional autoencoder.
Figure 7. Patch-wise frames selected from a video of the progression of an SCN egg example. Selectivity is seen to have superior properties to pixel-wise semantic segmentation for solving the similarity problem between SCN eggs and debris.
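Under the reading that selectivity means reconstructing only patches that contain a complete egg and blanking all others (debris, partial eggs), the training-target construction could look like the sketch below; the full-egg rule and function name are assumptions of this sketch.

```python
import numpy as np

def selective_target(patch, contains_full_egg):
    """Build the training target for one patch: keep the patch if it
    contains a complete egg, otherwise black it out entirely."""
    return patch if contains_full_egg else np.zeros_like(patch)
```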
Figure 8. Algorithm implementation highlighting the workflow for training the network (dashed box, top level) via forward activation and backpropagation, and for testing the trained network (dashed box, bottom level) on an unseen test frame.
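Putting the pieces together, the Figure 8 workflow might look like the sketch below, reusing the ConvSelectiveAutoencoder sketched after Figure 2 and assuming a DataLoader train_loader yielding batches of (patch, selective target) pairs plus a batch of test_patches; the epoch count, learning rate, and MSE loss are illustrative choices, not the paper's settings.

```python
import torch

model = ConvSelectiveAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Training (top level of Figure 8): forward activation + backpropagation.
for epoch in range(10):
    for patch, target in train_loader:        # batches shaped (N, 1, 32, 32)
        optimizer.zero_grad()
        loss = loss_fn(model(patch), target)  # compare reconstruction to
        loss.backward()                       # the selective target
        optimizer.step()

# Testing (bottom level of Figure 8): run the trained network on
# patches from an unseen frame; bright reconstructions flag eggs.
with torch.no_grad():
    reconstruction = model(test_patches)
```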