| Literature DB >> 33725183 |
Cedar Warman, John E Fowler.
Abstract
KEY MESSAGE: Advances in deep learning are providing a powerful set of image analysis tools that are readily accessible for high-throughput phenotyping applications in plant reproductive biology. High-throughput phenotyping systems are becoming critical for answering biological questions on a large scale. These systems have historically relied on traditional computer vision techniques. However, neural networks and specifically deep learning are rapidly becoming more powerful and easier to implement. Here, we examine how deep learning can drive phenotyping systems and be used to answer fundamental questions in reproductive biology. We describe previous applications of deep learning in the plant sciences, provide general recommendations for applying these methods to the study of plant reproduction, and present a case study in maize ear phenotyping. Finally, we highlight several examples where deep learning has enabled research that was previously out of reach and discuss the future outlook of these methods.
Keywords: Computer vision; Deep learning; Neural network; Phenotyping; Reproduction
Year: 2021 PMID: 33725183 PMCID: PMC8128740 DOI: 10.1007/s00497-021-00407-2
Source DB: PubMed Journal: Plant Reprod ISSN: 2194-7953 Impact factor: 3.767
Fig. 1 Leveraging deep learning for high-throughput phenotyping in plant reproduction. a Potential imaging targets for high-throughput phenotyping of reproductive systems can range from the whole plant to individual organs to microscopic structures, both in vivo and in vitro. b General deep learning strategies for image analysis. In classification, each image is assigned a class (here, germinated versus ungerminated pollen). When a single image contains more than one object, object detection can be used, a method that identifies objects and their classes with bounding boxes. Semantic segmentation identifies the class of objects in an image at the pixel level, allowing for the identification of object attributes like shape and area. As with semantic segmentation, instance segmentation identifies pixel classes; in addition, it differentiates multiple instances of the same object class that are touching or overlapping. c Conceptual steps for implementing deep learning models
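Object detection predictions like those in panel b are conventionally matched to ground-truth annotations using intersection-over-union (IoU), the ratio of the overlapping area of two bounding boxes to the area of their union. The paper does not give code; the following is a minimal stdlib sketch of the standard metric, with an illustrative function name:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned bounding boxes.

    Each box is given as (x_min, y_min, x_max, y_max). Returns a value
    in [0, 1]: 0 for disjoint boxes, 1 for identical boxes.
    """
    # Corners of the intersection rectangle
    ix_min = max(box_a[0], box_b[0])
    iy_min = max(box_a[1], box_b[1])
    ix_max = min(box_a[2], box_b[2])
    iy_max = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5 and the predicted class matches.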
Fig. 2 Maize ear phenotyping as an example deep learning workflow. a First, a rotational scanning system creates a flat projection of the surface of an ear. Fluorescent kernel markers are visible in this projection, signifying the presence of a genetically engineered transposable element insertion in a gene of interest. The ratio of fluorescent (mutant) kernels to non-fluorescent (wild-type) kernels can be tracked to screen for non-Mendelian inheritance of the mutant alleles. b Next, a training set of 300 projections with manually assigned bounding boxes labeling each kernel (corners marked by green circles) was generated. A transfer learning approach and the Tensorflow Object Detection API were then used to create a model based on the training dataset. c Model inference on the independent test set generates bounding boxes predicting the locations of the objects of interest in the image. Blue boxes signify non-fluorescent kernels and green boxes signify fluorescent kernels. d A comparison between model predictions and manual counts for fluorescent and non-fluorescent kernels (160 ear projections) was used to validate the model
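The screen in panel a asks whether the fluorescent:non-fluorescent kernel ratio deviates from the 1:1 Mendelian expectation. A standard way to test this for a single ear's counts is a chi-square goodness-of-fit test; the sketch below is a minimal stdlib illustration (the function name is ours, not from the paper), compared against 3.841, the 0.05 critical value of the chi-square distribution with one degree of freedom:

```python
def chi_square_1to1(n_fluorescent, n_nonfluorescent):
    """Chi-square goodness-of-fit statistic for observed kernel counts
    against a 1:1 (Mendelian) expectation."""
    total = n_fluorescent + n_nonfluorescent
    expected = total / 2  # 1:1 expectation splits the total evenly
    return sum((obs - expected) ** 2 / expected
               for obs in (n_fluorescent, n_nonfluorescent))

def deviates_from_mendelian(n_fluorescent, n_nonfluorescent, critical=3.841):
    """True if the counts deviate from 1:1 at alpha = 0.05 (df = 1)."""
    return chi_square_1to1(n_fluorescent, n_nonfluorescent) > critical
```

With automated kernel counts from the detection model, this test can be applied across many ears to flag candidate alleles showing transmission defects.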