Jose F. Ruiz-Munoz, Jyothier K. Nimmagadda, Tyler G. Dowd, James E. Baciak, Alina Zare.
Abstract
PREMISE: High-resolution cameras are valuable for plant phenotyping because their images enable tasks such as discriminating targets from background and measuring and analyzing fine above-ground plant attributes. However, acquiring high-resolution images of plant roots is more challenging than above-ground data collection. An effective super-resolution (SR) algorithm is therefore needed to overcome the resolution limitations of sensors, reduce storage space requirements, and boost the performance of subsequent analyses.
Keywords: convolutional neural networks; generative adversarial networks; plant phenotyping; root phenotyping; super resolution
Year: 2020 PMID: 32765973 PMCID: PMC7394708 DOI: 10.1002/aps3.11374
Source DB: PubMed Journal: Appl Plant Sci ISSN: 2168-0450 Impact factor: 1.936
Figure 3. Examples of plant‐root images used to train super‐resolution models. (A, B) Arabidopsis thaliana and (C, D) wheat (Triticum aestivum) roots shown as RGB (A, C) and grayscale (B, D) images. (E, F) Barley (Hordeum vulgare) roots shown as RGB (E) and magnetic resonance (F) images.
Figure 1. Stages of the super‐resolution (SR) experiments, showing the SR models (FSRCNN and SRGAN) and their constituent parts (left) and the segmentation model, SegRoot (right).
Figure 2. Images demonstrating that the signal‐to‐noise ratio and the visual quality of an image are not always directly correlated.
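Figure 2's point is that SNR and perceived visual quality can diverge. As background for reading the SNR column in the table below, here is a minimal sketch of a decibel-scale SNR between a high-resolution reference and its reconstruction; the exact formula used in the paper is not reproduced in this entry, so this standard power-ratio definition is an assumption, and the pixel values are toy data:

```python
import math

def snr_db(reference, reconstruction):
    # SNR in decibels: 10 * log10(signal power / error power).
    # Assumed standard definition; higher is better, but as Figure 2
    # illustrates, a higher SNR does not guarantee better visual quality.
    signal_power = sum(r * r for r in reference)
    error_power = sum((r - x) ** 2 for r, x in zip(reference, reconstruction))
    return 10.0 * math.log10(signal_power / error_power)

# Toy flattened pixel intensities (not from the paper's data set)
hr = [0.20, 0.80, 0.50, 0.90]   # hypothetical high-resolution reference
sr = [0.25, 0.75, 0.55, 0.85]   # hypothetical super-resolved output
print(round(snr_db(hr, sr), 2))
```

A per-pixel metric like this rewards smooth, averaged reconstructions, which is one reason GAN-based models such as SRGAN can look sharper while scoring similarly on SNR.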
Evaluation of super‐resolution models using a data set of soybean (Glycine max) root images. The signal‐to‐noise ratios (SNRs) and intersection over union (IoU) means are presented (standard error in parentheses).
| Model | SNR (SE) | IoU (SE) |
|---|---|---|
| Bicubic | 28.30 (1.37) | 0.0984 (0.0098) |
| FSRCNN‐DIV2K | 32.60 (0.19) | 0.1313 (0.0106) |
| FSRCNN‐91‐image | | 0.1419 (0.0108) |
| FSRCNN‐roots | | 0.1623 (0.0111) |
| FSRCNN‐91‐image&roots | 32.48 (0.19) | |
| SRGAN‐DIV2K | 32.48 (0.19) | 0.1402 (0.0106) |
| SRGAN‐91‐image | 32.47 (0.19) | 0.1327 (0.0107) |
| SRGAN‐roots | 32.71 (0.19) | 0.1485 (0.0108) |
| SRGAN‐91‐image&roots | 32.66 (0.20) | 0.1536 (0.0108) |
| SRGAN‐MULDIS | | 0.1415 (0.0108) |
| HR | — | 0.2003 (0.0122) |
Note: the bicubic and high‐resolution (HR) rows give the lower and upper performance bounds, respectively; in the original table these rows are shaded gray, and boldfaced values mark the models with the highest performance.
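The IoU column above scores how well SegRoot's segmentation of each super-resolved image overlaps the ground-truth root mask. A minimal sketch of intersection over union on flattened binary masks, using toy masks rather than the paper's soybean data:

```python
def iou(pred, truth):
    # Intersection over union of two binary masks given as flat 0/1 lists.
    # IoU = |pred AND truth| / |pred OR truth|; 1.0 means perfect overlap.
    intersection = sum(p and t for p, t in zip(pred, truth))
    union = sum(p or t for p, t in zip(pred, truth))
    # Convention: two empty masks overlap perfectly.
    return intersection / union if union else 1.0

pred  = [1, 1, 0, 0, 1, 0]  # hypothetical predicted root pixels
truth = [1, 0, 0, 1, 1, 0]  # hypothetical ground-truth root pixels
print(iou(pred, truth))  # intersection 2, union 4 -> 0.5
```

Because roots occupy only a small fraction of each image, IoU is a stricter measure than pixel accuracy here, which is consistent with the modest absolute values in the table even for the HR upper bound.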
Figure 4. Super‐resolution and segmentation example images (128 × 64‐pixel size) from the soybean (Glycine max) data set. From top to bottom: (A) ground‐truth image, high‐resolution (HR) image, and segmentation on the HR image; (B) bicubic image and its segmentation; (C) output of the FSRCNN‐91‐image model and its segmentation; (D) output of the SRGAN‐MULDIS model and its segmentation; and (E) output of the FSRCNN‐91‐image&roots model and its segmentation.