| Literature DB >> 32158494 |
Helin Dutagaci, Pejman Rasti, Gilles Galopin, David Rousseau.
Abstract
BACKGROUND: The production and availability of annotated data sets are indispensable for the training and evaluation of automatic phenotyping methods. The need for complete 3D models of real plants with organ-level labeling is even more pronounced due to advances in 3D vision-based phenotyping techniques and the difficulty of fully annotating the intricate 3D plant structure.
Keywords: Database; Machine learning; Rosebush; Segmentation; X-ray
Year: 2020 PMID: 32158494 PMCID: PMC7057657 DOI: 10.1186/s13007-020-00573-w
Source DB: PubMed Journal: Plant Methods ISSN: 1746-4811 Impact factor: 4.993
Number of voxels in the models (also the number of points in the corresponding point clouds)
| Model ID | # Thresholded voxels | # Plant shoot voxels | # Plant shoot surface voxels |
|---|---|---|---|
| S268650 | 794,618 | 312,212 | 275,954 |
| S268660 | 588,101 | 157,029 | 127,158 |
| S270230 | 657,195 | 205,686 | 175,800 |
| S270240 | 642,192 | 169,276 | 142,474 |
| S270250 | 818,568 | 347,013 | 301,786 |
| S271780 | 2,091,739 | 305,534 | 264,634 |
| S271790 | 2,072,313 | 200,346 | 171,963 |
| S271800 | 2,011,882 | 164,108 | 138,065 |
| S273080 | 1,153,337 | 176,155 | 145,284 |
| S273090 | 1,909,986 | 192,755 | 166,246 |
| S273110 | 1,254,316 | 294,528 | 257,992 |
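The "plant shoot surface voxels" column can be understood as the shoot voxels that touch the background. A minimal NumPy sketch of that idea (the function name and the 6-neighbourhood choice are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np

def surface_voxels(vol):
    """Boolean mask of occupied voxels that touch the background.

    A voxel counts as 'surface' if at least one of its six face
    neighbours is empty. `vol` is a 3D boolean array where True marks
    plant-shoot voxels. (Illustrative sketch; the data set may use a
    different neighbourhood definition.)
    """
    # Pad with background so border voxels are treated as surface.
    p = np.pad(vol, 1, constant_values=False)
    interior = np.ones_like(vol, dtype=bool)
    # A voxel is interior only if all six face neighbours are occupied.
    for axis in range(3):
        for shift in (-1, 1):
            interior &= np.roll(p, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return vol & ~interior

# Tiny demo: a solid 3x3x3 cube has 26 surface voxels and 1 interior voxel.
cube = np.ones((3, 3, 3), dtype=bool)
print(int(surface_voxels(cube).sum()))  # 26
```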
Fig. 1 A sample rosebush model from the data set. The raw X-ray volume is thresholded and masked to obtain the solid part shown in a. Each voxel in the volume is annotated as leaf, stem, flower, pot, or tag to obtain the ground-truth segmentation shown in b. In c, only the parts corresponding to the plant shoot are shown, excluding the pot and the tag. The voxels corresponding only to the stem class are shown in d
Percentages of voxels (points) for organ classes in the plant shoot
| Model ID | Leaf | Stem | Flower | Leaf on surface | Stem on surface | Flower on surface |
|---|---|---|---|---|---|---|
| S268650 | 79.06 | 13.08 | 7.86 | 83.99 | 9.43 | 6.58 |
| S268660 | 70.53 | 17.06 | 12.41 | 77.37 | 12.66 | 9.97 |
| S270230 | 77.07 | 14.36 | 8.57 | 83.44 | 10.40 | 6.17 |
| S270240 | 71.30 | 16.60 | 12.10 | 79.92 | 11.70 | 8.38 |
| S270250 | 75.22 | 12.33 | 12.45 | 80.64 | 8.93 | 10.43 |
| S271780 | 80.97 | 13.46 | 5.57 | 86.35 | 9.79 | 3.86 |
| S271790 | 75.76 | 13.96 | 10.28 | 81.26 | 10.12 | 8.62 |
| S271800 | 73.84 | 17.09 | 9.07 | 81.70 | 12.57 | 5.73 |
| S273080 | 69.20 | 21.72 | 9.08 | 77.50 | 15.99 | 6.51 |
| S273090 | 75.08 | 19.20 | 5.72 | 82.64 | 13.97 | 3.39 |
| S273110 | 79.91 | 17.07 | 6.02 | 83.78 | 12.27 | 3.95 |
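Per-class percentages like those above can be recomputed from a labelled voxel (or point) array by counting labels within the plant shoot. A sketch assuming hypothetical integer label codes (the actual encoding in the data set files may differ):

```python
import numpy as np

# Hypothetical label codes; the data set's actual encoding may differ.
LEAF, STEM, FLOWER = 1, 2, 3

def organ_percentages(labels):
    """Percentage of leaf/stem/flower among all plant-shoot voxels.

    `labels` is an integer array of per-voxel class labels; voxels with
    other labels (background, pot, tag) are excluded, as in the table.
    """
    shoot = labels[np.isin(labels, (LEAF, STEM, FLOWER))]
    return {name: 100.0 * np.mean(shoot == code)
            for name, code in (("leaf", LEAF), ("stem", STEM), ("flower", FLOWER))}

# Tiny demo: 8 leaf, 1 stem, 1 flower voxel -> 80 / 10 / 10 %.
demo = np.array([LEAF] * 8 + [STEM] + [FLOWER] + [0] * 5)
print(organ_percentages(demo))  # {'leaf': 80.0, 'stem': 10.0, 'flower': 10.0}
```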
3D vision-based phenotyping methods
| Reference | Imaging | Plant type | Application/traits | Segmentation approach |
|---|---|---|---|---|
| Dey et al. | Structure from motion | Grapevine | Classification of 3D points into leaves, branches, and fruit (red) | Eigenvalues of local covariance matrix, SVM, CRF |
| Li et al. | Structured light scanner | Anthurium, Dishlia, Dancing bean | Leaf/stem segmentation for tracking events in time, such as budding and bifurcation | Local point features, MRF |
| Paulus et al. | 3D laser scanner | Grapevine, Wheat | Leaf/stem segmentation for grapevine | Local point features (FPFH), SVM, Region growing |
| Paulus et al. | 3D laser scanner | Barley | Leaf/stem segmentation for leaf area and stem height estimation | Local point features (FPFH), SVM, Region growing |
| Wahabzada et al. | 3D laser scanner | Grapevine, Wheat, Barley | Segmentation of leaf, stem, ear, and fruit parts | Local point features (FPFH), clustering, Region growing |
| Sodhi et al. | Multi-view stereo & Kinect | Sorghum | Leaf/stem segmentation | Local point features (FPFH), SVM, CRF |
| Elnashef et al. | Multi-view stereo | Corn, Cotton, Wheat | Leaf/stem segmentation | Eigenvalues of the second-moment tensor |
| Klodt et al. | Multi-view stereo | Barley | Leaf/stem segmentation for the estimation of plant volume and surface area and the number of leaves | Eigenvalues of the second-moment tensor |
| Golbach et al. | Shape-from-silhouette | Tomato seedling | Leaf/stem segmentation for leaf length, width, and area estimation | Breadth-first flood-fill algorithm with a 26-connected neighbourhood |
| Hétroy-Wheeler et al. | Laser scanner | Tree seedlings | Segmentation of stems, leaves, and petioles for leaf surface area estimation | Graph construction, spectral embedding and clustering |
| Santos et al. | Structure from motion | Sunflower, Soybean | Leaf/stem segmentation for leaf surface area estimation | Graph construction, spectral embedding and clustering |
| Binney and Sukhatme | 2D laser scanner | Tree branch | Segmentation of leaves and branches for estimation of branch locations, angles, radii, and lengths, and connectivity information between branches | Generative models for branches and branch points |
| Paproki et al. | Multi-view stereo | Cotton | Stem, petiole, and leaf segmentation for estimation of geometric properties such as stem height, leaf height, and inclination angle | Region growing, tubular shape fitting, clustering |
| Chaivivatrakul et al. | Time-of-flight | Corn | Leaf/stem segmentation for stem diameter, leaf length, area, and angle estimation | Stem extraction by ellipse fitting and linking, and elliptical cylinder extrusion |
| Gélard et al. | Structure from motion | Sunflower | Stem, petiole, and leaf segmentation for leaf area estimation | Ring climbing for extraction of stems and petioles, clustering for segmenting leaves |
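Several of the methods above rely on eigenvalues of a local covariance matrix to separate flat, leaf-like surfaces (one near-zero eigenvalue) from thin, stem-like structures (two near-zero eigenvalues). A brute-force sketch of that idea only; the cited papers use richer descriptors such as FPFH and trained classifiers:

```python
import numpy as np

def covariance_eigen_features(points, k=10):
    """Per-point eigenvalues of the local covariance matrix.

    For each point, take its k nearest neighbours (brute force here for
    clarity), form the 3x3 covariance of the neighbourhood, and return
    the eigenvalues sorted in descending order. Flat patches keep two
    significant eigenvalues; line-like patches keep only one.
    """
    feats = np.empty((len(points), 3))
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]        # k nearest neighbours
        cov = np.cov(nbrs.T)                    # 3x3 covariance matrix
        feats[i] = np.sort(np.linalg.eigvalsh(cov))[::-1]
    return feats

rng = np.random.default_rng(0)
# Flat patch in the z = 0 plane versus a thin line along the x axis.
plane = np.column_stack([rng.uniform(size=(50, 2)), np.zeros(50)])
line = np.column_stack([np.linspace(0, 1, 50), np.zeros(50), np.zeros(50)])
f_plane = covariance_eigen_features(plane)  # two significant eigenvalues
f_line = covariance_eigen_features(line)    # one significant eigenvalue
```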
Performances of the baseline leaf/stem segmentation methods (%)
| Method | IoU |
|---|---|
| LFPC-u |  |
| LFPC-s |  |
| LFVD |  |
| 3D U-Net |  |
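IoU (intersection over union) compares the predicted and ground-truth voxel sets of a class: the size of their intersection divided by the size of their union. A minimal sketch with illustrative label codes:

```python
import numpy as np

def class_iou(pred, gt, cls):
    """Intersection-over-union for one class between two label arrays."""
    p, g = (pred == cls), (gt == cls)
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else 1.0

# Tiny demo with hypothetical labels 1 = leaf, 2 = stem.
gt = np.array([1, 1, 1, 2, 2, 2])
pred = np.array([1, 1, 2, 2, 2, 2])
print(class_iou(pred, gt, 1))  # 2/3
print(class_iou(pred, gt, 2))  # 3/4
```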
Fig. 2 Leaf and stem labels predicted by the baseline methods for a sample test rosebush. The rendering is in volumetric form for LFVD and 3D U-Net and in point-cloud form for LFPC-u and LFPC-s. The methods LFPC-u (a) and LFVD (c) produced smooth results, while the labels predicted by LFPC-s (b) are slightly noisy. 3D U-Net (d) wrongly classifies leaf borders as stems
Fig. 3 Stem labels predicted by the baseline methods for a sample test rosebush. The rendering is in volumetric form for LFVD and 3D U-Net and in point-cloud form for LFPC-u and LFPC-s. With the methods LFPC-u (a) and LFVD (c), the predicted stem structure is mostly connected, although LFVD (c) misses some petiole portions. The noisy predictions produced by the LFPC-s method (b) are more visible here. 3D U-Net (d) classifies large portions of leaves as stems
Fig. 4 Examples of erroneous predictions of the baseline methods, highlighted with red ellipses. The LFPC-u method (a) can misclassify an entire leaf or a portion of a leaf as stem, especially at leaf borders with low curvature. With the LFPC-s method (b), we observe isolated noisy predictions along the stem and on the leaves; most of the errors occur at the midribs. The LFVD method (c) misclassifies the stem points on the petioles that lie between close leaflets. The 3D U-Net (d) classifies boundaries and thick portions of leaves as stems
Fig. 5 Evolution of the training and validation loss over training epochs for 3D U-Net