Rémy Vandaele, Jessica Aceto, Marc Muller, Frédérique Péronnet, Vincent Debat, Ching-Wei Wang, Cheng-Ta Huang, Sébastien Jodogne, Philippe Martinive, Pierre Geurts, Raphaël Marée.
Abstract
The detection of anatomical landmarks in bioimages is a necessary but tedious step for geometric morphometrics studies in many research domains. We propose variants of a multi-resolution tree-based approach to speed up the detection of landmarks in bioimages. We extensively evaluate our method variants on three different datasets (cephalometric, zebrafish, and drosophila images). We identify the key method parameters (notably the multi-resolution) and report results with respect to human ground truths and existing methods. Our method achieves recognition performance competitive with existing approaches while being generic and fast. The algorithms are integrated in the open-source Cytomine software, and we provide parameter configuration guidelines so that they can be easily exploited by end-users. Finally, the datasets are readily available through a Cytomine server to foster future research.
Year: 2018 PMID: 29323201 PMCID: PMC5765108 DOI: 10.1038/s41598-017-18993-5
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. Sample image and corresponding landmarks for each dataset: CEPHA (left) with 19 landmarks, DROSO (top right) with 15 landmarks, and ZEBRA (bottom right) with 25 landmarks.
Figure 2. Left: illustration of the multi-resolution features representing one pixel (on the DROSO dataset, with D = 6 windows). The described pixel is located at the center of the windows (in blue). Right: illustration of the R and Rmax radii (on the ZEBRA dataset). Observations within the R radius are considered landmarks (positive) for the classification approach. At training, P·π·R² non-landmark observations are extracted in the [R, Rmax] range.
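The multi-resolution descriptor from Figure 2 can be sketched as follows: each pixel is described by D windows centered on it, where resolution d covers a neighbourhood 2^d times larger, subsampled back to a fixed W × W grid, so deeper windows see more context at coarser detail. This is an illustrative sketch of the idea; the paper's exact subsampling scheme may differ.

```python
import numpy as np

def multiresolution_features(img, x, y, W=8, D=3):
    """Describe pixel (x, y) by D concatenated W x W windows,
    each covering a neighbourhood twice as large as the previous
    one, subsampled to W x W (sketch of the multi-resolution idea)."""
    pad = W * 2 ** (D - 1)
    padded = np.pad(img, pad, mode="edge")  # handle border pixels
    cx, cy = x + pad, y + pad
    feats = []
    for d in range(D):
        step = 2 ** d                # subsampling stride at resolution d
        half = (W * step) // 2       # half-width of covered neighbourhood
        win = padded[cx - half:cx + half:step, cy - half:cy + half:step]
        feats.append(win.ravel().astype(float))
    return np.concatenate(feats)    # length W * W * D
```

With the table's default window size W = 8 and D = 3 resolutions, each pixel yields a 192-dimensional raw-intensity vector.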
Figure 3. In red, the position of landmark 8 as observed in all the images of the ZEBRA dataset, overlaid on a single image. In blue, the positions of the corresponding 30,000 examples extracted during prediction according to our sampling strategy. In yellow, the true landmark position.
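The prediction-time strategy behind Figure 3 can be sketched as: sample pixel positions at random, score each with the trained per-pixel classifier, and aggregate the best-scoring samples into a single landmark prediction. The 30,000-sample figure comes from the caption; `score_fn` and the median aggregation below are illustrative stand-ins, not necessarily the paper's exact rule.

```python
import numpy as np

def predict_landmark(score_fn, shape, n_pixels=30_000, top_frac=0.01, seed=0):
    """Sample n_pixels positions uniformly over an image of the given
    shape, score each with score_fn (a stand-in for the classifier's
    landmark probability), and return the median position of the
    top-scoring fraction of samples."""
    rng = np.random.default_rng(seed)
    h, w = shape
    xs = rng.integers(0, h, size=n_pixels)
    ys = rng.integers(0, w, size=n_pixels)
    scores = np.array([score_fn(x, y) for x, y in zip(xs, ys)])
    k = max(1, int(top_frac * n_pixels))
    top = np.argsort(scores)[-k:]          # indices of best-scored samples
    return float(np.median(xs[top])), float(np.median(ys[top]))
```

Taking a median over the top-scoring samples makes the prediction robust to isolated misclassified pixels far from the landmark.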
Description and default values of our method's parameters at validation.

| Parameter | Description | Default Value |
|---|---|---|
|  | Size of the multi-resolution window | 8 |
| R | Distance to the landmark position determining the training pixel output class | 15 (CEPHA) |
|  | Spacing between the landmarks extracted inside the R radius | 2 |
| Rmax | Maximal distance to the interest point for extracting non-landmark observations | 600 (CEPHA) |
| P | Ratio of negative versus positive examples sampled during training | 1 (CEPHA) |
|  | Number of pixels randomly extracted during prediction | 30,000 |
|  | Number of rotated versions of each training image introduced in the dataset | 3 |
|  | Maximal rotation angle (in degrees) | 30 |
| D | Number of resolutions introduced in the feature representation of each window | 5 |
|  | Number of trees | 50 |
|  | Feature type used to describe the windows | RAW |
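For experimentation, the defaults from the table can be gathered into a single configuration mapping. The key names below are descriptive stand-ins of ours, since several of the paper's parameter symbols are not preserved in this record; dataset-specific values are the CEPHA defaults.

```python
# Default parameter values from the table above. Key names are
# descriptive, not the paper's own symbols.
DEFAULTS = {
    "window_size": 8,             # size of the multi-resolution window
    "R": 15,                      # landmark radius for positives (CEPHA)
    "landmark_spacing": 2,        # spacing of positives inside the R radius
    "Rmax": 600,                  # max radius for negatives (CEPHA)
    "P": 1,                       # negative/positive sampling ratio (CEPHA)
    "prediction_pixels": 30_000,  # pixels sampled at prediction
    "n_rotations": 3,             # rotated copies per training image
    "max_rotation_deg": 30,       # maximal rotation angle in degrees
    "D": 5,                       # resolutions per window
    "n_trees": 50,                # number of trees in the ensemble
    "feature_type": "RAW",        # window descriptor type
}
```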
Figure 4. Influence of the parameters of our algorithm.
Figure 5. Influence of the window descriptor features using classification. Top: the mean error using each window descriptor on each dataset. Bottom: the time needed to extract 10,000 observations using each window descriptor. Grey bars represent the results obtained by using only the best resolution found during a 10-fold cross-validation process.
Figure 6. Comparison of our algorithm with Lindner et al. [15] (LC) and Donner et al. [14] (DMBL) on our three datasets. Error bars correspond to 95% confidence intervals.
Figure 7. Comparison with the best results of the 2014 and 2015 ISBI Cephalometric X-Ray Challenges.
Comparison of the time and memory consumption of the algorithms. N is the number of landmarks.

| | Our algorithm | Lindner et al. | Donner et al. | Ibragimov et al. |
|---|---|---|---|---|
| Features | RAW, SUB, SURF, HAAR-LIKE | HAAR-LIKE | GAUSSIAN SUB | HAAR-LIKE |
| Model | Extremely randomized trees | Random forests (regression) | Extremely randomized trees | Random forests (classification) |
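The "extremely randomized trees" model named in the comparison table maps directly onto scikit-learn's `ExtraTreesClassifier`. Below is a toy stand-in for the per-pixel classification step: in the paper, each row of `X` would be a multi-resolution window descriptor, with label 1 for pixels within radius R of the landmark and 0 otherwise; here the features and labels are synthetic. The 50-tree setting matches the default in the parameter table.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Synthetic stand-in data: 2000 "pixels", 64 features each.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 64))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy landmark/non-landmark labels

# T = 50 extremely randomized trees, as in the parameter table.
clf = ExtraTreesClassifier(n_estimators=50, random_state=0).fit(X, y)

# Per-pixel landmark probability, the quantity aggregated at prediction.
landmark_probability = clf.predict_proba(X)[:, 1]
```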