| Literature DB >> 30444906 |
Sven Koitka, Aydin Demircioglu, Moon S. Kim, Christoph M. Friedrich, Felix Nensa.
Abstract
BACKGROUND: Detection of ossification areas of hand bones in X-ray images is an important task, e.g. as a preprocessing step in automated bone age estimation. Deep neural networks have emerged recently as de facto standard detection methods, but their drawback is the need of large annotated datasets. Finetuning pre-trained networks is a viable alternative, but it is not clear a priori if training with small annotated datasets will be successful, as it depends on the problem at hand. In this paper, we show that pre-trained networks can be utilized to produce an effective detector of ossification areas in pediatric X-ray images of hands. METHODS ANDEntities:
Mesh:
Year: 2018 PMID: 30444906 PMCID: PMC6239319 DOI: 10.1371/journal.pone.0207496
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Fig 1. Example inference for a radiograph, highlighting all regions of interest using the final trained Faster-RCNN network.
Fig 2. Randomly extracted patches of the annotated regions of interest.
The patches were extracted in square shape for better visualization.
Fig 3. Analysis of the age distribution between both annotated sets.
The validation set contains slightly more older patients.
Fig 4. Analysis of the annotation box sizes between both annotated sets.
The bounding box sizes were normalized to a maximum image edge length of 1024 px, which is the default behaviour of the Faster-RCNN models in the TensorFlow Object Detection API.
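The normalization described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code or the TensorFlow Object Detection API's implementation; the function name and box format `(x1, y1, x2, y2)` are assumptions.

```python
def normalize_box(box, width, height, max_edge=1024):
    """Rescale a bounding box so the longer image edge becomes max_edge.

    Mirrors the resizing convention described for the Faster-RCNN models
    (longest edge scaled to 1024 px); box is (x1, y1, x2, y2) in pixels.
    """
    scale = max_edge / max(width, height)
    x1, y1, x2, y2 = box
    return (x1 * scale, y1 * scale, x2 * scale, y2 * scale)


# Example: a 2048x1024 radiograph is scaled by 0.5, and boxes scale with it.
print(normalize_box((0, 0, 512, 256), 2048, 1024))  # (0.0, 0.0, 256.0, 128.0)
```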
Evaluation of object detection models for hand region detection using the Faster-RCNN InceptionResNetV2 pre-trained model.
Results are stated as mean and standard deviation of ten different training set splits. The evaluation is performed on the held-out set of 89 images.
| Split | AP@0.5IoU (DIP) | AP@0.5IoU (PIP) | AP@0.5IoU (MCP) | AP@0.5IoU (Radius) | AP@0.5IoU (Ulna) | AP@0.5IoU (Wrist) | mAP@0.5IoU |
|---|---|---|---|---|---|---|---|
| 20% | 76.03 ± 11.59 | 79.50 ± 7.33 | 91.49 ± 2.41 | 92.37 ± 2.68 | 84.88 ± 2.45 | 97.51 ± 1.96 | 86.96 ± 2.91 |
| 40% | 83.77 ± 7.20 | 83.36 ± 5.34 | 93.07 ± 1.23 | 94.14 ± 3.25 | 84.81 ± 3.02 | 98.50 ± 0.76 | 89.51 ± 1.95 |
| 60% | 86.09 ± 8.15 | 84.99 ± 4.89 | 94.32 ± 1.99 | 95.96 ± 1.98 | 86.42 ± 4.49 | 98.71 ± 0.51 | 91.08 ± 1.89 |
| 80% | 87.35 ± 5.25 | 86.10 ± 6.21 | 93.25 ± 3.03 | 96.13 ± 1.49 | 85.27 ± 5.50 | 98.45 ± 0.58 | 91.09 ± 2.89 |
| 100% | 89.79 ± 5.10 | 88.29 ± 4.98 | 94.82 ± 1.45 | 97.96 ± 1.10 | 87.78 ± 3.24 | 98.87 ± 0.01 | 92.92 ± 1.93 |
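The AP@0.5IoU metric in the table counts a detection as correct when its intersection-over-union with a ground-truth box is at least 0.5. A minimal sketch of the IoU computation (illustrative only, using an assumed `(x1, y1, x2, y2)` box format):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # Union = sum of areas minus the overlap.
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)


# Two 10x10 boxes overlapping by half: IoU = 50 / 150 = 1/3, below the
# 0.5 threshold, so this detection would not count as a match.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```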
Evaluation on central points of ROIs annotated by both a radiology expert and a non-expert.
Training was performed on the full set of 240 images annotated by the non-expert and evaluated on the held-out set of 89 images. Results are stated as mean and standard deviation of ten runs.
| Label | Precision (Non-Expert) | Recall (Non-Expert) | F1-Score (Non-Expert) | Precision (Expert) | Recall (Expert) | F1-Score (Expert) |
|---|---|---|---|---|---|---|
| DIP | 99.13 ± 0.79 | 95.76 ± 1.81 | 97.41 ± 0.99 | 98.96 ± 0.74 | 95.59 ± 1.91 | 97.23 ± 1.07 |
| PIP | 98.57 ± 0.73 | 97.03 ± 0.51 | 97.79 ± 0.37 | 98.45 ± 0.75 | 96.92 ± 0.54 | 97.68 ± 0.41 |
| MCP | 98.70 ± 0.40 | 97.15 ± 0.71 | 97.92 ± 0.48 | 78.09 ± 1.73 | 76.85 ± 1.59 | 77.46 ± 1.64 |
| Radius | 99.55 ± 0.78 | 97.30 ± 1.77 | 98.40 ± 0.82 | 97.71 ± 1.17 | 95.51 ± 1.91 | 96.59 ± 1.16 |
| Ulna | 100.00 ± 0.00 | 91.12 ± 1.12 | 95.35 ± 0.61 | 95.32 ± 1.90 | 86.85 ± 1.76 | 90.89 ± 1.74 |
| Wrist | 98.07 ± 1.08 | 96.85 ± 1.16 | 97.46 ± 0.89 | 91.69 ± 3.50 | 90.56 ± 3.90 | 91.12 ± 3.66 |
| Average | 99.00 ± 0.35 | 95.87 ± 0.66 | 97.41 ± 0.40 | 93.37 ± 0.85 | 90.38 ± 1.31 | 91.85 ± 1.06 |
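The central-point evaluation above can be sketched as follows. The matching rule used here, a prediction counts as a true positive if its center falls inside an annotated ROI box of the same label, is an assumption for illustration; the paper's exact criterion may differ. Precision, recall, and F1 then follow from the TP/FP/FN counts.

```python
def center_in_box(pred_box, gt_box):
    """Assumed matching rule: predicted box center lies inside the GT box."""
    cx = (pred_box[0] + pred_box[2]) / 2
    cy = (pred_box[1] + pred_box[3]) / 2
    return gt_box[0] <= cx <= gt_box[2] and gt_box[1] <= cy <= gt_box[3]


def prf1(tp, fp, fn):
    """Precision, recall, and F1-score from true/false positive/negative counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Example: 8 matched detections, 2 spurious ones, 2 missed ROIs.
p, r, f = prf1(8, 2, 2)
print(p, r, f)
```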