Asma Alzaid1, Alice Wignall2, Sanja Dogramadzi3, Hemant Pandit4,5, Sheng Quan Xie6,7.
Abstract
PURPOSE: Object classification and localization is a key task of computer-aided diagnosis (CAD) tool. Although there have been numerous generic deep learning (DL) models developed for CAD, there is no work in the literature to evaluate their effectiveness when utilized in diagnosing fractures in proximity of joint implants. In this work, we aim to assess the performance of existing classification systems on binary and multi-class problems (fracture types) using plain radiographs. In addition, we evaluated the performance of object detection systems using the one- and two-stage DL architectures.Entities:
Keywords: Bone fracture; Computer aided diagnostics; Deep learning; Medical imaging; Surgical planning
Year: 2022 PMID: 35157227 PMCID: PMC8948116 DOI: 10.1007/s11548-021-02552-5
Source DB: PubMed Journal: Int J Comput Assist Radiol Surg ISSN: 1861-6410 Impact factor: 2.924
Fig. 1 The classification of PFFs according to the VCS [30]
Fig. 2 Illustration of the quality of X-ray images, the appearance of fracture lines, and the high variability of PFFs in X-ray images: image view, implant type and captured bone part
Fig. 3 PFF classification approach. The examined classification networks are ResNet, DenseNet, VGG and Inception; the object detection networks are Faster RCNN and RetinaNet
Fig. 4 Comparison of the performance of fracture/no-fracture classification
Fig. 5 ROC curves for fracture types A, B and C and the Normal class for each classification model
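The per-class ROC curves in Fig. 5 treat each fracture type one-vs-rest. As a minimal sketch of how the area under such a curve can be computed (function name and inputs are illustrative, not from the paper), the AUC equals the Mann-Whitney rank statistic: the probability that a randomly chosen positive sample scores higher than a randomly chosen negative one.

```python
def roc_auc_one_vs_rest(scores, labels, positive_class):
    """AUC for one class treated as positive against the rest.

    Equivalent to the Mann-Whitney U statistic: the probability that
    a randomly drawn positive sample receives a higher score than a
    randomly drawn negative sample (ties count as 0.5).
    """
    pos = [s for s, y in zip(scores, labels) if y == positive_class]
    neg = [s for s, y in zip(scores, labels) if y != positive_class]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

Production code would use an O(n log n) rank-based formulation (e.g. scikit-learn's `roc_auc_score`); the quadratic loop above is kept only for clarity.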
Precision, recall, F1-score, specificity and accuracy of PFFs classification. The highest metric average values across the four models are highlighted in bold for each metric

| | VGG | | | | | Inception | | | | | Resnet50 | | | | | Densenet | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| | A | B | C | Normal | Avg. | A | B | C | Normal | Avg. | A | B | C | Normal | Avg. | A | B | C | Normal | Avg. |
| Precision | 0.50 | 0.71 | 0.83 | 0.80 | 0.71 | 0.50 | 0.77 | 0.83 | 0.85 | 0.73 | 0.71 | 0.76 | 0.86 | 0.83 | 0.63 | 0.78 | 0.81 | 0.84 | 0.76 | |
| Recall | 0.32 | 0.71 | 0.84 | 0.88 | 0.69 | 0.59 | 0.65 | 0.87 | 0.88 | 0.45 | 0.83 | 0.86 | 0.84 | 0.74 | 0.55 | 0.71 | 0.87 | 0.88 | ||
| F1 Score | 0.39 | 0.71 | 0.84 | 0.84 | 0.69 | 0.54 | 0.71 | 0.85 | 0.86 | 0.74 | 0.86 | 0.79 | 0.86 | 0.83 | 0.59 | 0.74 | 0.84 | 0.93 | 0.77 | |
| Specificity | 0.97 | 0.87 | 0.92 | 0.91 | 0.92 | 0.94 | 0.91 | 0.92 | 0.93 | 0.98 | 0.89 | 0.94 | 0.93 | 0.97 | 0.91 | 0.91 | 0.91 | 0.92 | ||
| Accuracy | 0.91 | 0.82 | 0.90 | 0.90 | 0.88 | 0.91 | 0.84 | 0.90 | 0.92 | 0.89 | 0.94 | 0.87 | 0.91 | 0.90 | 0.93 | 0.85 | 0.90 | 0.91 | ||
| Precision | 0.62 | 0.67 | 0.79 | 0.83 | 0.73 | 0.53 | 0.73 | 0.85 | 0.83 | 0.73 | 0.67 | 0.70 | 0.88 | 0.82 | 0.57 | 0.72 | 0.90 | 0.86 | 0.76 | |
| Recall | 0.36 | 0.75 | 0.81 | 0.83 | 0.69 | 0.41 | 0.77 | 0.86 | 0.83 | 0.72 | 0.45 | 0.83 | 0.83 | 0.80 | 0.36 | 0.83 | 0.83 | 0.89 | ||
| F1 score | 0.46 | 0.71 | 0.80 | 0.83 | 0.70 | 0.46 | 0.75 | 0.85 | 0.83 | 0.72 | 0.54 | 0.76 | 0.85 | 0.81 | 0.44 | 0.77 | 0.86 | 0.88 | ||
| Specificity | 0.98 | 0.84 | 0.91 | 0.93 | 0.91 | 0.96 | 0.88 | 0.93 | 0.93 | 0.92 | 0.98 | 0.85 | 0.95 | 0.93 | 0.97 | 0.86 | 0.96 | 0.94 | ||
| Accuracy | 0.92 | 0.82 | 0.88 | 0.90 | 0.88 | 0.92 | 0.85 | 0.91 | 0.90 | 0.89 | 0.93 | 0.84 | 0.91 | 0.89 | 0.89 | 0.92 | 0.85 | 0.92 | 0.92 | |
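The per-class columns above score each fracture type one-vs-rest against all other classes. A minimal sketch of how such per-class metrics fall out of the binary confusion counts (the function name and dict keys are illustrative, not the paper's code):

```python
def one_vs_rest_metrics(y_true, y_pred, cls):
    """Precision, recall, F1, specificity and accuracy for one class
    treated as positive against the rest, from the four confusion
    counts (TP, FP, FN, TN)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    specificity = tn / (tn + fp) if tn + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"precision": precision, "recall": recall, "f1": f1,
            "specificity": specificity, "accuracy": accuracy}
```

The "Avg." columns in the table are then the unweighted mean of each metric over the four classes (macro-averaging).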
Fig. 8 (a) The original X-ray images. (b) ResNet50 classification results with the CAM; the heat-map colors range from blue (minimum) to red (maximum). (c) Fracture bounding-box results of Faster RCNN. (d) Fracture bounding-box results of RetinaNet (blue is the ground truth and red is the predicted box)
Fig. 6 CAM-based fracture localization (green box is the ground truth and red is the CAM result)
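Fig. 6 derives a fracture location from a class activation map rather than a trained box regressor. One common way to turn a CAM into a box, sketched here under assumptions not stated in the paper (a pure-Python heat-map grid and a fixed activation threshold), is to take the bounding box of all cells whose activation exceeds the threshold:

```python
def cam_to_bbox(heatmap, threshold):
    """Bounding box (x_min, y_min, x_max, y_max) enclosing every
    heat-map cell whose activation exceeds `threshold`.

    `heatmap` is a 2-D grid given as a list of rows; returns None
    when no cell exceeds the threshold."""
    xs, ys = [], []
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if v > threshold:
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))
```

In practice the low-resolution CAM is first upsampled to the input image size, and the threshold is often set relative to the map's maximum activation.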
Precision, recall, and accuracy of PFFs detection (classification and localization), in %

| | Faster RCNN | RetinaNet |
|---|---|---|
| Precision | 80 | 31 |
| Recall | 98 | 97 |
| Accuracy | 78 | 31 |
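Detection precision and recall as in the table above are computed by matching predicted boxes to ground-truth boxes by intersection-over-union (IoU). A minimal sketch under assumed conventions (greedy one-to-one matching and an IoU threshold of 0.5; the paper's exact matching rule is not given here):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def detection_precision_recall(preds, truths, thr=0.5):
    """Greedily match each predicted box to the best unmatched
    ground-truth box; a prediction counts as a true positive when
    its best IoU reaches `thr`."""
    matched, tp = set(), 0
    for p in preds:
        best, best_iou = None, thr
        for i, g in enumerate(truths):
            if i in matched:
                continue
            v = iou(p, g)
            if v >= best_iou:
                best, best_iou = i, v
        if best is not None:
            matched.add(best)
            tp += 1
    precision = tp / len(preds) if preds else 0.0
    recall = tp / len(truths) if truths else 0.0
    return precision, recall
```

The large precision gap between Faster RCNN and RetinaNet in the table is consistent with RetinaNet emitting many low-confidence false-positive boxes while still recovering nearly all true fractures (similar recall).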
Fig. 7 Precision-recall curve for Faster RCNN and RetinaNet