Anjali Tiwari1, Murali Poduval2, Vaibhav Bagaria1.
Abstract
BACKGROUND: Deep learning, a form of artificial intelligence, has shown promising results for interpreting radiographs. To develop a niche machine learning (ML) program for interpreting orthopedic radiographs accurately, a project named deep learning algorithm for orthopedic radiographs was conceived. In the first phase, the diagnosis of knee osteoarthritis (KOA) according to the standard Kellgren-Lawrence (KL) scale was performed on medical images using the deep learning algorithm for orthopedic radiographs. AIM: To compare the efficacy and accuracy of eight different transfer learning deep learning models for detecting the grade of KOA from a radiograph, and to identify the most appropriate ML-based model for this task.
Keywords: Artificial intelligence; Computer vision; Deep learning; Knee; Machine learning; Osteoarthritis
Year: 2022 PMID: 35949704 PMCID: PMC9244962 DOI: 10.5312/wjo.v13.i6.603
Source DB: PubMed Journal: World J Orthop ISSN: 2218-5836
Figure 1 Architecture for deep learning algorithms for orthopedic radiographs.
Figure 2 X-ray images of different Kellgren-Lawrence grades for knee osteoarthritis.
Data splits in training, testing and validation subsets according to Kellgren-Lawrence grades

| KL grade | Training (n) | Training (%) | Testing (n) | Testing (%) | Validation (n) | Validation (%) |
|---|---|---|---|---|---|---|
| 0 | 255 | 17.6 | 37 | 17.6 | 73 | 17.7 |
| 1 | 213 | 14.7 | 31 | 14.7 | 61 | 14.8 |
| 2 | 164 | 11.4 | 24 | 11.4 | 47 | 11.4 |
| 3 | 237 | 16.4 | 35 | 16.7 | 68 | 16.4 |
| 4 | 576 | 39.9 | 83 | 39.5 | 164 | 39.7 |
| Total | 1445 | 100 | 210 | 100 | 413 | 100 |
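The per-grade split above can be approximately reproduced with a stratified (per-grade) shuffle. The grade counts and the roughly 70/10/20 train/test/validation fractions below are taken from the table; the helper function itself is an illustrative sketch, not the authors' published code.

```python
import random

# Per-grade totals (training + testing + validation) from the table.
counts = {0: 365, 1: 305, 2: 235, 3: 340, 4: 823}

def stratified_split(counts, frac_test=0.10, frac_val=0.20, seed=0):
    """Shuffle each KL grade independently, then carve off the test and
    validation fractions per grade so class proportions are preserved."""
    rng = random.Random(seed)
    splits = {"train": [], "test": [], "val": []}
    for grade, n in counts.items():
        idx = list(range(n))
        rng.shuffle(idx)
        n_test = round(n * frac_test)
        n_val = round(n * frac_val)
        splits["test"] += [(grade, i) for i in idx[:n_test]]
        splits["val"] += [(grade, i) for i in idx[n_test:n_test + n_val]]
        splits["train"] += [(grade, i) for i in idx[n_test + n_val:]]
    return splits

s = stratified_split(counts)
print({k: len(v) for k, v in s.items()})
```

Because of per-grade rounding, the subset sizes come out within a few images of the table's 1445/210/413, not exactly equal.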
Figure 3 Traditional machine learning vs transfer learning.
Figure 4 Multimodal pipeline predicting the risk of osteoarthritis for a particular knee. A deep convolutional neural network was used first, and different models were trained in a multitask setting to predict the current stage of osteoarthritis defined according to the Kellgren-Lawrence (KL) grade scale.
Evaluation of parameters for knee osteoarthritis detection

| Parameter | Definition |
|---|---|
| Accuracy | Measures how well the standalone model detects the presence of KOA and classifies its grade in the input image |
| Precision | True positives/(true positives + false positives) |
| Recall | True positives/(true positives + false negatives) |
| Loss | Measures the error of the model's predictions |

KOA: Knee osteoarthritis.
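The precision and recall definitions in the table can be stated as two one-line functions. This is a minimal sketch of the standard formulas; the confusion counts in the example are made up for illustration, not taken from the study.

```python
def precision(tp: int, fp: int) -> float:
    """Fraction of positive predictions that are correct: TP / (TP + FP)."""
    return tp / (tp + fp)

def recall(tp: int, fn: int) -> float:
    """Fraction of actual positives that are found: TP / (TP + FN)."""
    return tp / (tp + fn)

# Illustrative counts: 90 true positives, 10 false positives, 15 false negatives.
print(precision(90, 10))  # 0.9
print(recall(90, 15))
```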
Performance comparison of the eight transfer learning convolutional neural network models and expert human interpretation used for the development of the deep learning algorithm for orthopedic radiographs

| Model | Accuracy | Precision | Recall | Loss | Performance |
|---|---|---|---|---|---|
| ResNet50 | 54.29% | 61.03% | 39.52% | 1.06 | Average |
| VGG-16 | 56.68% | 67.56% | 35.02% | 1.10 | Average |
| InceptionV3 | 87.34% | 89.19% | 85.67% | 0.35 | Good |
| MobileNetV2 | 82.15% | 84.66% | 80.21% | 0.46 | Average |
| EfficientNetB7 | 56.61% | 70.09% | 38.27% | 0.98 | Average |
| DenseNet201 | 92.87% | 93.69% | 92.53% | 0.20 | Best |
| Xception | 82.81% | 85.03% | 77.05% | 0.50 | Average |
| NasNetMobile | 80.90% | 83.98% | 77.30% | 0.50 | Average |
| Surgeon | 74.22% | 79.50% | 50.00% | 0.25 | Good |
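A transfer-learning setup like those compared above can be sketched with the Keras Applications API, using DenseNet201 (the best-scoring backbone in the table) with a frozen backbone and a new 5-way softmax head for KL grades 0-4. This is a hedged sketch under those assumptions, not the authors' published code; the input size and training settings are illustrative, and `weights=None` is used here only to keep the sketch self-contained (in practice `weights="imagenet"` would load the pretrained backbone).

```python
import tensorflow as tf

def build_kl_classifier(input_shape=(224, 224, 3), n_grades=5):
    # Backbone without its ImageNet classification head; weights=None
    # avoids the pretrained-weight download in this self-contained sketch.
    backbone = tf.keras.applications.DenseNet201(
        weights=None, include_top=False, input_shape=input_shape)
    backbone.trainable = False  # transfer learning: train only the new head
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    outputs = tf.keras.layers.Dense(n_grades, activation="softmax")(x)
    model = tf.keras.Model(backbone.input, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_kl_classifier()
print(model.output_shape)  # (None, 5)
```

Swapping the backbone constructor (e.g. `tf.keras.applications.InceptionV3`) reproduces the same head-on-frozen-backbone pattern for the other models in the comparison.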
Figure 5 Loss and accuracy curves. Red line: Loss; Blue line: Accuracy; Y-axis: loss and accuracy; X-axis: number of epochs. A: DenseNet201; B: EfficientNetB7; C: InceptionV3; D: MobileNetV2; E: NasNetMobile; F: ResNet50; G: VGG-16; H: Xception.