Sozan Mohammed Ahmed, Ramadhan J. Mstafa.
Abstract
Knee osteoarthritis (KOA) is a degenerative joint disease that predominantly affects middle-aged and elderly people. Most KOA assessment is based on changes in the hyaline cartilage visible in medical images. However, technical bottlenecks such as noise, artifacts, and differences between imaging modalities pose enormous challenges for an objective and efficient early diagnosis. Correct prediction of arthritis is therefore an essential step toward effective diagnosis and the prevention of acute arthritis, since early diagnosis and treatment can help slow the progression of KOA. Predicting the development of KOA remains a difficult and urgent problem that, if addressed, could accelerate the development of disease-modifying drugs and, in turn, help avoid millions of total joint replacement procedures each year. In knee joint research and clinical practice, segmentation approaches play a significant role in KOA diagnosis and categorization, because segmentation enables estimation of the rate of articular cartilage loss, which is used clinically to assess disease progression and morphological change. In this paper, we provide an in-depth overview of a wide range of recent methodologies for knee articular bone segmentation, ranging from traditional techniques to deep learning (DL)-based techniques. The purpose of this work is to give researchers a general review of the methodologies currently available in the area; it will therefore help researchers who want to conduct research in the field of KOA, and it highlights deficiencies and practical considerations for application in clinical practice. Finally, we discuss the diagnostic value of deep learning for future computer-aided diagnostic applications.
Keywords: bone segmentation; deep learning; knee osteoarthritis; machine learning; segmentation
Year: 2022 PMID: 35328164 PMCID: PMC8946914 DOI: 10.3390/diagnostics12030611
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. Methods of classifying the knee bones [12].
Figure 2. Knee bone segmentation has benefits over other tissues because of the bones' location and anatomical size. (a) MR image of a knee joint; the patella, femur, and tibia bones are readily apparent with the accompanying cartilage surfaces. TB = tibia bone, FB = femoral bone, PC = patellar cartilage, FC = femoral cartilage, TC = tibial cartilage. (b) Segmented tibia (TB) and femur (FB), which usually have better demarcation [19].
Figure 3. Illustration of an ASM search for a face. (a) The case of a point being near the target; (b) how the ASM can break down if the starting position is too far from the target [27].
Figure 4. An example illustrating ROI detection failure/recovery and leak detection/correction. (a) An MR image; (b) the ROI block is detected with only the femur bone found, not the tibia; (c) after lowering the ROI detection threshold, both bones are detected; (d) the GC output mask; (e) after morphological processing; (f) the two resulting candidate skeletons, with a leak visible in the tibia bone; (g) the tibia leak connects fat and other tissues to the bone; (h) the first step in detecting a leak is a morphological opening; (i) the residue obtained by subtracting (h) from (g); (j) after examining the residue in (i), the leak detection method identifies a leak and decides that only the leak pixels are relevant to the tibia; (k) adding the appropriate pixels from (j) back to (h) yields a leak-free tibia; (l) the femoral mask; (m) after applying the morphological opening to check for leakage; (n) the remaining pixels after subtracting (m) from (l), from which it is concluded that there is no leak and the pixels are reinserted; (o) the resulting femur and tibia masks; (p) GC segmentation in white and manual segmentation in yellow, with DSC = 0.95 and 0.96 for the femur and tibia bones, respectively [45].
Figure 5. Bone segmentation of one sample slice in coronal view. (a) Original image; (b) multi-atlas-based spatial prior; (c) segmentation result; (d) expert segmentation [49].
Figure 6. Results of a general region-growing algorithm: (a) original image (red arrows point to the structures, teeth and auris, to be segmented); (b) segmentation results from region growing with parameter r = 30; (c) results obtained with the robust split-and-merge algorithm; (d) the edges in these results are more exact and smoother [53].
Figure 7. A typical machine learning system [61].
Figure 8. Segmentation results for MR images using the hybrid SVM-DRF model with five types of feature vectors [68].
Figure 9. Knee bone segmentation using (a) classical machine learning and (b) deep learning. Classical machine learning pipelines rely on hand-crafted feature representations and mappings, while deep learning uses multiple hidden layers to extract hierarchical feature representations [76].
Figure 10. Difference between a linear regression model and a simple deep learning model: (a) linear regression model; (b) simplified deep learning model [77].
Figure 11. Process of segmenting medical images using deep learning [78].
Figure 12. The SegNet CNN architecture. SegNet consists of two networks, an encoder and a decoder; the network's final output is a high-resolution pixel-by-pixel tissue categorization [83].
Figure 13. Schematic representation of the workflow of the approach in [87].
Figure 14. Number of papers reviewed for each method in KOA studies.
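Several of the traditional techniques surveyed here, including the region-growing result in Figure 6, rely on a simple intensity-homogeneity criterion: starting from a seed pixel, neighbors are absorbed while their intensity stays within a tolerance of the seed. As a rough illustration only (not code from any reviewed paper), a minimal seeded region-growing sketch might look like this; the image is a plain 2D list of intensities, and `tol` plays the role of the r = 30 parameter mentioned in Figure 6:

```python
from collections import deque

def region_grow(img, seed, tol=30):
    """Grow a 4-connected region from `seed`, absorbing pixels whose
    intensity differs from the seed intensity by at most `tol`.
    Returns a boolean mask of the same shape as `img`."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    ref = img[sy][sx]                      # reference intensity at the seed
    mask = [[False] * w for _ in range(h)]
    mask[sy][sx] = True
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny][nx]
                    and abs(img[ny][nx] - ref) <= tol):
                mask[ny][nx] = True        # pixel is homogeneous enough
                q.append((ny, nx))
    return mask

# Tiny example: a bright blob in the top-left corner of a 3x3 "image".
img = [[10, 12, 200],
       [11, 13, 210],
       [205, 208, 212]]
mask = region_grow(img, (0, 0), tol=30)    # grows over the four dark pixels
```

Real pipelines add smoothing, adaptive tolerances, or the split-and-merge refinement shown in Figure 6c; this sketch only shows the core growth loop.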
Summary of automatic knee bone segmentation methods based on deformable, graph-based, atlas-based, and miscellaneous models.
| Ref. | Year | Segmentation Technique | No. of Samples | Sequence Type | Region of Interest | Metric |
|---|---|---|---|---|---|---|
| [ | 2007 | ASM-SSM | 20 healthy samples | FS SPGR | Femur/Tibia | DSC: 0.96 (FB); 0.96 (TB); 0.89 (PB) |
| [ | 2010 | AAM | 80 subjects | DESS | Femur/Tibia | AvgD: 0.88 (±0.24) (FB); 0.74 (±0.21) (TB); RMSD: 1.49 (±0.44) (FB); 1.21 (±0.34) (TB) |
| [ | 2010 | ASM-AAM | 40 clinical MRI samples | T1-weighted SPGR | Femur/Tibia | AvgD: 1.02 (±0.22) (FB); 0.84 (±0.19) (TB); RMSD: 1.54 (±0.30) (FB); 1.24 (±0.28) (TB) |
| [ | 2011 | SSM | 40 clinical samples | CTF | Femur/Tibia | Single-object SSM DSC: 0.94 (±0.02) (FB); 0.86 (±0.10) (TB) |
| [ | 2013 | AAM | 178 samples | Sagittal 3D double-echo | Femur/Tibia | Odds ratio 12.5 (95% CI 4.0–39.3) for K/L grade 0; 95% CI 1.8–5.0 |
| [ | 2010 | LOGISMOS | 69 images | 3D DESS WE | Femur/Tibia/Patella | DSC ± SD: 0.84 ± 0.04 (FC); 0.80 ± 0.04 (TC); 0.80 ± 0.04 (PC) |
| [ | 2011 | Graph cuts | 376 images | T2-weighted | Femur/Tibia | DSC: 0.936 (FB); 0.946 (TB) |
| [ | 2009 | Graph cuts | 8 images | DESS | Femur/Tibia/Patella | DSC: 0.961 (FB); 0.857 |
| [ | 2010 | Graph cuts | 30 images | T2 sagittal map | Femur/Tibia | Zijdenbos Similarity Index (ZSI): avg 95%; SD 0.028; median 0.96; min 0.87; max 0.98 |
| [ | 2020 | Graph cuts | 65 slices | T1 sequence | Femur/Tibia | Mean squared error (MSE): 0.19 |
| [ | 2014 | Multi-atlas | 100 training; | T1-weighted GRE FS | Femur/Tibia | ASD ± SD: 0.63 ± 0.17 mm (FB); |
| [ | 2015 | Multi-atlas, KNN | Samples from CCBR, OAI, and SKI10 | T1-weighted Turbo 3D | Tibia | DSC ± SD (training): 0.975 ± 0.010 (TB) |
| [ | 2011 | Ray casting | 161 samples | GRE FS | Femur/Tibia | DSC ± SD: 0.94 ± 0.05 (FB); 0.92 ± 0.07 (TB) |
| [ | 2007 | Region growing; level set | 2 samples | T1-weighted | Femur/Tibia/Patella | Sens: 97.05% (FB); 96.95% (TB); 92.69% (PB); Spec: 98.79% (FB); 98.33% (TB); |
| [ | 2017 | Level set; predefined | 8 samples | DESS | Femur/Tibia | DSC ± SD: 90.28 ± 2.33% |
| [ | 2005 | FLoG edge detector | 40 samples | GE Signa Horizon | Femur/Patella | The proposed method segments the femur and patella robustly even under poor imaging conditions. |
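Most results in the table above are reported as the Dice similarity coefficient (DSC), which measures the overlap between an automatic segmentation and a manual reference: DSC = 2|A∩B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect overlap). A minimal sketch of the metric on flattened binary masks (illustrative only, not code from any reviewed method):

```python
def dice(a, b):
    """Dice similarity coefficient between two flattened binary masks
    (lists of 0/1 values): DSC = 2|A∩B| / (|A| + |B|)."""
    inter = sum(x and y for x, y in zip(a, b))  # pixels labeled 1 in both masks
    total = sum(a) + sum(b)                     # total foreground pixels
    return 2.0 * inter / total if total else 1.0  # two empty masks agree fully

# Example: 2 overlapping pixels, 3 + 2 foreground pixels in total.
print(dice([1, 1, 1, 0], [1, 1, 0, 0]))  # 2*2/5 = 0.8
```

In practice the masks are 2D or 3D arrays; flattening them first, or using a vectorized implementation, gives the same value.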
Summary of deep learning and machine learning methods for knee bone segmentation and classification.
| Ref | Year | Data | Dataset | Feature Engineering | Learning Algorithm | Validation | Results |
|---|---|---|---|---|---|---|---|
| [ | 2019 | X-ray | OAI | ICA | Random forest; naïve Bayes | Leave-one-out (LOO) | 87.15% sensitivity; 82.98% accuracy; up to 80.65% specificity |
| [ | 2017 | MRI | From hospital | GLCM | SVM with linear kernel; SVM with RBF kernel; SVM with polynomial kernel | 147 training images | 95.45% accuracy; 95.45% accuracy; 87.8% accuracy |
| [ | 2018 | MRI | OAI | PCA | SVM | 10-fold cross-validation | For JSL grade prediction, the best performance was achieved by random forest (AUC = 0.785, F-measure = 0.743), versus the ANN (AUC = 0.695, F-measure = 0.796). |
| [ | 2016 | MRI | OAI | k-means clustering; neighborhood approximation forests | LOGISMOS | 108 baseline MRIs and 54 patients' 12-month follow-up scans | 4D cartilage surface positioning errors (in millimeters) |
| [ | 2018 | Pain scores and X-rays | OAI and MOST | PCA | LASSO regression | 10-fold cross-validation | AUC of 0.86 for radiographic progression |
| [ | 2018 | MRI | SKI10 | Not used | CNN | 3D-FSE images and T2 maps | ASD ± SD: 0.56 ± 0.12 (FB); 0.50 ± 0.14 (TB) |
| [ | 2019 | MRI | SKI10, OAI iMorphics, OAI ZIB | Not used | 2D/3D CNN combined with SSMs | 2-fold cross-validation | (i) 74.0 ± 7.7 total score |
| [ | 2020 | MRI | National Institutes of Health (NIH), SKI10 | Not used | HNN deep learning | 9-fold cross-validation | DSC ± SD: 0.972 ± 0.054 (FB); 0.947 ± 0.0113 (PB) |
| [ | 2019 | X-ray | Korea Centers for Disease Control and Prevention (KCDCP) | PCA | Deep neural network (DNN) | 66% train, 34% test | 76.8% AUC |
| [ | 2020 | X-ray | OAI, MOST | Not used | Ensemble CNN | 19,704 train; 11,743 test | 0.98 average precision |
| [ | 2017 | X-ray | OAI, MOST | FCN | CNN | 70% training, 30% testing | 60.3% accuracy (multi-class, Grades 0–4) |
| [ | 2018 | X-ray | OAI, MOST | FCN | CNN (ResNet-34) | 67% train, 11% validation, | 66.71% accuracy (multi-class, Grades 0–4) |
| [ | 2019 | Clinical data, | OAI, MOST | CNN | Gradient boosting machine (GBM) and logistic regression (LR) | OAI for training, MOST for testing; 5-fold CV | 0.79 accuracy |
| [ | 2019 | X-ray | OAI | Cascade | Deep neural network (DNN) | 10-fold cross-validation | 82.98% accuracy |
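Several rows above report sensitivity, specificity, and accuracy. All three follow directly from confusion-matrix counts (true/false positives and negatives). As a hedged illustration with made-up counts, not taken from any reviewed study:

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix
    counts: tp/fp = true/false positives, tn/fn = true/false negatives."""
    sensitivity = tp / (tp + fn)              # recall on the positive class
    specificity = tn / (tn + fp)              # recall on the negative class
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts: 80 true positives, 10 false positives,
# 90 true negatives, 20 false negatives.
sens, spec, acc = classification_metrics(80, 10, 90, 20)
print(sens, spec, acc)  # 0.8, 0.9, 0.85
```

Note that accuracy alone can be misleading on imbalanced KOA grade distributions, which is why many of the studies above also report AUC or F-measure.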