Liang Chen, Ya Shen, Xiao Huang, Hua Li, Jian Li, Ruili Wei, Weihua Yang.
Abstract
Aim: The purpose of this work was to develop and evaluate magnetic resonance imaging (MRI)-based radiomics for differentiation of orbital cavernous hemangioma (OCH) and orbital schwannoma (OSC).
Keywords: cavernous hemangioma; machine learning; orbit; radiomics; schwannoma
Year: 2021 PMID: 34977096 PMCID: PMC8716692 DOI: 10.3389/fmed.2021.795038
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
Figure 1 The recruitment of patients and design of this study. OCH, orbital cavernous hemangioma; OSC, orbital schwannoma.
Figure 2 Example images of OCH and OSC. Top left is a T1-weighted image of a patient with OCH; top right is a T2-weighted image of the same patient. Bottom left is a T1-weighted image of a patient with OSC; bottom right is a T2-weighted image of the same patient.
Patient demographic information.

| Characteristic | OCH (n = 40) | OSC (n = 18) | P-value |
|---|---|---|---|
| Gender | | | |
| Female | 24 (60%) | 8 (44%) | |
| Male | 16 (40%) | 10 (56%) | 0.41 |
| Age | 49 (40–57) | 47 (41–50) | 0.62 |
| Involvement | | | |
| Left | 20 (50%) | 8 (44%) | |
| Right | 20 (50%) | 10 (56%) | 0.91 |
| Intraconal | 39 | 18 | |
| Extraconal | 1 | 0 | |
| Tumor | | | |
| Size (mm³) | 4776.07 | 7602.30 | 0.03 |
| Sphericity | 0.70 (0.69–0.74) | 0.66 (0.64–0.69) | 0.11 |
| T1WI mean signal intensity | 573.43 (358.29–765.61) | 711.15 (400.96–917.29) | 0.30 |
| T2WI mean signal intensity | 662.62 (574.92–769.09) | 792.05 (608.97–971.64) | 0.15 |

Tumor size was calculated from the ROI. Tumor sphericity takes values between 0 and 1; the closer the value is to 1, the more spherical the tumor. All values of tumor size, sphericity, and signal intensity were taken from the extracted features.
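The sphericity reported in the table follows the standard radiomics shape definition, which compares the tumor's surface area to that of a sphere of equal volume. A minimal sketch, assuming volume and surface area have already been measured from the segmented ROI (the variable names here are illustrative, not from the paper):

```python
import math

def sphericity(volume_mm3: float, surface_area_mm2: float) -> float:
    """Radiomics shape sphericity: pi^(1/3) * (6V)^(2/3) / A.

    Equals 1 for a perfect sphere and decreases as the shape
    deviates from a sphere.
    """
    return (math.pi ** (1 / 3)) * (6 * volume_mm3) ** (2 / 3) / surface_area_mm2

# Sanity check with a sphere of radius 10 mm: V = 4/3*pi*r^3, A = 4*pi*r^2.
r = 10.0
v = 4 / 3 * math.pi * r ** 3
a = 4 * math.pi * r ** 2
print(round(sphericity(v, a), 6))  # 1.0 for a perfect sphere
```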
Figure 3 Diagram of the feature selection process. Images (A,E) are correlation heat maps demonstrating correlations between features: the deeper the color, the stronger the correlation, with red indicating negative correlation and blue positive correlation. Image (B) shows the results of the LASSO regression: with increasing penalty parameter, the coefficients of the features approach zero and finally converge on an optimal solution. Image (C) shows the results of LASSO-CV; the right dotted line indicates the standard error of the minimum mean square error (the left dotted line) and corresponds to the feature number on the top coordinate. Image (D) shows the RFE-CV, which achieved its highest score with 11 features remaining. Images (F–H) show the results of the PCA. Ten recombined dimensions are shown, with the first five explaining 90% of the original data [image (F)]. Image (G) shows the feature distributions of the first five dimensions. We finally selected the features whose contribution was higher than average (the dotted line) on Dimension 1 and Dimension 2 [image (H)].
The features selected for model construction.

| Sequence | Feature |
|---|---|
| T1 | Image_original_Maximum |
| | glcm_JointEnergy |
| | gldm_DependenceEntropy |
| | glszm_SmallAreaLowGrayLevelEmphasis |
| T2 | glcm_Imc1 |
| | glcm_MCC |
| | gldm_DependenceEntropy |
| T1 + T2 | shape_Maximum2DdiameterColumn |
| | shape_SurfaceVolumeRatio |
| | glcm_DifferenceVariance |
| | glcm_MCC |
| | glcm_Imc2 |
| | glcm_SumSquares |

Bold font indicates the universal features used for all three sequences. glcm, gray level co-occurrence matrix; gldm, gray level dependence matrix; glszm, gray level size zone matrix; Imc, informational measure of correlation; MCC, maximal correlation coefficient.
The ACC and AUC of each model.

| Sequence | Model | ACC | AUC |
|---|---|---|---|
| T1 | LR | 77% (67%~83%) | 91% (84%~97%) |
| | SVM | 86% (82%~95%) | 93% (89%~98%) |
| | DT | 85% (82%~92%) | 97% (95%~99%) |
| | RF | 83% (75%~92%) | 96% (94%~98%) |
| T2 | LR | | |
| | SVM | | |
| | DT | 89% (83%~92%) | 97% (96%~99%) |
| | RF | 89% (83%~100%) | 97% (94%~100%) |
| T1 + T2 | LR | 88% (84%~91%) | 85% (84%~86%) |
| | SVM | | |
| | DT | 83% (79%~87%) | 96% (95%~97%) |
| | RF | 88% (83%~92%) | 97% (95%~98%) |

Models with an ACC over 90% are marked in bold font. ACC, accuracy; AUC, area under the curve; LR, logistic regression; SVM, support vector machine; DT, decision tree; RF, random forest.
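Benchmarking the four classifier families in the table against the same feature matrix is a standard cross-validation exercise. A hedged sketch with scikit-learn on synthetic data (the study's cohort and selected features are not public; sizes here are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for a selected radiomics feature set (not the study's data).
X, y = make_classification(n_samples=58, n_features=11, n_informative=6,
                           random_state=0)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
}

# Report mean cross-validated accuracy (ACC) and ROC AUC for each model.
for name, model in models.items():
    scores = cross_validate(model, X, y, cv=5, scoring=("accuracy", "roc_auc"))
    print(f"{name}: ACC={scores['test_accuracy'].mean():.0%} "
          f"AUC={scores['test_roc_auc'].mean():.0%}")
```

The paper's intervals were presumably obtained from repeated resampling; a single 5-fold pass as above yields point estimates only.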
Figure 4 Results of the logistic regression model applied to T2WI. The feature data were standardized with zero-mean normalization. The nomogram predicts the odds of OSC (rather than OCH) from the three features. The bottom images show the AUCs of the logistic regression model for the training set (left) and validation set (right).
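The caption's two preprocessing and modeling steps, zero-mean normalization followed by logistic regression whose output is read as the odds of OSC, can be sketched as follows. The feature matrix and labels are synthetic placeholders (three columns, mirroring the three nomogram features), not the study's data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# Hypothetical T2 feature matrix (58 patients x 3 features) and binary labels
# (1 = OSC, 0 = OCH); generated only for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(58, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=58) > 0).astype(int)

# Zero-mean normalization (subtract mean, divide by standard deviation),
# then fit logistic regression on the standardized features.
X_std = StandardScaler().fit_transform(X)
clf = LogisticRegression().fit(X_std, y)

# The nomogram's output corresponds to the predicted probability of OSC,
# convertible to odds via p / (1 - p).
p = clf.predict_proba(X_std)[:, 1]
odds = p / (1 - p)
print(odds[:3])
```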