| Literature DB >> 36050434 |
Isaac Shiri1, Shayan Mostafaei2, Atlas Haddadi Avval3, Yazdan Salimi1, Amirhossein Sanaat1, Azadeh Akhavanallaf1, Hossein Arabi1, Arman Rahmim4,5, Habib Zaidi6,7,8,9.
Abstract
We aimed to construct a prediction model based on computed tomography (CT) radiomics features to classify COVID-19 patients into severe, moderate, mild, and non-pneumonic classes. A total of 1110 patients were studied from a publicly available dataset with 4-class severity scoring performed by a radiologist (based on CT images and clinical features). The entire lungs were segmented, followed by resizing, bin discretization, and radiomic feature extraction. We utilized two feature selection algorithms, namely bagging random forest (BRF) and multivariate adaptive regression splines (MARS), each coupled to a classifier, namely multinomial logistic regression (MLR), to construct multiclass classification models. The dataset was divided into 50% (555 samples) for training, 20% (223 samples) for validation, and 30% (332 samples) as an untouched test set. Subsequently, nested cross-validation was performed on the train/validation sets to select the features and tune the models. All predictive power indices were reported based on the test set. The performance of the multi-class models was assessed using precision, recall, F1-score, and accuracy based on the 4 × 4 confusion matrices. In addition, the areas under the receiver operating characteristic curves (AUCs) for multi-class classification were calculated and compared for both models. Using BRF, 23 radiomic features were selected: 11 from first-order, 9 from GLCM, 1 from GLRLM, 1 from GLDM, and 1 from shape features. Ten features were selected using the MARS algorithm, namely 3 from first-order, 1 from GLDM, 1 from GLRLM, 1 from GLSZM, 1 from shape, and 3 from GLCM features. Mean absolute deviation, skewness, and variance from first-order, flatness from shape, cluster prominence from GLCM, and Gray Level Non-Uniformity Normalized from GLRLM were selected by both the BRF and MARS algorithms.
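The data split (50%/20%/30%) and importance-based feature selection described above can be sketched in Python. This is only an illustrative analog with synthetic data: the authors used the "VSURF" R package for BRF selection, and all array shapes, hyperparameters, and labels below are assumptions.

```python
# Illustrative sketch of the study's data partitioning and bagged-random-forest
# feature ranking. Data are synthetic; only the sample counts (1110 patients,
# 555/223/332 split) and the number of kept features (23) follow the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1110, 40))      # 1110 patients x 40 radiomic features (synthetic)
y = rng.integers(0, 4, size=1110)    # 4-class severity label (synthetic)

# 50% training (555), 20% validation (223), 30% untouched test (332)
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=555, stratify=y, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, train_size=223, stratify=y_rest, random_state=0)

# Importance-based ranking with a bagged random forest (stand-in for VSURF);
# keep the 23 top-ranked features, as in the paper.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X_train, y_train)
selected = np.argsort(rf.feature_importances_)[::-1][:23]
print(len(X_train), len(X_val), len(X_test), len(selected))
```

In the study itself, feature selection and tuning were done with nested fivefold cross-validation on the train/validation sets only, so the test set stays untouched.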
All features selected by BRF or MARS were significantly associated with the four-class outcome as assessed within MLR (all p values < 0.05). BRF + MLR and MARS + MLR resulted in pseudo-R² prediction performances of 0.305 and 0.253, respectively. Meanwhile, there was a significant difference between the feature selection models by the likelihood ratio test (p value = 0.046). Based on the confusion matrices for the BRF + MLR and MARS + MLR algorithms, the precision was 0.856 and 0.728, the recall was 0.852 and 0.722, and the accuracy was 0.921 and 0.861, respectively. AUCs (95% CI) for multi-class classification were 0.846 (0.805–0.887) and 0.807 (0.752–0.861) for the BRF + MLR and MARS + MLR algorithms, respectively. Our models based on radiomic features, coupled with machine learning, were able to accurately classify patients according to the severity of pneumonia, thus highlighting the potential of this emerging paradigm in the prognostication and management of COVID-19 patients.
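The per-class and macro-averaged indices quoted above (precision, recall, F1-score) follow directly from a 4 × 4 confusion matrix. A minimal sketch, using a synthetic matrix rather than the study's actual counts:

```python
# Deriving macro-averaged precision, recall, and F1 from a 4x4 confusion
# matrix (rows = true class, columns = predicted class). The counts are
# synthetic placeholders, not the study's matrix.
import numpy as np

cm = np.array([[50,  3,  1,  0],
               [ 4, 60,  5,  1],
               [ 1,  6, 55,  2],
               [ 0,  1,  2, 40]])

tp = np.diag(cm).astype(float)
precision = tp / cm.sum(axis=0)                    # per class, over predicted counts
recall = tp / cm.sum(axis=1)                       # per class, over true counts
f1 = 2 * precision * recall / (precision + recall)

print(precision.mean().round(3), recall.mean().round(3), f1.mean().round(3))
```

Macro averaging (the unweighted mean over the four classes) matches the "Average/total" rows reported in the results tables below.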
Year: 2022 PMID: 36050434 PMCID: PMC9437017 DOI: 10.1038/s41598-022-18994-z
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. Different steps of the current study, including data acquisition, image segmentation using COLI-Net, image preprocessing and feature extraction, machine learning, and evaluation methods and metrics. GGO: ground glass opacities, T: Temperature, RR: Respiratory Rate, SpO2: Peripheral Capillary Oxygen Saturation, PaO2: Partial Pressure of Oxygen, FiO2: Fraction of Inspired Oxygen.
Figure 2. Examples of patient CT images belonging to different classes with different scores.
Features selected by Bagging Random Forests (“VSURF” R package) and multivariate adaptive regression splines (“earth” R package) for multi-class classification, using nested fivefold cross-validation based on the training set (50% of the samples, N = 555) and the validation set (20% of the samples, N = 223).
| Algorithm | Feature type | Selected variable | Relative importance value (%) |
|---|---|---|---|
| Bagging Random Forests | First Order | Mean Absolute Deviation | 80 |
| | First Order | Robust Mean Absolute Deviation | 72 |
| | First Order | Variance | 70 |
| | First Order | Interquartile Range | 68 |
| | First Order | Kurtosis | 62 |
| | First Order | Skewness | 61 |
| | First Order | Entropy | 42 |
| | First Order | 10Percentile | 40 |
| | First Order | 90Percentile | 36 |
| | First Order | Energy | 30 |
| | First Order | Mean | 20 |
| | GLCM | Correlation | 100 |
| | GLCM | Cluster Tendency | 88 |
| | GLCM | Sum Squares | 66 |
| | GLCM | Inverse Variance | 60 |
| | GLCM | Cluster Shade | 55 |
| | GLCM | Cluster Prominence | 54 |
| | GLCM | Joint Entropy | 52 |
| | GLCM | Idm | 48 |
| | GLCM | Id | 44 |
| | GLDM | Dependence Variance | 65 |
| | GLRLM | Gray Level Non-Uniformity Normalized | 51 |
| | Shape | Flatness | 18 |
| Multivariate Adaptive Regression Splines | First Order | Mean Absolute Deviation | 100 |
| | First Order | Skewness | 55 |
| | First Order | Variance | 11 |
| | GLCM | Correlation | 54 |
| | GLCM | Cluster Prominence | 47 |
| | GLCM | Difference Entropy | 36 |
| | GLDM | Gray Level Variance | 53 |
| | GLRLM | Gray Level Non-Uniformity Normalized | 10 |
| | GLSZM | Zone Entropy | 20 |
| | Shape | Flatness | 48 |
Relative importance values were calculated using the generalized cross-validation (GCV) criterion with normalization.
Figure 3. Examples of selected features (10Percentile from first order, Gray Level Non-Uniformity Normalized from GLRLM, Idm from GLCM, and Zone Entropy from GLSZM) in cases from different classes and different slices.
Multinomial logistic regression for the selected features (“mnlogit” R package) and the model fitness indices based on the testing set (N = 332).
| Algorithm | Feature type | Selected variable | Adj. p value | Pseudo-R² | AIC |
|---|---|---|---|---|---|
| Bagging Random Forests | First Order | Mean Absolute Deviation | < 0.001 | 0.305 | 782.6 |
| | First Order | Robust Mean Absolute Deviation | < 0.001 | | |
| | First Order | Variance | < 0.001 | | |
| | First Order | Interquartile Range | < 0.001 | | |
| | First Order | Kurtosis | < 0.001 | | |
| | First Order | Skewness | < 0.001 | | |
| | First Order | Entropy | 0.001 | | |
| | First Order | 10Percentile | 0.002 | | |
| | First Order | 90Percentile | 0.001 | | |
| | First Order | Energy | 0.005 | | |
| | First Order | Mean | 0.025 | | |
| | GLCM | Correlation | < 0.001 | | |
| | GLCM | Cluster Tendency | < 0.001 | | |
| | GLCM | Sum Squares | < 0.001 | | |
| | GLCM | Inverse Variance | < 0.001 | | |
| | GLCM | Cluster Shade | < 0.001 | | |
| | GLCM | Cluster Prominence | < 0.001 | | |
| | GLCM | Joint Entropy | < 0.001 | | |
| | GLCM | Id | 0.001 | | |
| | GLCM | Idm | 0.001 | | |
| | GLDM | Dependence Variance | < 0.001 | | |
| | GLRLM | Gray Level Non-Uniformity Normalized | 0.009 | | |
| | Shape | Flatness | < 0.001 | | |
| Multivariate Adaptive Regression Splines | First Order | Mean Absolute Deviation | < 0.001 | 0.253 | 972.8 |
| | First Order | Skewness | < 0.001 | | |
| | First Order | Variance | < 0.001 | | |
| | GLCM | Cluster Prominence | < 0.001 | | |
| | GLCM | Correlation | < 0.001 | | |
| | GLCM | Difference Entropy | < 0.001 | | |
| | GLDM | Gray Level Variance | < 0.001 | | |
| | GLRLM | Gray Level Non-Uniformity Normalized | < 0.001 | | |
| | GLSZM | Zone Entropy | < 0.001 | | |
| | Shape | Flatness | < 0.001 | | |
p values by Wald chi-square test; Adj. p value: p value adjusted by the Benjamini and Hochberg method. Statistical comparison between the two models showed a significant difference by the likelihood ratio test (p value = 0.046). AIC: Akaike information criterion.
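The fitness indices in this table (pseudo-R², AIC) and the likelihood ratio statistic can all be derived from model log-likelihoods. A sketch using McFadden's pseudo-R² definition; every log-likelihood value and the degrees-of-freedom choice below are placeholders, not the study's actual numbers:

```python
# Sketch of multinomial-model fitness statistics: McFadden pseudo-R^2, AIC,
# and a likelihood-ratio statistic. All log-likelihoods are placeholders.
from scipy.stats import chi2

ll_null = -460.0   # intercept-only multinomial model (placeholder)
ll_mars = -343.6   # MARS-selected feature model (placeholder)
ll_brf = -319.7    # BRF-selected feature model (placeholder)

pseudo_r2 = 1 - ll_brf / ll_null               # McFadden's pseudo-R^2
n_params = (23 + 1) * 3                        # (features + intercept) x (4 classes - 1)
aic = 2 * n_params - 2 * ll_brf                # Akaike information criterion

lr_stat = 2 * (ll_brf - ll_mars)               # likelihood-ratio statistic
p_value = chi2.sf(lr_stat, df=(23 - 10) * 3)   # df = parameter difference (assumption)

print(round(pseudo_r2, 3), round(aic, 1), round(lr_stat, 1))
```

One caveat: the two feature sets are not strictly nested, so the likelihood ratio comparison reported in the table is approximate by construction.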
Classification power indices (SD) based on the testing set (N = 332) with 1000 bootstrap samples, for each feature selection method.
| Algorithm | Class | Precision | Recall | F1-score | Accuracy | AUC (95% CI) |
|---|---|---|---|---|---|---|
| Bagging Random Forests | Class 1 | 0.881 (0.098) | 0.855 (0.085) | 0.868 (0.079) | 0.918 (0.109) | 0.846 (0.805–0.887) |
| | Class 2 | 0.800 (0.039) | 0.828 (0.037) | 0.812 (0.019) | 0.852 (0.049) | |
| | Class 3 | 0.864 (0.105) | 0.843 (0.079) | 0.853 (0.096) | 0.928 (0.117) | |
| | Class 4 | 0.882 (0.103) | 0.882 (0.088) | 0.882 (0.109) | 0.988 (0.119) | |
| | Average/total | 0.856 | 0.852 | 0.854 | 0.921 | |
| Multivariate Adaptive Regression Splines | Class 1 | 0.731 (0.099) | 0.760 (0.101) | 0.745 (0.089) | 0.837 (0.116) | 0.807 (0.752–0.861) |
| | Class 2 | 0.671 (0.039) | 0.688 (0.033) | 0.679 (0.026) | 0.750 (0.031) | |
| | Class 3 | 0.802 (0.119) | 0.734 (0.101) | 0.767 (0.098) | 0.888 (0.121) | |
| | Class 4 | 0.706 (0.109) | 0.706 (0.109) | 0.706 (0.109) | 0.970 (0.136) | |
| | Average/total | 0.728 | 0.722 | 0.724 | 0.861 | |
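The standard deviations in the table above come from 1000 bootstrap resamples of the test set. A minimal sketch of that procedure for accuracy, using synthetic labels and predictions (the error rate and seed are assumptions):

```python
# Bootstrap standard deviation of test-set accuracy: resample the 332 test
# cases with replacement 1000 times and take the SD of the metric. Labels
# and predictions here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 4, size=332)
# Predictions agree with the truth ~85% of the time (assumption)
y_pred = np.where(rng.random(332) < 0.85, y_true, rng.integers(0, 4, size=332))

accs = []
for _ in range(1000):
    idx = rng.integers(0, 332, size=332)         # sample with replacement
    accs.append(np.mean(y_true[idx] == y_pred[idx]))

print(np.mean(accs).round(3), np.std(accs).round(3))
```

The same resampling loop yields the SDs for precision, recall, and F1-score by swapping in the corresponding metric.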
Figure 4. Four-by-four confusion matrices for (a) Multivariate Adaptive Regression Splines (MARS) and (b) Bagging Random Forests (BRF).
Figure 5. ROC curves assessing the power of multi-class classification of the selected features for (a) Bagging Random Forests (AUC = 0.846) and (b) Multivariate Adaptive Regression Splines (AUC = 0.807). Statistical comparison of the ROC curves (“pROC” R package) indicated a significant difference (Z = 3.834, p value < 0.001).
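For readers reproducing the single multi-class AUC values quoted here, a sketch with scikit-learn using synthetic scores: the study used the "pROC" R package, whose multiclass AUC follows a pairwise (Hand and Till) definition closer to `multi_class="ovo"`, while the one-vs-rest macro average is another common choice. All scores and seeds below are assumptions.

```python
# Multi-class AUC from 4-class probability scores (synthetic data).
# "ovr" = one-vs-rest macro average; "ovo" = pairwise (Hand & Till style).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
y_true = rng.integers(0, 4, size=332)            # 4-class labels, test-set size
scores = rng.random((332, 4))
scores[np.arange(332), y_true] += 0.8            # bias scores toward the true class
scores /= scores.sum(axis=1, keepdims=True)      # rows must sum to 1

auc_ovr = roc_auc_score(y_true, scores, multi_class="ovr", average="macro")
auc_ovo = roc_auc_score(y_true, scores, multi_class="ovo", average="macro")
print(round(auc_ovr, 3), round(auc_ovo, 3))
```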