Qin Li, Qin Xiao, Jianwei Li, Zhe Wang, He Wang, Yajia Gu.
Abstract
BACKGROUND: To assess the value of radiomics based on multiphase contrast-enhanced magnetic resonance imaging (CE-MRI) for early prediction of pathological complete response (pCR) to neoadjuvant therapy (NAT) in patients with human epidermal growth factor receptor 2 (HER2)-positive invasive breast cancer.
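The extraction step behind such a study can be sketched as follows. This is a minimal illustration, assuming NIfTI-format images and tumor masks and the open-source pyradiomics package; the authors' actual extraction software and file layout are not stated in this excerpt, and all file names below are hypothetical.

```python
# Minimal sketch: radiomics feature extraction from one CE-MRI phase.
# Assumption: pyradiomics stands in for whatever extraction software the
# authors actually used; file names are hypothetical placeholders.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()  # shape, first-order, and texture feature classes

# One feature vector per patient and per contrast-enhanced phase (e.g. CE1, CEm)
features = extractor.execute("patient001_CE1.nii.gz", "patient001_mask.nii.gz")

# Keep the numeric features, dropping pyradiomics' diagnostic metadata entries
numeric = {k: v for k, v in features.items() if not k.startswith("diagnostics")}
print(len(numeric), "features extracted")
```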
Keywords: breast cancer; machine learning; magnetic resonance imaging; neoadjuvant therapy; radiomics
Year: 2021 PMID: 34234550 PMCID: PMC8253937 DOI: 10.2147/CMAR.S304547
Source DB: PubMed Journal: Cancer Manag Res ISSN: 1179-1322 Impact factor: 3.989
Figure 1. Flow chart of patient recruitment in this study.
Figure 2. The source images of the radiomics features.
Clinical and Morphological Characteristics of the pCR and Non-pCR Groups

| Characteristics | pCR (n=54) | Non-pCR (n=73) | P value |
|---|---|---|---|
| Age, mean ± SD, years | 52.80±9.24 | 50.62±11.19 | 0.309 |
| Menopausal status | | | 0.278 |
| Premenopausal | 15 (27.78%) | 26 (35.61%) | |
| Postmenopausal | 39 (72.22%) | 47 (64.39%) | |
| Enhancement pattern | | | 0.299 |
| Mass | 36 (66.67%) | 53 (72.60%) | |
| Non-mass | 18 (33.33%) | 20 (27.40%) | |
| Multifocal or multicentric | | | 0.242 |
| Present | 42 (77.78%) | 53 (72.60%) | |
| Absent | 12 (22.22%) | 20 (27.40%) | |
| Pre-NAC T stage | | | 0.468 |
| T2 | 40 (74.07%) | 53 (72.60%) | |
| T3 | 14 (25.93%) | 20 (27.40%) | |
| Pre-NAC N stage | | | 0.211 |
| N0 | 16 (29.63%) | 23 (31.50%) | |
| N1 | 29 (53.70%) | 38 (52.05%) | |
| N2 | 5 (9.26%) | 2 (2.74%) | |
| N3 | 4 (7.41%) | 10 (13.71%) | |

Abbreviations: NAC, neoadjuvant chemotherapy; pCR, pathological complete response; SD, standard deviation.
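The group comparisons behind this table can be sketched with SciPy as below. The exact test variants the authors used (pooled vs. Welch t-test, chi-square with or without continuity correction) are not stated in this excerpt, so the computed P values may differ slightly from those reported.

```python
# Minimal sketch of the clinical-characteristics comparisons, using the
# summary statistics reported in the table above.
from scipy import stats

# Age: two-sample t-test reconstructed from mean, SD, and n per group
t, p_age = stats.ttest_ind_from_stats(mean1=52.80, std1=9.24, nobs1=54,
                                      mean2=50.62, std2=11.19, nobs2=73)
print(f"age: t={t:.2f}, p={p_age:.3f}")

# Menopausal status: chi-square test on the 2x2 contingency table
table = [[15, 39],   # pCR: premenopausal, postmenopausal
         [26, 47]]   # non-pCR: premenopausal, postmenopausal
chi2, p_meno, dof, _ = stats.chi2_contingency(table)
print(f"menopausal status: chi2={chi2:.2f}, p={p_meno:.3f}")
```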
Accuracy, Sensitivity, Specificity, and AUC of Machine Learning Classifiers Based on the Six Optimal Features from CE1
| Classifier Model | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| Fine Tree | 59.1% | 48% | 67% | 0.61 |
| Medium Tree | 59.1% | 48% | 67% | 0.62 |
| Coarse Tree | 60.6% | 31% | 82% | 0.61 |
| Linear Discriminant | 68.5% | 52% | 81% | 0.69 |
| Quadratic Discriminant | 66.1% | 43% | 84% | 0.66 |
| Logistic Regression | 68.5% | 52% | 81% | 0.69 |
| Linear SVM | 65.4% | 41% | 84% | 0.69 |
| Quadratic SVM | 59.1% | 33% | 78% | 0.57 |
| Cubic SVM | 49.6% | 48% | 51% | 0.49 |
| Fine Gaussian SVM | 59.8% | 44% | 71% | 0.64 |
| Medium Gaussian SVM | 62.2% | 35% | 82% | 0.70 |
| Coarse Gaussian SVM | 66.1% | 29% | 95% | 0.68 |
| Fine KNN | 60.6% | 54% | 66% | 0.60 |
| Medium KNN | 59.8% | 44% | 71% | 0.67 |
| Coarse KNN | 57.5% | 0% | 100% | 0.64 |
| Cosine KNN | 63.8% | 59% | 67% | 0.62 |
| Cubic KNN | 59.8% | 44% | 71% | 0.67 |
| Weighted KNN | 60.6% | 52% | 67% | 0.64 |
| Boosted Trees | 61.4% | 54% | 67% | 0.67 |
| Bagged Trees | 59.1% | 52% | 64% | 0.65 |
| Subspace Discriminant | 68.5% | 52% | 81% | 0.69 |
| Subspace KNN | 60.6% | 54% | 66% | 0.60 |
| RUS Boosted Trees | 63.0% | 52% | 71% | 0.75 |
Abbreviations: KNN, k-nearest neighbor; SVM, support vector machine; RUS, random undersampling; AUC, area under the curve; CE, contrast enhancement.
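Both classifier tables rest on a set of six selected features. The selection method is not given in this excerpt; the sketch below uses L1-penalized logistic regression (LASSO), a common choice in radiomics, purely as an illustrative stand-in, with a placeholder feature matrix.

```python
# Minimal sketch: reduce a radiomics feature matrix to six features.
# Assumption: LASSO stands in for the authors' unspecified selection method;
# X_all and y below are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_all = rng.normal(size=(127, 300))   # placeholder radiomics feature matrix
y = np.array([1] * 54 + [0] * 73)     # 54 pCR, 73 non-pCR, as in the tables

X_std = StandardScaler().fit_transform(X_all)

# Tighten the L1 penalty (decrease C) until at most six coefficients survive
for C in np.logspace(1, -3, 200):
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    lasso.fit(X_std, y)
    n_kept = np.count_nonzero(lasso.coef_)
    if n_kept <= 6:
        break

selected = np.flatnonzero(lasso.coef_[0])
print(f"C={C:.4f}, kept {n_kept} features at indices {selected}")
```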
Accuracy, Sensitivity, Specificity, and AUC of Machine Learning Classifiers Based on the Six Optimal Features from CEm
| Classifier Model | Accuracy | Sensitivity | Specificity | AUC |
|---|---|---|---|---|
| Fine Tree | 69.3% | 61% | 75% | 0.74 |
| Medium Tree | 69.3% | 61% | 75% | 0.74 |
| Coarse Tree | 77.2% | 72% | 81% | 0.75 |
| Linear Discriminant | 79.5% | 74% | 84% | 0.84 |
| Quadratic Discriminant | 78.7% | 72% | 84% | 0.84 |
| Logistic Regression | 79.5% | 74% | 84% | 0.84 |
| Linear SVM | 79.5% | 74% | 84% | 0.84 |
| Quadratic SVM | 70.1% | 61% | 77% | 0.74 |
| Cubic SVM | 40.9% | 20% | 56% | 0.30 |
| Fine Gaussian SVM | 74.8% | 69% | 79% | 0.77 |
| Medium Gaussian SVM | 78.7% | 70% | 85% | 0.79 |
| Coarse Gaussian SVM | 78.7% | 72% | 84% | 0.85 |
| Fine KNN | 67.7% | 61% | 73% | 0.67 |
| Medium KNN | 74.0% | 59% | 85% | 0.81 |
| Coarse KNN | 57.5% | 0% | 100% | 0.78 |
| Cosine KNN | 78.7% | 78% | 79% | 0.76 |
| Cubic KNN | 74.0% | 59% | 85% | 0.81 |
| Weighted KNN | 66.9% | 57% | 74% | 0.68 |
| Boosted Trees | 67.7% | 61% | 73% | 0.76 |
| Bagged Trees | 67.7% | 61% | 73% | 0.76 |
| Subspace Discriminant | 79.5% | 74% | 84% | 0.84 |
| Subspace KNN | 67.7% | 61% | 73% | 0.67 |
| RUS Boosted Trees | 68.5% | 65% | 71% | 0.75 |
Abbreviations: KNN, k-nearest neighbor; SVM, support vector machine; RUS, random undersampling; AUC, area under the curve; CE, contrast enhancement.
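The model names in both tables match the preset classifiers of MATLAB's Classification Learner. A minimal scikit-learn analogue of the benchmarking procedure is sketched below; the models shown are rough equivalents rather than the authors' exact configurations, and `X` and `y` are synthetic placeholders for the six selected features and the pCR labels.

```python
# Minimal sketch: cross-validated comparison of several classifier families
# on six features, reporting the same metrics as the tables above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
X = rng.normal(size=(127, 6))          # placeholder for the six selected features
y = np.array([1] * 54 + [0] * 73)      # 54 pCR, 73 non-pCR

models = {
    "Linear Discriminant": LinearDiscriminantAnalysis(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Linear SVM": SVC(kernel="linear", probability=True),
    "Medium KNN": KNeighborsClassifier(n_neighbors=10),
    "Fine Tree": DecisionTreeClassifier(),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    # Out-of-fold probabilities so every patient is scored by an unseen model
    proba = cross_val_predict(model, X, y, cv=cv, method="predict_proba")[:, 1]
    pred = (proba >= 0.5).astype(int)
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    acc = (tp + tn) / len(y)
    sens = tp / (tp + fn)              # sensitivity: pCR correctly identified
    spec = tn / (tn + fp)              # specificity: non-pCR correctly identified
    auc = roc_auc_score(y, proba)
    print(f"{name}: acc={acc:.1%} sens={sens:.0%} spec={spec:.0%} AUC={auc:.2f}")
```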
Figure 3. ROC curve of the linear SVM for predicting pCR versus non-pCR.
Figure 4. ROC curve of logistic regression for predicting pCR versus non-pCR.
Figure 5. Rad-score box plots for pCR classification based on CE1 and CEm.
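Figures 3-5 can be reproduced in outline as below: an ROC curve for one classifier and a rad-score box plot by response group. Taking the rad-score to be the cross-validated predicted probability of pCR is an assumption; the paper's rad-score construction is not described in this excerpt, and `proba` below is a random placeholder standing in for the scores from the benchmarking sketch above.

```python
# Minimal sketch of the Figure 3-5 plots: ROC curve plus rad-score box plot.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y = np.array([1] * 54 + [0] * 73)      # 54 pCR, 73 non-pCR
proba = rng.uniform(size=127)          # placeholder rad-scores (pCR probabilities)

fpr, tpr, _ = roc_curve(y, proba)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))

# ROC curve with a diagonal chance line, as in Figures 3 and 4
ax1.plot(fpr, tpr, label=f"AUC = {roc_auc_score(y, proba):.2f}")
ax1.plot([0, 1], [0, 1], linestyle="--", color="gray")
ax1.set_xlabel("1 - Specificity")
ax1.set_ylabel("Sensitivity")
ax1.set_title("ROC for pCR prediction")
ax1.legend()

# Rad-score distribution per response group, as in Figure 5
ax2.boxplot([proba[y == 1], proba[y == 0]])
ax2.set_xticklabels(["pCR", "non-pCR"])
ax2.set_ylabel("Rad-score")
ax2.set_title("Rad-score by response group")

plt.tight_layout()
plt.show()
```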