Karol Borkowski, Cristina Rossi, Alexander Ciritsis, Magda Marcon, Patryk Hejduk, Sonja Stieb, Andreas Boss, Nicole Berger.
Abstract
Marked enhancement of the fibroglandular tissue on contrast-enhanced breast magnetic resonance imaging (MRI) may affect lesion detection and classification and is suggested to be associated with a higher risk of developing breast cancer. Background parenchymal enhancement (BPE) is qualitatively classified according to the BI-RADS atlas into the categories "minimal," "mild," "moderate," and "marked." The purpose of this study was to train a deep convolutional neural network (dCNN) for standardized and automatic classification of BPE categories.

This IRB-approved retrospective study included 11,769 single MR images from 149 patients. The MR images were derived from the subtraction between the first post-contrast volume and the native T1-weighted images. A hierarchical approach was implemented, relying on 2 dCNN models: one for the detection of MR slices imaging breast tissue and one for BPE classification. Data annotation was performed by 2 board-certified radiologists, and the consensus of the 2 radiologists was chosen as the reference for BPE classification. The clinical performances of the single readers and of the dCNN were statistically compared using the quadratic Cohen's kappa.

Slices depicting the breast were classified with training, validation, and real-world (test) accuracies of 98%, 96%, and 97%, respectively. Across the 4 classes, BPE classification reached mean accuracies of 74% for the training, 75% for the validation, and 75% for the real-world dataset. Compared with the reference, the inter-reader reliabilities for the radiologists were 0.780 (reader 1) and 0.679 (reader 2), whereas the reliability for the dCNN model was 0.815.

Automatic classification of BPE can be performed with high accuracy and may support the standardization of tissue classification in breast MRI.
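The reader comparison above uses the quadratic (weighted) Cohen's kappa, which penalizes disagreements on the ordinal BPE scale by the squared distance between categories. A minimal sketch of that statistic (not the authors' code; labels 0-3 stand for minimal through marked):

```python
import numpy as np

def quadratic_weighted_kappa(a, b, n_classes=4):
    """Quadratic Cohen's kappa for two raters of ordinal labels 0..n_classes-1."""
    a, b = np.asarray(a), np.asarray(b)
    # Observed confusion matrix between the two raters.
    O = np.zeros((n_classes, n_classes))
    for i, j in zip(a, b):
        O[i, j] += 1
    # Quadratic disagreement weights: (i - j)^2.
    idx = np.arange(n_classes)
    w = np.square(idx[:, None] - idx[None, :])
    # Expected confusion matrix under rater independence (outer product of marginals).
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (w * O).sum() / (w * E).sum()
```

Perfect agreement yields 1.0 and chance-level agreement yields 0; equivalently, `sklearn.metrics.cohen_kappa_score(a, b, weights="quadratic")` computes the same quantity.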
Year: 2020 PMID: 32702902 PMCID: PMC7373599 DOI: 10.1097/MD.0000000000021243
Source DB: PubMed Journal: Medicine (Baltimore) ISSN: 0025-7974 Impact factor: 1.817
Figure 1The catalog structure corresponding to the breast detection model (left) and the BPE model (right).
Accuracy, precision, recall, and the F1-score of the breast detection model evaluated on the real-world data.
Accuracy, precision, recall, and the F1-score of the BPE (background parenchymal enhancement) class model evaluated on the real-world data.
Figure 2The loss function (bottom) and accuracy (top) plots for the training (red) and validation (blue) set depicting the learning process of the breast detection model (left) and BPE model (right).
Figure 3A confusion matrix for the validation of the breast detection model using the real-world dataset.
Figure 4A confusion matrix for the validation of the BPE class model using the real-world dataset.
Figure 5An exemplary input image with the class activation map superimposed. The dark red regions correspond to the areas that contributed most to the final model prediction.
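The heat map in Figure 5 is a class activation map. A generic CAM sketch in the style of Zhou et al., assuming a network that ends in global average pooling followed by a dense layer (an assumption for illustration, not the authors' implementation):

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: weight the last conv layer's feature maps by the dense-layer
    weights of the chosen class, sum over channels, normalize to [0, 1].
    feature_maps: (H, W, C) activations; fc_weights: (C, n_classes)."""
    cam = feature_maps @ fc_weights[:, class_idx]  # (H, W) weighted sum
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()                           # scale to [0, 1] for display
    return cam
```

The normalized map is typically upsampled to the input resolution and overlaid on the image with a red color map, as in the figure.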
Figure 6The values of the Cohen's kappa calculated for the predictions of the model, the answers of both human readers, and the consensus decision, in each possible pairing.
Statistics of the answers of both radiologists and the model.
Figure 7The values of the Cohen's kappa for both radiologists (blue and red) and the model (green), calculated for each BPE class separately. All values were computed with respect to the reference.