Abstract
Mammography is the primary imaging modality for breast cancer detection; however, a high level of expertise is needed for its interpretation. To overcome this difficulty, artificial intelligence (AI) algorithms for breast cancer detection have recently been investigated. In this review, we describe the characteristics of AI algorithms compared to conventional computer-aided diagnosis software and share our thoughts on the best methods to develop and validate the algorithms. Additionally, several AI algorithms for triaging screening mammograms, breast density assessment, and prediction of breast cancer risk have been introduced. Finally, we emphasize the need for interest and guidance from radiologists regarding AI research in mammography, considering the possibility that AI will be introduced into clinical practice in the near future.
Year: 2021 PMID: 36237466 PMCID: PMC9432399 DOI: 10.3348/jksr.2020.0205
Source DB: PubMed Journal: Taehan Yongsang Uihakhoe Chi ISSN: 1738-2637
Fig. 1 Development of deep learning models. The first step is to collect a large-scale dataset, followed by dataset annotation by radiologists, to improve the utility of the dataset. Deep learning models are trained using the collected data and annotations.
BI-RADS = Breast Imaging Reporting and Data System, IDC = invasive ductal carcinoma, ResNet-34 w/BIN = ResNet-34 with Batch Instance Normalization
Fig. 2 Process for clinical validation of deep learning algorithms.
RCT = randomized controlled trial
Fig. 3 Visualization methods of deep learning models. The suspected region is highlighted using a color map (left) or contour lines (right).
Summary of Results for AI Applications in Digital Mammography
| References | Comparison | Cases | N* | Sensitivity (%), test | Sensitivity (%), control | Specificity (%), test | Specificity (%), control | ROC AUC, test | ROC AUC, control |
|---|---|---|---|---|---|---|---|---|---|
| Rodríguez-Ruiz et al. (2019) | Reader study: reader + AI (test) vs. reader (control) | Cancer 100, non-cancer 140 | 14 | 86 | 83 | 79 | 77 | 0.89 | 0.87 |
| Wu et al. (2020) | Reader study: reader + AI (test) vs. reader (control) | Cancer 62, non-cancer 658 | 14 | - | - | - | - | 0.891 | 0.876 |
| McKinney et al. (2020) | Historical comparison: AI (test) vs. original report (control) | Total = 25856, cancer = 414 (UK) | - | 65 | 63 | 94 | 93 | 0.889 | - |
| | | Total = 3097, cancer = 686 (US) | - | 56 | 48 | 87 | 80 | 0.811 | - |
| | Reader study: AI (test) vs. reader (control) | Cancer 113, non-cancer 352 | 6 | - | - | - | - | 0.740 | 0.625 |
| Kim et al. (2020) | Reader study: reader + AI (test) vs. reader (control) | Cancer 160, non-cancer 160 | 14 | 75 | 85 | 72 | 75 | 0.881 | 0.81 |
| Salim et al. (2020) | Historical comparison: AI (test) vs. original report (control) | Cancer 739, non-cancer 8066 | - | 82 (AI-1)† | 77 | 96.6 (AI-1) | 97 | - | - |
| | | | | 67 (AI-2) | | 96.6 (AI-2) | | | |
| | | | | 67 (AI-3) | | 96.6 (AI-3) | | | |
*Number of readers.
†Three AI algorithms (AI-1, AI-2, AI-3) were tested.
AI = artificial intelligence, AUC = area under the curve, ROC = receiver operating characteristic
Fig. 4 Diagram illustrating potential scenarios for triaging mammograms in breast cancer screening. In the standard scenario, radiologists read all mammograms.
A. In the rule-out scenario, radiologists read only mammograms above a rule-out threshold.
B. In the double reading scenario, mammograms below a certain threshold are read by one radiologist, and mammograms above the threshold are interpreted by two radiologists.
C. In the rule-in scenario, mammograms are triaged into an enhanced assessment when the score is above a rule-in threshold (after negative double reading by radiologists).
AI = artificial intelligence, MG = mammography