Aimilia Gastounioti, Shyam Desai, Vinayak S Ahluwalia, Emily F Conant, Despina Kontos.
Abstract
BACKGROUND: Improved breast cancer risk assessment models are needed to enable personalized screening strategies that achieve better harm-to-benefit ratio based on earlier detection and better breast cancer outcomes than existing screening guidelines. Computational mammographic phenotypes have demonstrated a promising role in breast cancer risk prediction. With the recent exponential growth of computational efficiency, the artificial intelligence (AI) revolution, driven by the introduction of deep learning, has expanded the utility of imaging in predictive models. Consequently, AI-based imaging-derived data has led to some of the most promising tools for precision breast cancer screening. MAIN BODY: This review aims to synthesize the current state-of-the-art applications of AI in mammographic phenotyping of breast cancer risk. We discuss the fundamentals of AI and explore the computing advancements that have made AI-based image analysis essential in refining breast cancer risk assessment. Specifically, we discuss the use of data derived from digital mammography as well as digital breast tomosynthesis. Different aspects of breast cancer risk assessment are targeted including (a) robust and reproducible evaluations of breast density, a well-established breast cancer risk factor, (b) assessment of a woman's inherent breast cancer risk, and (c) identification of women who are likely to be diagnosed with breast cancers after a negative or routine screen due to masking or the rapid and aggressive growth of a tumor. Lastly, we discuss AI challenges unique to the computational analysis of mammographic imaging as well as future directions for this promising research field.Entities:
Keywords: Artificial intelligence; Breast cancer; Breast cancer risk; Breast density; Breast tomosynthesis; Deep learning; Digital mammography; Mammographic density; Mammographic imaging
Year: 2022 PMID: 35184757 PMCID: PMC8859891 DOI: 10.1186/s13058-022-01509-z
Source DB: PubMed Journal: Breast Cancer Res ISSN: 1465-5411 Impact factor: 8.408
Fig. 1 Diagram explaining the relationship between the different techniques in the field of artificial intelligence
Representative studies in AI-enabled breast density evaluation from mammographic images
| Study | Image format | # images (# women) | Vendors (# sites) | Model architecture | Output density measure | Density maps | Model performance |
|---|---|---|---|---|---|---|---|
| Roth et al. [ | FFDM (Processed) | 109,849 images (N/R) | N/R (7 sites) | DenseNet-121 | BI-RADS density | No | Four-class |
| Dontchos et al. [ | FFDM (Processed) | N/R (2174 women) | Hologic (1 site) | ResNet-18 | BI-RADS density (13 interpreting radiologists) | No | Dense versus non-dense Acc: 94.9% (academic radiologists) 90.7% (community radiologists) |
| Matthews et al. [ | FFDM (Processed) and SM | | Hologic (2 sites) | ResNet-34 | BI-RADS density (11 interpreting radiologists) | No | Four-class |
| Saffari et al. [ | FFDM | 410 images (115 women) | Siemens (1 site) | cGAN, CNN | BI-RADS density | Yes | DSC = 98% in dense tissue segmentation |
| Deng et al. [ | FFDM | 18,157 images (women) | Hologic (1 site) | SE-Attention CNN | BI-RADS density | No | Acc = 92.17% |
| Perez Benito et al. [ | FFDM (Processed) | 6680 images (1785 women) | Fujifilm, Hologic, Siemens, GE, IMS (11 sites) | ECNN | BI-RADS density (2 interpreting radiologists) | Yes | DSC = 0.77 |
| Chang et al. [ | FFDM (Raw) | 108,230 images (21,759 women) | GE, Kodak, Fischer (33 sites) | ResNet-50 | BI-RADS density (92 interpreting radiologists) | No | Four-class |
| Ciritsis et al. [ | FFDM | 20,578 images (5221 women) | N/R (1 site) | CNN | BI-RADS density (consensus of 2 interpreting radiologists) | No | AUC = 0.98 for MLO views AUC = 0.97 for CC views |
| Lehman et al. [ | FFDM (Processed) | 58,894 images (39,272 women) | Hologic (1 site) | ResNet-18* | BI-RADS density (12 interpreting radiologists) | No | Four-class |
| Mohamed et al. [ | FFDM (Processed) | 22,000 images (1427 women) | Hologic (1 site) | CNN AlexNet | BI-RADS density | No | AUC = 0.94 |
| Mohamed et al. [ | FFDM (Processed) | 15,415 images (963 women) | Hologic (1 site) | CNN AlexNet | BI-RADS density | No | AUC = 0.95 for MLO views AUC = 0.88 for CC views |
| Haji Maghsoudi et al. [ | FFDM (Raw) | 15,661 images (4437 women) | Hologic (2 Sites) | U-net* | APD% | Yes | DSC = 92.5% in breast segmentation APDdiff = 4.2–4.9% |
| Li et al. [ | FFDM (Raw) | 661 images (444 women) | GE (1 site) | CNN | APD% | Yes | DSC = 76% in dense tissue segmentation |
| Kallenberg et al. [ | FFDM (Raw) | N/R (493 women) | Hologic (1 site) | CSAE | APD% | Yes | DSC = 63% in dense tissue segmentation |
The table describes the development image dataset used in each study, including format of mammographic images, sample size, and vendors, as well as methodological details for the AI model (output breast density measure, model architecture and availability of spatial density maps) and the model performance in breast density evaluation
FFDM full-field digital mammography, SM 2D synthetic mammographic image acquired with digital breast tomosynthesis, APD% area percent density, MLO medio-lateral oblique, CC cranio-caudal, cGAN conditional generative adversarial network, CNN convolutional neural network, ECNN entirely convolutional neural network, CSAE convolutional sparse autoencoder, DSC Dice similarity coefficient, APDdiff difference in APD%, K Cohen kappa coefficient, AUC area under the ROC curve, Acc accuracy
*Indicates publicly available AI model. N/R not explicitly reported in the paper
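For reference, the segmentation studies above report agreement as a Dice similarity coefficient, DSC = 2|A ∩ B| / (|A| + |B|) for a predicted and a reference binary mask. A minimal, illustrative sketch (the function name and toy masks are our own, not from any cited study):

```python
def dice_score(pred, truth):
    """Dice similarity coefficient for two binary masks given as
    flat 0/1 sequences: DSC = 2|A intersect B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))  # overlapping foreground pixels
    denom = sum(pred) + sum(truth)                     # total foreground in both masks
    return 1.0 if denom == 0 else 2.0 * inter / denom  # empty masks agree by convention

# Toy example: 4-pixel dense-tissue masks
print(dice_score([1, 1, 1, 0], [1, 1, 0, 0]))  # 2*2 / (3+2) = 0.8
```

A DSC of 98% (Saffari et al.) thus means near-perfect pixel-level overlap between AI-predicted and reference dense-tissue regions.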
Fig. 2 AI-based BI-RADS density classification. A A visual display of the range of BI-RADS density classifications for AI models trained with different architectures and training parameters for 50 patients in the testing set. The radiologist interpretation is displayed in the first row. The average breast density rating across all models and radiologist interpretations is displayed in the last row and was used to order the patients from least dense (left) to most dense (right). B The distribution of predicted breast density labels in the testing set differed for experiments with random class sampling (left) compared with equal class sampling (right) at each minibatch. ****P < .001. E. dense = extremely dense; H. dense = heterogeneously dense [30]. [Reprinted with permission from Elsevier (License Number: 5138920035119)]
Fig. 3 Example of AI-enabled density segmentation map from FFDM (estimated breast percent density, PD = 47%)
Representative studies in AI-enabled direct breast risk assessment from mammographic images
| Study | Image format | Time from exam to breast cancer diagnosis | # images (# women) | Vendors (# sites) | Model architecture | Model performance |
|---|---|---|---|---|---|---|
| Yala et al. [ | FFDM (processed) | 1–5 years | 295,002 images (91,520 women) | Hologic (3 sites) | ResNet-18* | AUC = 0.84, 1-year risk AUC = 0.76, 5-year risk |
| Dembrower et al. [ | FFDM (processed) | 3.6 ± 2.2 years | 150,502 images (1188 cases; 10,563 controls) | Hologic (N/R) | Inception-ResNet* | OR = 1.55 ORadj = 1.56 AUC = 0.65 |
| Arefan et al. [ | FFDM (processed) | 1–4 years | 452 images (113 cases; 113 controls) | Hologic (1 site) | GoogLeNet | AUC = 0.68, CC AUC = 0.60, MLO |
| Yala et al. [ | FFDM (processed) | 1–5 years | 88,994 images (1821 cases; 38,284 controls) | Hologic (1 site) | ResNet-18* | AUC = 0.68 for image only DL AUC = 0.70 for hybrid DL + risk factors |
| Ha et al. [ | FFDM (processed) | 2–5.3 years | N/R (210 cases; 527 controls) | GE (1 site) | CNN | OR = 4.42 Acc = 72% |
| Lotter et al. [ | FFDM (processed) DBT (MSP) | 1–2 years | N/R (> 1000 cases; 62,000 controls) | GE, Hologic (7 databases/sites) | RetinaNet* | AUC = 0.75–0.76 |
| Eriksson et al. [ | FFDM (processed) | 3 months–2 years | N/R (974 cases, 9376 controls) | GE, Philips, Sectra, Hologic, Siemens (4 sites) | CNN** | HR = 7.9 AUC = 0.73 |
| McKinney et al. [ | FFDM (processed) | 0 months–3.25 years | N/R (> 105,000 women) | Hologic, GE, Siemens (4 sites) | RetinaNet MobileNetV2 ResNet-v2-50 ResNet-v1-50 | AUC = 0.76–0.89 |
The table describes the development image dataset used in each study, including format of mammographic images, time window from mammographic exam to breast cancer diagnosis, sample size, and vendors, as well as model architecture and performance in breast cancer risk assessment
FFDM full-field digital mammography, CNN convolutional neural network, AUC area under the ROC curve, Acc accuracy, OR odds ratio, HR hazard ratio
*Indicates publicly available AI model. **Indicates commercial model. N/R not explicitly reported in the paper
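The risk-prediction studies above report discrimination as the area under the ROC curve, which equals the probability that a randomly chosen case receives a higher risk score than a randomly chosen control (the Mann-Whitney U formulation). A minimal, illustrative sketch (function name and toy scores are our own, not from any cited model):

```python
def auc(scores, labels):
    """AUC as the probability that a random positive (label 1) outscores
    a random negative (label 0); tied scores count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: two cases, two controls, one case scored below both controls
print(auc([0.9, 0.8, 0.3, 0.1], [1, 0, 0, 1]))  # 2 of 4 pairs correct -> 0.5
```

On this scale, 0.5 is chance-level discrimination; the AUC = 0.84 one-year risk of Yala et al. means 84% of case-control pairs are ranked correctly by the model's risk score.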
Fig. 4 Use of the four standard mammographic views in long-term risk assessment via artificial intelligence [46]. [Reprinted with permission from The American Association for the Advancement of Science (License Number: 5138920821187)]