Lukas Hirsch, Yu Huang, Shaojun Luo, Carolina Rossi Saccarelli, Roberto Lo Gullo, Isaac Daimiel Naranjo, Almir G V Bitencourt, Natsuko Onishi, Eun Sook Ko, Doris Leithner, Daly Avendano, Sarah Eskreis-Winkler, Mary Hughes, Danny F Martinez, Katja Pinker, Krishna Juluru, Amin E El-Rowmeim, Pierre Elnajjar, Elizabeth A Morris, Hernan A Makse, Lucas C Parra, Elizabeth J Sutton.
Abstract
PURPOSE: To develop a deep network architecture that would achieve fully automated radiologist-level segmentation of cancers at breast MRI.
Keywords: Breast; Convolutional Neural Network (CNN); Deep Learning Algorithms; MRI; Machine Learning Algorithms; Segmentation; Supervised Learning
Year: 2021 PMID: 35146431 PMCID: PMC8823456 DOI: 10.1148/ryai.200231
Source DB: PubMed Journal: Radiol Artif Intell ISSN: 2638-6100
Figure 1: Number of examinations (exams) and breasts used in training and testing. See also Table E1 (supplement). Post OP = postoperative procedure.
Figure 2: (A) Example of precontrast and first postcontrast fat-saturated images (T1 and T1c, respectively). Initial dynamic contrast enhancement (DCE) in this breast with a malignant tumor is evident after subtracting the precontrast image from the first postcontrast T1-weighted image (DCE-in). Subsequent washout (DCE-out) is evident in the subsequent drop in intensity, measured as the slope over time. (B) Graph shows the range of in-plane resolutions of T1-weighted contrast-enhanced scans acquired between 2002 and 2014.
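For intuition, the two derived inputs described in Figure 2A reduce to simple voxelwise arithmetic: initial enhancement is the difference between the first postcontrast and precontrast images, and washout is the slope of intensity over the later postcontrast time points. A minimal sketch with hypothetical intensity values and time points (not taken from the study data):

```python
# Sketch of the derived DCE inputs from Figure 2A (hypothetical values).
# DCE-in: first postcontrast minus precontrast (initial enhancement).
# DCE-out: slope of intensity over time (negative slope = washout).

def dce_in(t1_pre, t1c_first):
    """Voxelwise initial enhancement: first postcontrast minus precontrast."""
    return [c - p for c, p in zip(t1c_first, t1_pre)]

def dce_out(intensities, times):
    """Least-squares slope of intensity over time for one voxel."""
    n = len(times)
    mt = sum(times) / n
    mi = sum(intensities) / n
    num = sum((t - mt) * (i - mi) for t, i in zip(times, intensities))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# Example: a single voxel that enhances strongly, then washes out
pre = [100.0]
post1 = [180.0]
print(dce_in(pre, post1))  # → [80.0]
print(dce_out([180.0, 160.0, 140.0], [60.0, 120.0, 180.0]))  # negative slope = washout
```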
Figure 3: Deep convolutional neural network used for segmentation. The network is a three-dimensional (3D) U-Net with a total of 16 convolutional layers (red arrows) producing 3D feature maps (blue blocks). The input MRI includes several modalities (Fig 2A). The network output is a prediction for a two-dimensional sagittal section, with a cancer probability for each voxel (green and red map). The full volume is processed in nonoverlapping image patches (green square on input MRI). A breast mask provides a spatial prior as an additional input to the U-Net.
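The caption notes that the full volume is processed in nonoverlapping image patches. A minimal sketch of that tiling step for a single 2D section, in pure Python with a hypothetical patch size (the actual patch size used by the network is not given here):

```python
def nonoverlapping_patches(image, patch):
    """Split a 2D array (list of rows) into nonoverlapping square patches,
    scanning row-major; edge regions smaller than `patch` are skipped."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            patches.append([row[c:c + patch] for row in image[r:r + patch]])
    return patches

# A toy 4x4 "section" split into four 2x2 patches
section = [[1, 2, 3, 4],
           [5, 6, 7, 8],
           [9, 10, 11, 12],
           [13, 14, 15, 16]]
print(len(nonoverlapping_patches(section, 2)))  # → 4
```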
Figure 4: Manual and automated segmentations of breast cancer. (A) Inputs to the model consist of the first postcontrast image (T1c), the postcontrast minus precontrast image (DCE-in), and the washout (DCE-out), shown with an independent reference for radiologist 4 (R4) formed from the intersection of radiologists 1–3 (R1–R3; Ref4) and with the network output (M Probs), which indicates the probability that a voxel is cancer (green = low; red = high). (B) Example segmentation from all four radiologists (R1–R4) for a given section, and the model segmentation (M) created by thresholding the probabilities. Dice scores for R4 and M were computed using Ref4 as the target. (C) Zooming in on the areas outlined in yellow in B shows the boundaries of the machine segmentation as well as the human-generated segmentations as drawn on the screen by R1–R4.
Figure 5: Network (net) and radiologist (rad) performance on the test set of 250 malignant cases. (A) Distribution of Dice scores in the 250 test cases, averaged across four reference segmentations. (B) Difference in Dice score between the network and each radiologist (Δ Dice) for each of the four reference (ref) segmentations (ref1, ref2, ref3, and ref4). The median Dice value was higher for the network for ref1 and ref3 (red median Δ Dice) and higher for the radiologist for ref2 and ref4 (blue median Δ Dice). Box plots show the median (orange, red, or blue lines), quartiles (box), and 1.5× the interquartile range (whiskers). *P < .001 (Wilcoxon signed rank test).
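The Dice score used throughout Figure 5 measures overlap between two binary masks: twice the size of their intersection divided by the sum of their sizes. A minimal sketch with hypothetical voxel sets (not taken from the study):

```python
def dice(a, b):
    """Dice coefficient between two binary masks given as sets of voxel indices.
    Two empty masks are treated as perfect agreement."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

# Hypothetical segmentations as sets of (row, col) voxel coordinates
radiologist = {(0, 0), (0, 1), (1, 0), (1, 1)}
network = {(0, 1), (1, 0), (1, 1), (2, 1)}
print(dice(radiologist, network))  # 2*3/(4+4) → 0.75
```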
Figure 6: Examples of cases in which the network deviated from the reference radiologist's segmentation (ref). (A) The network captured additional areas not selected by radiologist 4 (R4). Dice score is shown for Ref4 (intersection of R1–R3). (B) The network output (M Probs) captured the correct area, but low probability values yielded a smaller region than the consensus segmentation (Ref2) after thresholding.