| Literature DB >> 33875840 |
Xuejun Qian, Jing Pei, Hui Zheng, Xinxin Xie, Lin Yan, Hao Zhang, Chunguang Han, Xiang Gao, Hanqi Zhang, Weiwei Zheng, Qiang Sun, Lu Lu, K. Kirk Shung.
Abstract
The clinical application of breast ultrasound for the assessment of cancer risk, and of deep learning for the classification of breast-ultrasound images, has been hindered by inter-grader variability and high false-positive rates, and by deep-learning models that do not follow Breast Imaging Reporting and Data System (BI-RADS) standards, lack explainability features and have not been tested prospectively. Here, we show that an explainable deep-learning system, trained on 10,815 multimodal breast-ultrasound images of 721 biopsy-confirmed lesions from 634 patients across two hospitals and prospectively tested on 912 additional images of 152 lesions from 141 patients, predicts BI-RADS scores for breast cancer as accurately as experienced radiologists, with areas under the receiver operating characteristic curve of 0.922 (95% confidence interval (CI) = 0.868-0.959) for bimodal images and 0.955 (95% CI = 0.909-0.982) for multimodal images. Multimodal multiview breast-ultrasound images augmented with heatmaps for malignancy risk predicted via deep learning may facilitate the adoption of ultrasound imaging in screening mammography workflows.
Entities:
Year: 2021 PMID: 33875840 DOI: 10.1038/s41551-021-00711-2
Source DB: PubMed Journal: Nat Biomed Eng ISSN: 2157-846X Impact factor: 25.671