Cuixia Liang1,2, Mingqiang Li1,2, Zhaoying Bian1,2, Wenbing Lv1,2, Dong Zeng1,2, Jianhua Ma1,2. 1. Department of Biomedical Engineering, Southern Medical University, Guangzhou 510515, China. 2. Guangzhou Key Laboratory of Medical Radiation Imaging and Detection Technology, Southern Medical University, Guangzhou 510515, China.
Abstract
OBJECTIVE: To develop a deep features-based model to classify benign and malignant breast lesions on full-field digital mammography. METHODS: Full-field digital mammography data in both the craniocaudal view and the mediolateral oblique view from 106 patients with breast neoplasms were analyzed. Twenty-three handcrafted features (HCF) were extracted from the images of the breast tumors, and a suitable HCF subset was selected using a t-test. Deep features (DF) were extracted with three pre-trained deep learning models, namely AlexNet, VGG16, and GoogLeNet. To exploit the abundant breast tumor information in the craniocaudal and mediolateral oblique views, we combined the extracted HCF and DF as two-view features. A multi-classifier model was then constructed from the combined HCF and DF sets, and the classification ability of the different deep learning networks was evaluated. RESULTS: Quantitative evaluation showed that the proposed HCF+DF model outperformed the HCF-only model, and AlexNet produced the best performance among the three deep learning models. CONCLUSIONS: The proposed model combining DF and HCF sets of breast tumors can effectively distinguish benign from malignant breast lesions on full-field digital mammography.
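The pipeline described in METHODS — handcrafted features filtered by a t-test, deep features from the two mammographic views, and a classifier trained on the fused feature set — can be sketched roughly as follows. This is an illustrative assumption, not the paper's implementation: the data are synthetic, the deep features are random stand-ins for CNN activations (in practice they would come from a pre-trained AlexNet, VGG16, or GoogLeNet), and a single logistic-regression classifier stands in for the paper's multi-classifier model.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the paper's data: 106 lesions, 23 handcrafted
# features (HCF), and 64-dimensional deep features (DF) per view. A small
# class-dependent shift is injected so the features carry a usable signal.
n = 106
y = rng.integers(0, 2, n)                                   # 0 = benign, 1 = malignant
hcf = rng.normal(0, 1, (n, 23)) + 0.8 * y[:, None] * (np.arange(23) < 8)
df_cc = rng.normal(0, 1, (n, 64)) + 0.5 * y[:, None]        # craniocaudal view
df_mlo = rng.normal(0, 1, (n, 64)) + 0.5 * y[:, None]       # mediolateral oblique view

def select_by_ttest(X, y, alpha=0.05):
    """Keep features whose class means differ significantly (two-sample t-test)."""
    _, p = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return X[:, p < alpha], p < alpha

hcf_sel, mask = select_by_ttest(hcf, y)

# Two-view fusion: concatenate the selected HCF with DF from both views.
features = np.hstack([hcf_sel, df_cc, df_mlo])

Xtr, Xte, ytr, yte = train_test_split(
    features, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
print(f"selected HCF: {int(mask.sum())}/23, test accuracy: {clf.score(Xte, yte):.2f}")
```

With real data, `hcf` would hold the 23 handcrafted descriptors per lesion and `df_cc`/`df_mlo` would be activations taken from a pre-trained network's penultimate layer for each view's lesion crop.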
Keywords:
breast tumors; computer-aided diagnosis; deep learning; full-field digital mammography; radiomics