Xiaolei Qu1,2, Yao Shi1, Yaxin Hou3, Jue Jiang4. 1. School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China. 2. Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China. 3. Department of Diagnostic Ultrasound, Beijing Tongren Hospital, Capital Medical University, Beijing, 100730, China. 4. Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, NY, 10065, USA.
Abstract
PURPOSE: Breast cancer is the most common cancer among women worldwide, and ultrasound is one of the most widely used imaging methods for breast tumors. Automatic breast ultrasound (BUS) image segmentation can measure tumor size objectively; however, various ultrasound artifacts hinder segmentation. We proposed an attention-supervised full-resolution residual network (ASFRRN) to segment tumors in BUS images.
METHODS: In the proposed method, Global Attention Upsample (GAU) and deep supervision were introduced into a full-resolution residual network (FRRN), where GAU learns to merge features from different levels with attention for deep supervision. Two datasets were employed for evaluation. One (Dataset A) consisted of 163 BUS images with tumors (53 malignant and 110 benign) from the UDIAT Centre Diagnostic, and the other (Dataset B) included 980 BUS images with tumors (595 malignant and 385 benign) from the Sun Yat-sen University Cancer Center. The tumors in both datasets were manually segmented by medical doctors. The Dice coefficient (Dice), Jaccard similarity coefficient (JSC), and F1 score were used as evaluation metrics.
RESULTS: On Dataset A, the proposed method achieved higher Dice (84.3 ± 10.0%), JSC (75.2 ± 10.7%), and F1 score (84.3 ± 10.0%) than the previous best method, FRRN. On Dataset B, it also achieved higher Dice (90.7 ± 13.0%), JSC (83.7 ± 14.8%), and F1 score (90.7 ± 13.0%) than the previous best methods, DeepLabv3 and the dual attention network (DANet). On Dataset A + B, it achieved higher Dice (90.5 ± 13.1%), JSC (83.3 ± 14.8%), and F1 score (90.5 ± 13.1%) than the previous best method, DeepLabv3. Additionally, ASFRRN has only 10.6 M parameters, fewer than DANet (71.4 M) and DeepLabv3 (41.3 M).
CONCLUSIONS: We proposed ASFRRN, which combines FRRN, an attention mechanism, and deep supervision to segment tumors in BUS images. It achieves high segmentation accuracy with a reduced number of parameters.
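The abstract states that GAU merges features from different levels with attention. Below is a minimal sketch of a GAU-style fusion module, assuming the commonly used formulation in which globally pooled high-level features gate the low-level (high-resolution) features via channel attention; the channel counts and exact layer arrangement are illustrative and are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionUpsample(nn.Module):
    """GAU-style fusion: high-level features provide a channel-attention
    vector that re-weights low-level features before the two streams are
    merged. Layer sizes are illustrative, not the paper's configuration."""
    def __init__(self, low_channels, high_channels, out_channels):
        super().__init__()
        # 3x3 conv refines the low-level (high-resolution) features
        self.low_conv = nn.Sequential(
            nn.Conv2d(low_channels, out_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_channels),
        )
        # 1x1 conv turns globally pooled high-level features into channel attention
        self.high_gate = nn.Sequential(
            nn.Conv2d(high_channels, out_channels, kernel_size=1, bias=False),
            nn.Sigmoid(),
        )
        # 1x1 conv projects high-level features for the additive merge
        self.high_proj = nn.Conv2d(high_channels, out_channels, kernel_size=1, bias=False)

    def forward(self, low, high):
        low = self.low_conv(low)
        # Global average pooling -> (N, C, 1, 1) channel-attention weights
        attn = self.high_gate(F.adaptive_avg_pool2d(high, 1))
        # Upsample high-level features to the low-level resolution and fuse
        high_up = F.interpolate(self.high_proj(high), size=low.shape[2:],
                                mode="bilinear", align_corners=False)
        return low * attn + high_up

# Example: fuse a 64-channel high-resolution stream with a 128-channel
# low-resolution stream (shapes are hypothetical).
gau = GlobalAttentionUpsample(low_channels=64, high_channels=128, out_channels=64)
low = torch.randn(1, 64, 56, 56)
high = torch.randn(1, 128, 28, 28)
print(gau(low, high).shape)  # torch.Size([1, 64, 56, 56])
```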
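The evaluation uses Dice, JSC, and F1 computed on binary tumor masks. A short reference implementation under that assumption is sketched below; note that for binary masks Dice and F1 are the same quantity, which is why those two columns are identical in the reported results.

```python
import numpy as np

def dice_jsc_f1(pred, gt, eps=1e-7):
    """Dice, Jaccard (JSC), and F1 for binary segmentation masks.
    For binary masks, Dice and F1 coincide."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jsc = tp / (tp + fp + fn + eps)
    f1 = 2 * tp / (2 * tp + fp + fn + eps)  # identical to Dice for binary masks
    return dice, jsc, f1

# Toy example with hypothetical 2x3 masks:
pred = np.array([[1, 1, 0], [0, 1, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_jsc_f1(pred, gt))  # Dice = F1 ~ 0.667, JSC = 0.5
```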