Literature DB >> 33059238

A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound.

Wilfrido Gómez-Flores1, Wagner Coelho de Albuquerque Pereira2.   

Abstract

The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNN). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture from CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) Fully Convolutional Network (FCN) with an AlexNet backbone, (2) U-Net, (3) SegNet with VGG16 and VGG19 backbones, and (4) DeepLabV3+ with ResNet18, ResNet50, MobileNet-V2, and Xception backbones. Through transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumor pixels. The goal is to select a potential CNN-based segmentation model to be further used in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures using a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired from seven US machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify the segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s>0.90 and IoU>0.81. For U-Net, the segmentation performance is F1s=0.89 and IoU=0.80, whereas FCN-AlexNet attains the lowest results with F1s=0.84 and IoU=0.73. In particular, ResNet18 obtains F1s=0.905 and IoU=0.827 and requires the least training time among the SegNet and DeepLabV3+ variants. Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, which enables fair comparison with other CNN-based segmentation approaches for BUS images.
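The two metrics reported in the abstract, F1-score (equivalent to the Dice coefficient for binary masks) and Intersection over Union, can be computed directly from a predicted mask and a ground-truth mask. A minimal NumPy sketch with hypothetical toy masks (the masks and function name are illustrative, not from the paper's code):

```python
import numpy as np

def f1_and_iou(pred, gt):
    """Compute F1-score (Dice) and IoU for two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # pixels predicted tumor and truly tumor
    fp = np.logical_and(pred, ~gt).sum()   # predicted tumor but actually normal
    fn = np.logical_and(~pred, gt).sum()   # missed tumor pixels
    denom_f1 = 2 * tp + fp + fn
    denom_iou = tp + fp + fn
    f1 = 2 * tp / denom_f1 if denom_f1 else 1.0
    iou = tp / denom_iou if denom_iou else 1.0
    return f1, iou

# Toy 4x4 example: prediction covers the true tumor plus two extra pixels.
gt = np.array([[0, 0, 0, 0],
               [0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0]])
pred = np.array([[0, 0, 0, 0],
                 [0, 1, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 0, 0]])
f1, iou = f1_and_iou(pred, gt)
# tp=4, fp=2, fn=0 -> F1 = 8/10 = 0.8, IoU = 4/6 ~= 0.667
```

Note the fixed relation F1 = 2·IoU/(1 + IoU) for binary masks, which is why the paper's F1s and IoU values rank the architectures identically.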
Copyright © 2020 Elsevier Ltd. All rights reserved.

Keywords:  Breast tumors; Breast ultrasound; Convolutional neural networks; Semantic segmentation; Transfer learning

Year:  2020        PMID: 33059238     DOI: 10.1016/j.compbiomed.2020.104036

Source DB:  PubMed          Journal:  Comput Biol Med        ISSN: 0010-4825            Impact factor:   4.589


Related articles:  4 in total

1.  BUSnet: A Deep Learning Model of Breast Tumor Lesion Detection for Ultrasound Images.

Authors:  Yujie Li; Hong Gu; Hongyu Wang; Pan Qin; Jia Wang
Journal:  Front Oncol       Date:  2022-03-25       Impact factor: 6.244

2.  Comparing deep learning-based automatic segmentation of breast masses to expert interobserver variability in ultrasound imaging.

Authors:  Jeremy M Webb; Shaheeda A Adusei; Yinong Wang; Naziya Samreen; Kalie Adler; Duane D Meixner; Robert T Fazzio; Mostafa Fatemi; Azra Alizad
Journal:  Comput Biol Med       Date:  2021-10-21       Impact factor: 4.589

3.  Automatic Detection of Liver Cancer Using Hybrid Pre-Trained Models.

Authors:  Esam Othman; Muhammad Mahmoud; Habib Dhahri; Hatem Abdulkader; Awais Mahmood; Mina Ibrahim
Journal:  Sensors (Basel)       Date:  2022-07-20       Impact factor: 3.847

4.  Semantic Segmentation of the Malignant Breast Imaging Reporting and Data System Lexicon on Breast Ultrasound Images by Using DeepLab v3.

Authors:  Wei-Chung Shia; Fang-Rong Hsu; Seng-Tong Dai; Shih-Lin Guo; Dar-Ren Chen
Journal:  Sensors (Basel)       Date:  2022-07-18       Impact factor: 3.847
