| Literature DB >> 35891030 |
Wei-Chung Shia, Fang-Rong Hsu, Seng-Tong Dai, Shih-Lin Guo, Dar-Ren Chen.
Abstract
In this study, an advanced semantic segmentation method and a deep convolutional neural network were applied to identify the Breast Imaging Reporting and Data System (BI-RADS) lexicon in breast ultrasound images, thereby facilitating image interpretation and diagnosis by providing radiologists with an objective second opinion. A total of 684 images (380 benign and 308 malignant tumours) from 343 patients (190 benign and 153 malignant breast tumour patients) were analysed. Six malignancy-related standardised BI-RADS features were selected after analysis. The DeepLab v3+ architecture and four decoder networks were used, and their semantic segmentation performance was evaluated and compared. DeepLab v3+ with the ResNet-50 decoder showed the best segmentation performance, with a mean accuracy of 44.04% and a mean intersection over union (IU) of 34.92%; the weighted IU was 84.36%. For diagnostic performance, the area under the curve was 83.32%. This study aimed to automate identification of the malignant BI-RADS lexicon on breast ultrasound images to facilitate diagnosis and improve its quality. The evaluation showed that DeepLab v3+ with the ResNet-50 decoder was suitable for this task, offering a better balance of performance and computational resource usage than a fully convolutional network and the other decoders.
Keywords: breast cancer; computer-aided diagnosis; deep convolutional neural network; semantic segmentation; ultrasonic imaging
Year: 2022 PMID: 35891030 PMCID: PMC9323504 DOI: 10.3390/s22145352
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Summary of related semantic segmentation studies with their modalities and results.
| References | Topic | Classes Identified | Dataset Size | Results |
|---|---|---|---|---|
| [ | Incorporating the Breast Imaging Reporting and Data System lexicon with a fully convolutional network for malignancy detection on breast ultrasound | Malignant BI-RADS lexicons (shadowing, taller-than-wide, angular margins, micro-lobulation, hypo-echogenicity and duct extension) | 378 (204 benign and 174 malignant images) | (In FCN-32s) |
| [ | Fuzzy semantic segmentation of breast ultrasound image with breast anatomy constraints | Fat layer, mammary layer, muscle layer and tumour region | 325 (Mixed from two heterogeneous datasets) | (In FCN with fuzzy layer and proposed CRFs) |
| [ | A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumours in ultrasound | Benign/Malignant Tumour | 3061 (Mixed from four heterogeneous datasets) | (In ResNet 18) |
| [ | Semantic segmentation with DenseNets for breast tumour detection | Tumour Region | 100 (From 78 patients) | Accuracy: 99.2 |
| [ | Dilated semantic segmentation for breast ultrasonic lesion detection using parallel feature fusion | Tumour Region | 780 * (Benign: 487, Malignant: 210, Normal: 133) | (In DenseNet-201) |
| [ | Automatic semantic segmentation of breast tumours in ultrasound images based on combining fuzzy logic and deep learning—a feasibility study | Tumour Region | 400 * (Benign: 200, Malignant: 200) | (In DeepLab V3+ with ResNet18) |
| [ | Segmentation and recognition of breast ultrasound images based on an expanded U-Net | Tumour Region | 192 (177 benign tumour images, 23 malignant tumour images) | Dice coefficient: 90.5 |
IU: intersection over union, BF score: boundary F1 score. * Same image data source.
Figure 1. Flowchart of the study.
Figure 2. Network architecture of DeepLab V3+. The different atrous rates of the convolutional layers in the atrous convolution enlarge the field of view of the model.
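To make the atrous mechanism in Figure 2 concrete, the following minimal PyTorch sketch shows one ASPP-style branch. The rates (1, 6, 12, 18) match the commonly cited DeepLab v3+ configuration, but the channel sizes, the `ASPPBranch` class, and the test input are illustrative assumptions, not the paper's exact settings.

```python
# Minimal sketch of atrous (dilated) convolution as used in ASPP-style
# blocks; channel sizes and the dummy input are illustrative only.
import torch
import torch.nn as nn

class ASPPBranch(nn.Module):
    """One parallel ASPP-style branch: a 3x3 convolution whose dilation
    (atrous rate) widens the receptive field without adding parameters
    or reducing spatial resolution."""
    def __init__(self, in_ch: int, out_ch: int, rate: int):
        super().__init__()
        # padding = rate keeps the spatial size unchanged for a 3x3 kernel
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=rate, dilation=rate, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

x = torch.randn(1, 256, 32, 32)      # dummy feature map
for rate in (1, 6, 12, 18):          # rates commonly used in DeepLab v3+
    y = ASPPBranch(256, 256, rate)(x)
    print(rate, y.shape)             # spatial size is preserved
```

Because padding equals the dilation rate, every branch produces a feature map of the same size, so the parallel outputs can be concatenated directly while each branch "sees" a different context width.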
Figure 3. Network architecture of ResNet-50.
Figure 4. Network architecture of Inception-ResNet-v2. Illustrations of the reduction block structure were omitted.
Figure 5. Network architecture of Xception.
Figure 6. Network architecture of MobileNet-V2.
Patient and image characteristics.
| Characteristics | Benign (n = 190) | Malignant (n = 153) |
|---|---|---|
| Age of patients (y) | 47.35 (45.21–49.49) | 53.51 (51.13–55.69) |
| Malignant tissues | | |
| DCIS | - | 34 (22.22%) |
| IDC | - | 119 (77.78%) |
| Benign tumours | | |
| LCIS | 7 (3.68%) | - |
| Fibroadenoma | 55 (28.95%) | - |
| Fibrocystic change | 48 (25.26%) | - |
| Adenosis | 5 (2.63%) | - |
| Fibroepithelial lesion | 49 (25.79%) | - |
| Other | 26 (13.69%) | - |
DCIS: ductal carcinoma in situ; LCIS: lobular carcinoma in situ; IDC: invasive ductal carcinoma.
Semantic segmentation performance and average run time results for DeepLab v3+ with ResNet-50/Inception-ResNet-v2/MobileNet-v2/Xception and FCN-32s.
| Network | Global Accuracy (%) | Mean Accuracy (%) | Mean IU (%) | Weighted IU (%) | Mean BF Score (%) | Average Run Time (mins) |
|---|---|---|---|---|---|---|
| DeepLab v3+ (ResNet-50) | 90.67 | 44.04 | 34.92 | 84.36 | 59.79 | 130.74 |
| DeepLab v3+ (Inception-ResNet-v2) | 89.96 | 34.12 | 28.56 | 83.01 | 58.94 | 183.55 |
| DeepLab v3+ (MobileNet-v2) | 89.13 | 25.59 | 22.19 | 80.88 | 57.47 | 96.15 |
| DeepLab v3+ (Xception) | 88.64 | 25.23 | 21.36 | 80.48 | 57.26 | 141.52 |
| FCN-32s | 89.95 | 30.69 | 26.67 | 82.55 | 60.48 | 163.13 |
IU: intersection over union, BF score: boundary F1 score.
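The pixel-level metrics in the table above can be computed from a confusion matrix. The NumPy sketch below uses the standard semantic segmentation definitions of global accuracy, mean accuracy, mean IU, and weighted IU; the mean BF score is omitted because it requires boundary maps rather than a confusion matrix, and the paper's exact implementation may differ.

```python
# Minimal sketch of the pixel-level metrics reported in the table,
# computed from a confusion matrix C where C[i, j] counts pixels of
# true class i predicted as class j. Standard definitions assumed.
import numpy as np

def segmentation_metrics(C: np.ndarray) -> dict:
    C = C.astype(float)
    tp = np.diag(C)               # correctly classified pixels per class
    gt = C.sum(axis=1)            # pixels per true class
    pred = C.sum(axis=0)          # pixels per predicted class
    union = gt + pred - tp
    iou = tp / np.maximum(union, 1)
    return {
        "global_accuracy": tp.sum() / C.sum(),             # all correct pixels
        "mean_accuracy": np.mean(tp / np.maximum(gt, 1)),  # mean per-class recall
        "mean_iou": iou.mean(),
        "weighted_iou": np.sum(gt / C.sum() * iou),        # IoU weighted by class frequency
    }

# Tiny two-class example
C = np.array([[50, 10],
              [5, 35]])
print(segmentation_metrics(C))
```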
Figure 7. Normalised confusion matrix of the classification performance of DeepLab v3+ with the three selected decoders and FCN-32s, based on the six selected BI-RADS lexicons. The rate of correct recognition for each lexicon is shown as a percentage. (a) Classification performance of DeepLab v3+ with ResNet-50; (b) classification performance of DeepLab v3+ with Inception-ResNet-v2; (c) classification performance of DeepLab v3+ with MobileNet-v2; (d) classification performance of FCN-32s.
Figure 8. The ROC curves and AUCs of the classification performance of DeepLab v3+ with the three selected decoders and FCN-32s, based on the six selected BI-RADS features. BI-RADS: Breast Imaging Reporting and Data System; ROC: receiver operating characteristic; AUC: area under the curve. (a) Classification performance of DeepLab v3+ with ResNet-50; (b) classification performance of DeepLab v3+ with Inception-ResNet-v2; (c) classification performance of DeepLab v3+ with MobileNet-v2; (d) classification performance of FCN-32s.
Figure 9. Sample semantic segmentation visualisations for malignant tumour ultrasound images produced by DeepLab v3+, compared with FCN-32s. The sample US image and the corresponding ground truth are shown in the two rightmost columns. The visualisation results, from left to right, are: DeepLab v3+ with ResNet-50, DeepLab v3+ with Inception-ResNet-v2, and FCN-32s. The semantically segmented regions corresponding to BI-RADS lexicons are filled in with different colours. Red: angular margins; green: hypoechogenicity; yellow: taller-than-wide; blue: duct extension; navy blue: shadowing; purple: micro-lobulation.
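Overlays like those in Figure 9 can be produced by blending a class colour into the greyscale frame wherever the predicted label map takes that class. In the sketch below the colours follow the Figure 9 legend, while the class indices, the `overlay` helper, and the blending weight are illustrative assumptions.

```python
# Hedged sketch of the Figure 9 style overlay: paint each BI-RADS
# lexicon class in the label map with its own colour on top of the
# greyscale ultrasound frame. Class indices and alpha are assumptions.
import numpy as np

# class index -> RGB, per the Figure 9 legend (0 = background, unpainted)
PALETTE = {
    1: (255, 0, 0),      # angular margins (red)
    2: (0, 255, 0),      # hypoechogenicity (green)
    3: (255, 255, 0),    # taller-than-wide (yellow)
    4: (0, 0, 255),      # duct extension (blue)
    5: (0, 0, 128),      # shadowing (navy blue)
    6: (128, 0, 128),    # micro-lobulation (purple)
}

def overlay(gray: np.ndarray, labels: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend class colours into a greyscale image (HxW uint8)."""
    rgb = np.stack([gray] * 3, axis=-1).astype(float)
    for cls, colour in PALETTE.items():
        mask = labels == cls
        rgb[mask] = (1 - alpha) * rgb[mask] + alpha * np.array(colour)
    return rgb.astype(np.uint8)

# toy example: 4x4 frame with a 2x2 "shadowing" region
img = np.full((4, 4), 120, dtype=np.uint8)
lab = np.zeros((4, 4), dtype=int)
lab[1:3, 1:3] = 5
print(overlay(img, lab)[1, 1])   # blended navy-blue pixel
```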