| Literature DB >> 34337066 |
Epimack Michael¹, He Ma¹, Hong Li¹, Frank Kulwa¹, Jing Li².
Abstract
Early breast cancer detection is one of the most important issues to address worldwide, as it can increase patient survival rates. Mammograms are used to detect breast cancer in its early stages; early detection can also drastically reduce treatment costs. Detecting tumours in the breast depends on segmentation techniques. Segmentation plays a significant role in image analysis, underpinning detection, feature extraction, classification, and treatment, and it helps physicians quantify the volume of breast tissue for treatment planning. In this work, we group segmentation methods into three categories: classical segmentation, which includes region-, threshold-, and edge-based segmentation; machine learning segmentation, both supervised and unsupervised; and deep learning segmentation. Our findings reveal that region-based segmentation is the most common classical approach, with region growing the most frequently used technique, and that the median filter is a robust tool for removing noise. The MIAS database is the one most frequently used with classical segmentation methods. Among machine learning methods, unsupervised approaches are used more often. For deep learning, U-Net is the model most frequently used for mammogram image segmentation, both because it does not require many annotated images compared with other deep learning models and because high-performance GPU computing makes it easy to train networks with more layers. The reviewed papers also show that a deep learning model can be trained without any preprocessing or postprocessing.
Additionally, we identified the widely used mammogram databases: 3 public and 28 private.
Year: 2021 PMID: 34337066 PMCID: PMC8321730 DOI: 10.1155/2021/9962109
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.411
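The abstract singles out the median filter as a robust noise-removal tool in classical mammogram pipelines. As a minimal illustrative sketch (pure Python on a list-of-lists grayscale image, not code from any reviewed paper), a 3×3 median filter looks like this:

```python
def median_filter_3x3(img):
    """Apply a 3x3 median filter to a 2-D grayscale image (list of lists).

    Border pixels are left unchanged; each interior pixel is replaced by
    the median of its 3x3 neighbourhood, which suppresses salt-and-pepper
    noise while preserving edges better than mean filtering.
    """
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]  # copy so border pixels keep their values
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            window.sort()
            out[y][x] = window[4]  # median of the 9 neighbourhood values
    return out
```

A single bright noise pixel in a flat region is removed because eight of its nine neighbourhood values share the background intensity.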
Figure 1. Mammogram image segmentation pipeline.
Summary of merits and demerits of mammogram segmentation methods.
| Category | Merits | Demerits |
|---|---|---|
| Edge-based segmentation methods | Works well when an edge is prominent | Sensitive to noise |
| | Easy to find local edge orientation | Reduces overall contrast in mammograms |
| | | Produces unsatisfactory results when it detects fake and weak edges in mammograms |
| | | Not suitable for mammogram images with smooth edges |
| Threshold-based segmentation methods | Simple and easy to implement | Not applicable if the tumour area ratio is unknown |
| | Fast | Sensitive to noise in mammograms |
| | Inexpensive | Gives poor results when mammograms have low contrast |
| | | Difficult to fix the threshold value if the number of regions increases |
| | | Not easy to process mammograms whose histograms are nearly unimodal |
| Region-based segmentation methods | Connected regions are guaranteed | Causes over-segmentation if mammograms are noisy |
| | Supports multiple criteria and gives good results with less noise | Cannot distinguish the shading of real mammograms |
| | | Time consuming due to the high resolution of mammograms |
| | | Not suitable for noisy mammograms |
| | | A seed point must be selected |
| Unsupervised machine learning methods | Few data are required | Number of clusters must be defined |
| | Easy to implement | Prior information is required |
| | Automatically segments masses | |
| Supervised machine learning methods | Easy to detect errors | Knowledge about the mammogram to be segmented is required |
| | | Requires labelled data |
| Deep learning methods | Solves complex tasks | Limited annotated data |
| | Can work with unlabelled data | Time consuming during training |
| | Produces accurate results | Expensive because it requires high-performance computational machines |
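The threshold-based methods summarised above hinge on choosing a good global threshold, and Otsu's method recurs throughout the survey tables: it exhaustively tests every gray level and keeps the one maximising the between-class variance of the background/foreground split. A minimal pure-Python sketch (illustrative only, not the reviewed authors' code):

```python
def otsu_threshold(pixels, levels=256):
    """Return the Otsu threshold for a flat list of integer pixel values.

    For each candidate threshold t (background = pixels <= t), compute the
    between-class variance w_bg * w_fg * (mu_bg - mu_fg)^2 and keep the t
    that maximises it.
    """
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_bg, weight_bg = 0.0, 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        weight_bg += hist[t]
        if weight_bg == 0:
            continue
        weight_fg = total - weight_bg
        if weight_fg == 0:
            break  # all pixels are background; no split remains
        sum_bg += t * hist[t]
        mean_bg = sum_bg / weight_bg
        mean_fg = (sum_all - sum_bg) / weight_fg
        var_between = weight_bg * weight_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram (e.g. dark tissue vs. a bright mass) the returned threshold falls between the two modes, which is why the method degrades on the low-contrast, nearly unimodal mammograms listed as a demerit above.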
Summary of reviewed works related to classical segmentation of mammogram images.
| Subcategory | Related works | Year | Technique | Filter | Database | Evaluation metric |
|---|---|---|---|---|---|---|
| RM | [ | 1999 | Adaptive and region growing | Gaussian | UMH | 98.0% accuracy |
| RM | [ | 2001 | Region growing | Kalman | DDSM | 93.0% ROC with adaptive module and 86.0% ROC without the adaptive module |
| RM | [ | 2001 | Partial loss of region | Sobel | Japanese | 97.0% true positive |
| RM | [ | 2004 | Region growing | | MIAS | 90.0% TPR and 1.3 FTR per image |
| RM | [ | 2004 | Contour searching | | MAGIC-5 | 85.6 ± 0.8% ROC |
| RM | [ | 2005 | Region growing | ANN | MIAS | 92.5% accuracy |
| RM | [ | 2006 | Morphological algorithm | Median | MIAS | 95.0% detection rate |
| RM | [ | 2010 | Harris corner | Median | MIAS | 93.0% segmentation accuracy |
| RM | [ | 2010 | Region growing | | DDSM | 78.0% sensitivity and 4.0% false positive |
| RM | [ | 2010 | Watershed | Morphological | DDSM | Mean standard 0.93 ± 0.03 |
| RM | [ | 2011 | Thresholding | Median | MIAS | 99.0% segmentation accuracy |
| RM | [ | 2012 | Region growing | Contrast | MIAS | 94.59% sensitivity and 3.90 false positive |
| RM | [ | 2012 | Morphological | Median | MIAS | 95.0% detection rate |
| RM | [ | 2012 | Region growing | Adaptive | DDSM | 97.2% sensitivity and 1.83% false positive |
| RM | [ | 2012 | Seed point selection | Mathematical morphology | NCSM | 98.0% accuracy |
| RM | [ | 2013 | Morphological gradient watershed | Adaptive median | MIAS and NMR | 95.3% positive for MIAS and 94.0% for NMR |
| RM | [ | 2013 | Improved watershed | Median | MIAS | 92.0% accuracy |
| RM | [ | 2013 | Otsu | Morphological | DEMS | 95.06% accuracy |
| RM | [ | 2014 | Marker-controlled watershed | Sobel | MIAS | 90.83% detection rate and 91.3% ROC |
| RM | [ | 2014 | Wavelet and genetic algorithm | Wiener | MIAS and DDSM | 79.2 ± 8% mean and standard deviation |
| RM | [ | 2014 | Watershed transformation | | MSKE | 90.47% sensitivity, 75.0% specificity, and 84.848% accuracy |
| RM | [ | 2015 | Morphological operators | Alternating sequential filter | MIAS | 99.2% sensitivity and 99.0% accuracy |
| RM | [ | 2017 | Region growing | Sliding window | MIAS | 91.3% accuracy |
| RM | [ | 2017 | Region growing | Median | MIAS | 94.0% accuracy |
| RM | [ | 2017 | Watershed | Morphological | DDSM | 80.5% similarity index, 75.7% overlap value |
| RM | [ | 2017 | Bimodal-level set formulation | | MIAS | 96.72% precision and 97.22% recall |
| RM | [ | 2018 | Hidden Markov and region growing | | MIAS | 91.92% accuracy and 8.07% error |
| RM | [ | 2018 | Watershed combined with | Sobel | MIAS | 83.33% accuracy |
| RM | [ | 2018 | Region growing | Gaussian | DDSM | 98.1% sensitivity, 97.8% specificity, and 90.0% accuracy |
| RM | [ | 2019 | Watershed | | MIAS | 94.0% false detection and 18.0% positive detection |
| TM | [ | 2001 | Otsu thresholding | Morphological | MIAS | 1.7188 ME1, 0.0083 ME2, and 0.8702 MHD |
| TM | [ | 2001 | Otsu | Median | MIAS | 96.55% accuracy, 96.97% sensitivity, and 96.29% specificity |
| TM | [ | 2011 | | | MIAS | 97.0% accuracy, 97.03% specificity, and 97.0% sensitivity |
| TM | [ | 2012 | Histogram thresholding | Morphological | DDSM | 96.0% detection rate and 90.0% accuracy |
| TM | [ | 2012 | Kittler's optimal thresholding | | BCCCF | 92.0% to 95.0% Spearman and 6.9% average density |
| TM | [ | 2013 | Otsu | Median | ||
| TM | [ | 2014 | Rough set theory | Median | MIAS | |
| TM | [ | 2014 | Otsu thresholding | Morphological and median | DDSM | |
| TM | [ | 2014 | Threshold and evolutionary | Average | DDSM | 95.2% accuracy |
| TM | [ | 2014 | Otsu | Median | MIAS | |
| TM | [ | 2015 | Global threshold | Median | MIAS | 92.86% accuracy and acceptable level of 4.97% |
| TM | [ | 2015 | Global thresholding and merging | Wiener | | 82.0% accuracy and 18.0% error detection |
| TM | [ | 2016 | Morphological threshold | Median | MIAS | 94.54% accuracy and 5.45% false identification |
| TM | [ | 2016 | Adaptive threshold | | | 91.5% accuracy for SVM and 70.0% accuracy for |
| TM | [ | 2016 | Otsu | Morphological | WHC and DDSM | 100.0% accuracy for WHC and 91.30% for DDSM |
| TM | [ | 2017 | Otsu | Clahe | MIAS | 96.0% accuracy |
| TM | [ | 2017 | Histogram and edge detection | Gaussian | MIAS and EPIC | 98.8% accuracy (MIAS) and 91.5% (EPIC) |
| TM | [ | 2018 | Adaptive global and local threshold | Morphological | MIAS | 91.3% sensitivity and 0.71% false positive |
| EM | [ | 2004 | Edge | 2-D | MIAS | 92.5% accuracy, 93.0% sensitivity, and 85.0% specificity |
| EM | [ | 2006 | Edge | | MAGIC-5 collaboration | 86.20% ROC and 82.0% sensitivity |
| EM | [ | 2009 | Histogram | Morphological | MIAS | 97.0% accuracy |
| EM | [ | 2011 | Active contour | Binary homogeneity | MIAS | 99.6% CM, 98.7% CR, and 98.3% quality |
| EM | [ | 2011 | Energy minimisation and contour | | MIAS | 90.0% accuracy and 92.27% precision |
| EM | [ | 2011 | Edge | Median | KHCCJH | 94.1% accuracy (CC), 81.4% MLO, and 90.0% accuracy |
| EM | [ | 2011 | Sobel, Prewitt, Laplacian | Adobe Photoshop | NCSM | 79.0% AUC for Sobel, 72.0% Prewitt, and 71.0% Laplacian |
| EM | [ | 2012 | Edge | Median | MIAS | 83.9% accuracy |
| EM | [ | 2014 | Active contour | | | 88.0% sensitivity |
| EM | [ | 2015 | Dynamic graph cut | | MIAS and DDSM | 98.88% sensitivity, 98.89% specificity, and 93.0% for negative values |
| EM | [ | 2015 | Canny edge detection | Median | MIAS, INbreast, and BCDR | 98.8% Dice boundary of 97.8% MIAS, 98.9% for boundary 89.6% INbreast, and 99.2% for boundary, and 91.9% BCDR |
| EM | [ | 2017 | Cascade | Gabor | UHGL | 100.0% sensitivity and 3.4% false positives |
| EM | [ | 2017 | Edge | | NCSM | 84.0% AUC |
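Region growing, the most frequently used classical technique in the table above, starts from a seed pixel and absorbs 4-connected neighbours whose intensity stays within a tolerance of the running region mean; the "seed point must be selected" demerit noted earlier shows up here as an explicit argument. A minimal pure-Python sketch (illustrative, with a hypothetical tolerance criterion rather than any specific reviewed paper's rule):

```python
from collections import deque

def region_grow(img, seed, tol=10):
    """Grow a region from `seed` (y, x); return the set of (y, x) pixels.

    A 4-connected neighbour joins the region when its intensity differs
    from the current region mean by at most `tol`.
    """
    h, w = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]  # running intensity sum for the mean
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        mean = total / len(region)
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(img[ny][nx] - mean) <= tol):
                region.add((ny, nx))
                total += img[ny][nx]
                queue.append((ny, nx))
    return region
```

Seeded inside a bright mass on a dark background, the region stops exactly at the mass boundary; on noisy images the same rule leaks or fragments, which is the over-segmentation demerit listed for region-based methods.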
Figure 2. Result of segmented masses. Row (1) shows original images; row (2) shows images after median filtering, cropping, and border removal; row (3) shows the results of the Otsu method; row (4) shows the result of the Otsu method with image smoothing; row (5) shows the result of the Otsu method with Laplacian edge information; and row (6) shows the mass extraction from the original image [107].
Figure 3. (a) Original image, (b) histogram of the original image, (c) processed image, and (d) histogram of the processed image [109].
Figure 4. Segmentation and detection result on mammogram image by proposed method: (a) original image, (b) smoothed image, (c) patch image after thresholding, (d) cancer region found in input image in window, (e) region patch found after morphological closing, (f) region boundary using gradient, (g) cancer area detected, (h) cancer area with region segmentation, and (i) proposed segmentation result of cancer in input mammogram image [109].
Figure 5. Preprocessing and segmentation results of the proposed method [110].
Summary of reviewed works on supervised and unsupervised machine learning.
| Subcategory | Related works | Year | Technique | Filter | Database | Evaluation metric |
|---|---|---|---|---|---|---|
| USML | [ | 2012 | Clustering | 2-D median | MIAS | 90.0% sensitivity and 78.0% specificity |
| USML | [ | 2012 | Microcalcification clusters | | BNHMJ | 91.4% segmentation accuracy, false positive 96.5% |
| USML | [ | 2013 | FCM clustering | Morphological | MIAS | |
| USML | [ | 2013 | Microcalcification clusters | | DDSM | 93.2% positive rate and 0.73 false positive |
| USML | [ | 2014 | | 5 × 5 median | MIAS | 94.4% sensitivity |
| USML | [ | 2015 | Fuzzy | | MIAS | 83.3% accuracy for class 1, 75.0% for class 2, and 80.0% for class 3 |
| USML | [ | 2017 | FCM clustering | | MIAS | 86.2% sensitivity, 96.4% specificity, and 94.6% accuracy |
| USML | [ | 2018 | MC clusters | Morphological | DDSM and MIAS | 94.48% classification accuracy for DDSM and 100.0% for MIAS |
| USML | [ | 2018 | Fuzzy | | MIAS | 98.82% detection |
| USML | [ | 2018 | | | MIAS | 98.1% accuracy |
| USML | [ | 2018 | Classic and fuzzy morphology | Gaussian | MIAS | 0.86 Dice, 66.0% recall, and 20.0% precision |
| USML | [ | 2018 | | LoG | MIAS and PHP | 95.0% accuracy for PHP and 94.0% for MIAS |
| USML | [ | 2018 | | Morphological | DDSM and MIAS | 98.0% accuracy for MIAS and 97.0% for DDSM |
| USML | [ | 2018 | Hierarchical | | DDSM | 38.8% accuracy and 61.1% testing error |
| USML | [ | 2018 | MC clusters | Morphological | DDSM | 96.57% sensitivity and 94.25% accuracy |
| SML | [ | 2011 | MLP | | DDSM | 68.2% sensitivity and 8.7% false positive per image |
| SML | [ | 2012 | ELM | | MIAS | 81.10% accuracy |
| SML | [ | 2015 | Structured SVM | | DDSM and INbreast | 87.0% Dice |
| SML | [ | 2015 | SSVM and CRF | | DDSM and INbreast | 93.0% accuracy using CRF and 95.0% accuracy using SSVM |
| SML | [ | 2015 | SVM | Median | SSPS | 96.0% correlation |
| SML | [ | 2016 | GGD and Bayesian back propagation | | MIAS | 97.08% detection for GGD and 97.0% for Bayesian |
| SML | [ | 2017 | CRF and SSVM | | DDSM and INbreast | 10.0% loss |
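Fuzzy c-means (FCM) clustering appears repeatedly in the unsupervised rows above: unlike hard k-means, every pixel receives a graded membership in each cluster, and the "number of clusters must be defined" demerit is the parameter `c` below. An illustrative 1-D sketch on raw intensities, pure Python and not taken from any reviewed paper:

```python
def fuzzy_c_means(values, c=2, m=2.0, iters=50):
    """1-D fuzzy c-means: cluster pixel intensities into `c` fuzzy clusters.

    Returns (centers, memberships) where memberships[k][i] is the degree
    to which pixel k belongs to cluster i; m > 1 controls the fuzziness.
    """
    lo, hi = min(values), max(values)
    centers = [lo + (hi - lo) * (i + 0.5) / c for i in range(c)]
    u = [[0.0] * c for _ in values]
    for _ in range(iters):
        # update memberships from distances to the current centers
        for k, x in enumerate(values):
            d = [abs(x - ci) + 1e-9 for ci in centers]  # guard zero distance
            for i in range(c):
                u[k][i] = 1.0 / sum((d[i] / dj) ** (2.0 / (m - 1.0))
                                    for dj in d)
        # update centers as membership-weighted means
        for i in range(c):
            num = sum((u[k][i] ** m) * x for k, x in enumerate(values))
            den = sum(u[k][i] ** m for k in range(len(values)))
            centers[i] = num / den
    return centers, u
```

On well-separated intensities the centers converge near the tissue and mass means, and thresholding the memberships yields a segmentation mask.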
Figure 6. (a) Accuracy, (b) sensitivity, and (c) specificity [142].
Figure 7. U-Net model [162].
Summary of reviewed works on deep learning models.
| Subcategory | Related works | Year | Technique | Filter | Database | Evaluation metric |
|---|---|---|---|---|---|---|
| DL | [ | 2015 | CRF | | INbreast and DDSM-BCRP | 89.0% Dice |
| DL | [ | 2018 | Adversarial FCN-CRF | | INbreast and DDSM-BCRP | 97.0% accuracy |
| DL | [ | 2018 | FrCN | | INbreast | 92.97% segmentation accuracy, 92.69% Dice, and 85.93% MCC |
| DL | [ | 2018 | CRU-Net | | INbreast and DDSM | 93.66% Dice for INbreast and 93.32% for DDSM |
| DL | [ | 2019 | ResCU-Net and MS-ResCU-Net | | INbreast | 91.78% Dice, 94.16% accuracy, and 85.12% Jaccard with MS-ResCU-Net |
| DL | [ | 2019 | U-Net and AGS | | DDSM | 82.24% |
| DL | [ | 2019 | RU-Net | CLAHE filter | INbreast and DDSM-BCRP | 98.0% Dice, 94.0% IoU, and 98.0% accuracy |
| DL | [ | 2019 | U-Net | Laplace filter | DDSM | 97.80% of Dice and 98.50% of |
| DL | [ | 2019 | AUNet | | INbreast and DDSM | 81.80% Dice for DDSM and 79.10% Dice for INbreast |
| DL | [ | 2020 | Mask RCNN | | INbreast | 88.0% Dice |
| DL | [ | 2020 | FrCN | | INbreast | 92.69% Dice, 92.97% accuracy, and 86.37% Jaccard |
| DL | [ | 2020 | U-Net | Adaptive median | INbreast and DDSM | 89.0% of Dice and mean IOU of 90.90% |
| DL | [ | 2020 | DS-U-Net | CLAHE filter | INbreast and DDSM | 82.7% Dice, 99.7% Jaccard, and 83.0% accuracy |
| DL | [ | 2020 | cGAN | Median filter | INbreast | 88.0% of Dice, Jac of 78.0%, and 98.0% accuracy |
| DL | [ | 2020 | cGAN | Morphological filter | DDSM | 94.0% of Dice and IOU of 87.0% |
| DL | [ | 2020 | Mask RCNN and DeepLab | Savitzky Golay filter | MIAS and DDSM | 80.0% accuracy |
| DL | [ | 2020 | Mask RCNN-FPN | | DDSM | 91.0% accuracy and 84.0% precision |
| DL | [ | 2020 | U-Net | | DDSM | 79.39% Dice, 86.40% AUC, and 85.95% accuracy |
| DL | [ | 2021 | U-Net | | DDSM | 88.0% accuracy |
| DL | [ | 2021 | U-Net | | MIAS and DDSM | 98.87% Dice, 98.88% AUC, and |
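The rows above are scored almost exclusively with mask-overlap metrics. As a sketch of how Dice, Jaccard (IoU), sensitivity, specificity, and accuracy are derived from pixel-wise counts on a binary predicted mask versus ground truth (not tied to any specific reviewed paper):

```python
def segmentation_metrics(pred, truth):
    """Compute common mask-overlap metrics from two flat binary lists.

    TP/FP/FN/TN are pixel counts; Dice = 2TP / (2TP + FP + FN) and
    Jaccard (IoU) = TP / (TP + FP + FN), so Jaccard = Dice / (2 - Dice)
    and Jaccard can never exceed Dice.
    """
    tp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, truth) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 1)
    tn = sum(1 for p, t in zip(pred, truth) if p == 0 and t == 0)
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "jaccard": tp / (tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(pred),
    }
```

Because tumour pixels are a small fraction of a mammogram, accuracy alone is inflated by the background; Dice and Jaccard focus on the lesion overlap, which is why the deep learning papers report them.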