Hussah Nasser AlEisa, Wajdi Touiti, Amel Ali ALHussan, Najib Ben Aoun, Ridha Ejbali, Mourad Zaied, Ayesha Saadia.
Abstract
In this paper, a new classification approach for breast cancer based on Fully Convolutional Networks (FCNs) and a Beta Wavelet Autoencoder (BWAE) is presented. The FCN, a powerful image segmentation model, extracts the relevant information from mammography images by identifying the zones of interest, while the WAE models the information extracted from these zones. The WAE has proven superior to most feature extraction approaches. Fusing these two techniques improves the feature extraction phase by keeping and modeling only the relevant, useful features for identifying and describing breast masses. The experimental results show the effectiveness of the proposed method, which gives very encouraging results compared with state-of-the-art approaches on the same mammographic image base. A precision of 94% for benign and 93% for malignant masses was achieved, with a recall of 92% for benign and 95% for malignant masses. For the normal case, a rate of 100% was reached.
Year: 2022 PMID: 35785059 PMCID: PMC9246636 DOI: 10.1155/2022/8044887
Source DB: PubMed Journal: Comput Intell Neurosci
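The abstract describes a two-stage pipeline: FCN-based segmentation of the mammogram to isolate the relevant zones, followed by autoencoder feature modeling and classification of the segmented regions. The sketch below shows how such a pipeline could be wired together; the model objects, the mask threshold, and the function names are illustrative assumptions, not the authors' exact implementation.

```python
# Hedged sketch of the segment -> encode -> classify pipeline from the abstract.
# All models are assumed to be trained elsewhere; shapes and thresholds are illustrative.
import numpy as np
import tensorflow as tf

def segment_regions(fcn: tf.keras.Model, images: np.ndarray) -> np.ndarray:
    """Apply a trained FCN and keep only pixels inside the predicted mask."""
    masks = fcn.predict(images) > 0.5        # binary segmentation maps (assumed threshold)
    return images * masks                    # suppress irrelevant background

def encode_features(encoder: tf.keras.Model, regions: np.ndarray) -> np.ndarray:
    """Project segmented regions into the autoencoder's latent space."""
    return encoder.predict(regions)

def classify(classifier: tf.keras.Model, features: np.ndarray) -> np.ndarray:
    """Predict one of three classes: benign, malignant, normal."""
    return np.argmax(classifier.predict(features), axis=1)
```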
List of studies that used different deep segmentation methods for breast cancer mammogram images.
| Ref# | Year | Segmentation method | Segmentation accuracy (Dice coefficient) | Classifier | Dataset | Classification accuracy |
|---|---|---|---|---|---|---|
| [ ] | 2020 | Vanilla U-net | 95.1% | VGG-16 | CBIS-DDSM, INbreast, UCHCDM, BCDR-01 | 92.6% |
| [ ] | 2019 | RU-Net | 98.3% | ResNet | INbreast | 98.7% |
| [ ] | 2019 | U-Net with integrated AGs | 82.24% | — | DDSM | 78.38% |
| [ ] | 2018 | FrCN | 92.69% | CNN | INbreast | 95.64% |
| [ ] | 2015 | CRF | 90% | — | DDSM-BCRP and INbreast | — |
| [ ] | 2020 | cGAN | 98% | CNN based on BI-RADS | Abreast | 97.85% |
| [ ] | 2020 | DSPAE | — | Linear classifier | MIAS | 97.54% |
Figure 1. Illustration of the approach.
Figure 2. Original and cropped image.
Figure 3. Images resulting from the normalization and artifact-removal phases.
Figure 4. CLAHE enhancement.
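Figure 4 shows the CLAHE enhancement step of the preprocessing phase. Below is a minimal sketch of CLAHE applied to a grayscale mammogram with OpenCV; the clip limit and tile size are common defaults, not values reported in the paper.

```python
# Hedged sketch of the CLAHE enhancement step (Figure 4) using OpenCV.
import cv2
import numpy as np

def enhance_mammogram(path: str) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)                 # load as 8-bit grayscale
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # assumed parameters
    return clahe.apply(gray)                                      # locally equalized image
```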
Figure 5. FCN8s model architecture.
Figure 6. General architecture of an FCN [33].
Figure 7. FCN8s segmentation results.
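Figures 5–7 concern the FCN8s segmentation model. The sketch below builds an FCN-8s-style network on a VGG16 backbone in Keras, fusing the pool3/pool4/pool5 score maps through transposed convolutions; the backbone choice, input size, and number of classes are assumptions and may differ from the paper's exact FCN8s configuration.

```python
# Minimal FCN-8s-style sketch on a VGG16 backbone (cf. Figures 5 and 6).
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def build_fcn8s(input_shape=(224, 224, 3), n_classes=2) -> Model:
    backbone = VGG16(include_top=False, weights=None, input_shape=input_shape)
    pool3 = backbone.get_layer("block3_pool").output   # 1/8 resolution
    pool4 = backbone.get_layer("block4_pool").output   # 1/16 resolution
    pool5 = backbone.get_layer("block5_pool").output   # 1/32 resolution

    # 1x1 convolutions produce per-class score maps at each scale.
    score5 = layers.Conv2D(n_classes, 1)(pool5)
    score4 = layers.Conv2D(n_classes, 1)(pool4)
    score3 = layers.Conv2D(n_classes, 1)(pool3)

    # Progressively upsample and fuse skip connections (the "8s" in FCN-8s).
    up5 = layers.Conv2DTranspose(n_classes, 4, strides=2, padding="same")(score5)
    fuse4 = layers.Add()([up5, score4])
    up4 = layers.Conv2DTranspose(n_classes, 4, strides=2, padding="same")(fuse4)
    fuse3 = layers.Add()([up4, score3])
    out = layers.Conv2DTranspose(n_classes, 16, strides=8, padding="same",
                                 activation="softmax")(fuse3)
    return Model(backbone.input, out)
```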
Figure 8. DSWAE with two layers.
Figure 9. Learning curves of training the autoencoder model.
Figure 10. Autoencoder image reconstruction results.
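Figures 8–10 concern the two-layer stacked autoencoder used for feature modeling and reconstruction. The sketch below is a generic two-layer stacked dense autoencoder; the paper's Beta wavelet activation functions are not reproduced here, and the layer sizes are illustrative assumptions.

```python
# Hedged sketch of a two-layer stacked autoencoder in the spirit of Figure 8.
# Standard ReLU/sigmoid activations stand in for the paper's Beta wavelet activations.
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_stacked_autoencoder(input_dim=4096, h1=512, h2=128):
    inputs = layers.Input(shape=(input_dim,))
    # Encoder: two stacked hidden layers compress the segmented-region pixels.
    e1 = layers.Dense(h1, activation="relu")(inputs)
    code = layers.Dense(h2, activation="relu")(e1)
    # Decoder mirrors the encoder to reconstruct the input (cf. Figure 10).
    d1 = layers.Dense(h1, activation="relu")(code)
    outputs = layers.Dense(input_dim, activation="sigmoid")(d1)
    autoencoder = Model(inputs, outputs)
    encoder = Model(inputs, code)        # reused later as the feature extractor
    autoencoder.compile(optimizer="adam", loss="mse")
    return autoencoder, encoder
```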
Figure 11. Learning curves of training the hybrid classification autoencoder model.
Table 2 shows the confusion matrix presenting the classification rates resulting from our approach.
Classification rate evaluation.
| Study | Global accuracy | Approach |
|---|---|---|
| Abdelhafiz et al. [ ] | 0.926 | VGG-16 |
| Tsochatzidis et al. [ ] | 0.81 | Content-based image retrieval approach |
| Rouhi et al. [ ] | 0.79 | Region growing and CNN segmentation |
| Xie et al. [ ] | 0.68 | ELM |
| Our approach | 0.95 | FCN + WAE |
Classification metrics.
| Class | Precision | Recall |
|---|---|---|
| Benign | 0.94 | 0.92 |
| Malignant | 0.93 | 0.95 |
| Normal | 1.00 | 1.00 |
Confusion matrix.
| Actual \ Predicted | Benign | Malignant | Normal |
|---|---|---|---|
| Benign | 473 | 41 | 0 |
| Malignant | 30 | 555 | 0 |
| Normal | 0 | 0 | 500 |
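As a consistency check, the precision and recall reported in the metrics table can be recomputed directly from the confusion matrix above (taking rows as actual classes and columns as predicted classes):

```python
# Worked check: precision/recall from the confusion matrix counts.
import numpy as np

cm = np.array([[473,  41,   0],    # actual benign
               [ 30, 555,   0],    # actual malignant
               [  0,   0, 500]])   # actual normal

precision = np.diag(cm) / cm.sum(axis=0)   # TP / all predicted as the class
recall    = np.diag(cm) / cm.sum(axis=1)   # TP / all actually in the class
print(precision.round(2))  # ~[0.94 0.93 1.00]
print(recall.round(2))     # ~[0.92 0.95 1.00]
```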