Yahya Alqahtani, Umakant Mandawkar, Aditi Sharma, Mohammad Najmus Saquib Hasan, Mrunalini Harish Kulkarni, R Sugumar.
Abstract
An automatic histopathological image classification system is essential for expediting diagnosis and reducing error rates. Despite its enormous clinical importance, computerized multiclassification of breast cancer from histological images has rarely been investigated. A deep learning-based strategy is proposed to address the automated classification of breast cancer pathology images. The channel recalibration model is an attention model that acts on feature channels: the learned channel weights can suppress superfluous features, and the feature channels are recalibrated to increase classification accuracy. To improve the channel recalibration results, a multiscale channel recalibration model is proposed and the msSE-ResNet convolutional neural network is built. Multiscale features are obtained through the network's max pooling layers, and the channel weights obtained at different scales are fused and passed as input to the next channel recalibration model, which further improves recalibration. The experiments also reveal that the spatial recalibration model, although effective for semantic segmentation of brain MRI images, fares poorly on classifying breast cancer pathology images. Experiments are conducted on the public BreakHis dataset. According to the experimental results, the network classifies benign/malignant breast pathology images collected at various magnifications with an accuracy of 88.87 percent and is more robust across pathological images. Experiments on pathological images at various magnifications show that msSE-ResNet34 performs well when classifying pathological images at different magnifications.
Year: 2022 PMID: 36072731 PMCID: PMC9444358 DOI: 10.1155/2022/7075408
Source DB: PubMed Journal: Comput Intell Neurosci
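The channel recalibration model described in the abstract is the squeeze-and-excitation (SE) mechanism: squeeze each channel to a descriptor, pass the descriptors through a small bottleneck to produce per-channel weights in (0, 1), and rescale the channels. A minimal pure-Python sketch of one recalibration step follows; the toy weight matrices `w1`/`w2` and tensor shapes are illustrative assumptions, not values from the paper.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_recalibrate(feature_maps, w1, w2):
    """One squeeze-and-excitation channel recalibration step (illustrative sketch).

    feature_maps: list of C channels, each an H x W list of lists.
    w1: bottleneck weights, shape (C // r, C); w2: expansion weights, shape (C, C // r).
    Returns (recalibrated feature maps, per-channel weights in (0, 1)).
    """
    # Squeeze: global average pooling -> one descriptor per channel.
    z = []
    for ch in feature_maps:
        total = sum(sum(row) for row in ch)
        z.append(total / (len(ch) * len(ch[0])))
    # Excitation: FC bottleneck -> ReLU -> FC -> sigmoid gives channel weights.
    hidden = [max(0.0, sum(w * zc for w, zc in zip(row, z))) for row in w1]
    s = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Scale: reweight each channel by its learned weight, suppressing weak channels.
    scaled = [[[v * s[c] for v in row] for row in ch]
              for c, ch in enumerate(feature_maps)]
    return scaled, s

# Toy usage: two 2x2 channels, reduction ratio r = 2 (hypothetical weights).
fm = [[[1.0, 2.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 0.0]]]
out, weights = se_recalibrate(fm, [[0.5, 0.5]], [[1.0], [1.0]])
```

Every output value is the corresponding input value multiplied by its channel's weight, so an all-zero channel stays zero while informative channels keep their relative structure.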
Figure 1. Residual structure.
Figure 2. Residual structure and SE residual structure.
Figure 3. msSE residual structure.
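In the msSE block, channel descriptors are computed at more than one spatial scale before recalibration, with the multiscale features obtained through max pooling. A hedged sketch of how two-scale descriptors could be derived (the 2x2 pooling window and two-scale setup are assumptions for illustration; each scale's descriptor vector would drive its own channel-weight branch before fusion):

```python
def max_pool2x2(ch):
    """2x2 max pooling with stride 2 over one channel (even H and W assumed)."""
    H, W = len(ch), len(ch[0])
    return [[max(ch[i][j], ch[i][j + 1], ch[i + 1][j], ch[i + 1][j + 1])
             for j in range(0, W, 2)] for i in range(0, H, 2)]

def multiscale_descriptors(feature_maps):
    """Per-channel descriptors at two scales: the raw map and its max-pooled map.

    Returns one global-average descriptor per channel at each scale; in an
    msSE-style block, each scale feeds a separate recalibration branch.
    """
    def gap(ch):  # global average pooling of one channel
        return sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
    scale1 = [gap(ch) for ch in feature_maps]
    scale2 = [gap(max_pool2x2(ch)) for ch in feature_maps]
    return scale1, scale2
```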
Distribution of images under different magnifications and categories.

| Magnification | Benign | Malignant | Total |
|---|---|---|---|
| 40× | 750 | 1644 | 2394 |
| 100× | 773 | 1725 | 2498 |
| 200× | 748 | 1668 | 2416 |
| 400× | 706 | 1479 | 2184 |
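The class imbalance in the table above matters for interpreting accuracy, since malignant images outnumber benign ones at every magnification. A small sketch that aggregates the reported counts:

```python
# Image counts per magnification, as reported in the distribution table.
counts = {
    "40x":  {"benign": 750, "malignant": 1644},
    "100x": {"benign": 773, "malignant": 1725},
    "200x": {"benign": 748, "malignant": 1668},
    "400x": {"benign": 706, "malignant": 1479},
}

def summarize(counts):
    """Return (total image count, overall malignant fraction) across magnifications."""
    benign = sum(c["benign"] for c in counts.values())
    malignant = sum(c["malignant"] for c in counts.values())
    return benign + malignant, malignant / (benign + malignant)

total, malignant_frac = summarize(counts)
```

Roughly 69 percent of the images are malignant, so a trivial always-malignant classifier would already score well above 50 percent accuracy. Note also that the 400× row's printed total (2184) is one less than its benign + malignant sum (706 + 1479 = 2185), which appears to be a typo in the source.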
Figure 4. Benign and malignant breast tumor images. (a) Benign breast tumor image. (b) Malignant breast tumor image.
Comparison of classification results between msSE-ResNet18 and other networks.

| Model | Accuracy (%) | AUC |
|---|---|---|
| ResNet18 | 84.53 | 0.8878 |
| SE-ResNet18 | 83.56 | 0.8791 |
| scSE-ResNet18 | 83.90 | 0.8677 |
| msSE-ResNet18-2way | 86.81 | 0.9266 |
| msSE-ResNet18-3way | 86.00 | 0.9107 |
Figure 5. Comparison of accuracy between msSE-ResNet34 and other networks.
Comparison of magnification-related classification results for all networks. Each cell lists the three reported metrics per magnification as value / value / value (the metric column labels were not recovered from the source layout).

| Model | 40× | 100× | 200× | 400× |
|---|---|---|---|---|
| ResNet18 | 0.822 / 0.845 / 0.907 | 0.836 / 0.836 / 0.921 | 0.864 / 0.868 / 0.947 | 0.875 / 0.864 / 0.967 |
| SE-ResNet18 | 0.826 / 0.820 / 0.956 | 0.862 / 0.861 / 0.953 | 0.867 / 0.862 / 0.962 | 0.879 / 0.865 / 0.973 |
| scSE-ResNet18 | 0.805 / 0.808 / 0.941 | 0.836 / 0.845 / 0.935 | 0.870 / 0.866 / 0.962 | 0.824 / 0.837 / 0.918 |
| msSE-ResNet18-2way | 0.862 / 0.890 / 0.912 | 0.862 / 0.884 / 0.921 | 0.880 / 0.887 / 0.947 | 0.889 / 0.889 / 0.957 |
| msSE-ResNet18-3way | 0.829 / 0.856 / 0.902 | 0.868 / 0.878 / 0.940 | 0.874 / 0.905 / 0.913 | 0.882 / 0.884 / 0.951 |
Comparison of classification results of fusion methods under different numbers of feature scales (the two metric column labels were not recovered from the source layout; values are percentages).

| Number of scales | Fusion method | Metric 1 (%) | Metric 2 (%) |
|---|---|---|---|
| 2 | Add | 85.42 | 85.07 |
| 2 | Max | 83.55 | 82.13 |
| 2 | Cat1(sign) | 84.10 | 82.96 |
| 2 | Cat1 | 85.38 | 84.64 |
| 2 | Cat2(sign) | 83.93 | 82.76 |
| 2 | Cat2 | 84.61 | 82.88 |
| 3 | Add | 85.00 | 83.71 |
| 3 | Max | 83.45 | 82.22 |
| 3 | Cat1(sign) | 83.76 | 82.09 |
| 3 | Cat1 | 85.61 | 84.28 |
| 3 | Cat2(sign) | 83.57 | 82.39 |
| 3 | Cat2 | 84.50 | 83.25 |
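The Add/Max/Cat rows above are strategies for combining the channel-weight vectors produced by the per-scale branches. The source does not spell out how Cat1/Cat2 or the "(sign)" variants differ, so the sketch below shows only the generic element-wise add, element-wise max, and concatenation options:

```python
def fuse_channel_weights(branches, method="add"):
    """Fuse per-scale channel-weight vectors before the next recalibration step.

    branches: list of equal-length weight vectors, one per feature scale.
    'add' and 'max' fuse element-wise and keep one weight per channel;
    'cat' concatenates, leaving a following layer to map the longer vector
    back down to one weight per channel.
    """
    if method == "add":
        return [sum(ws) for ws in zip(*branches)]      # element-wise sum
    if method == "max":
        return [max(ws) for ws in zip(*branches)]      # element-wise maximum
    if method == "cat":
        return [w for branch in branches for w in branch]  # concatenation
    raise ValueError(f"unknown fusion method: {method}")
```

For example, fusing two 2-channel branches with `"cat"` yields a length-4 vector, while `"add"` and `"max"` keep the length at 2.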
Figure 6. Comparison of AUC between msSE-ResNet34 and other networks.
Comparison of classification results between msSE-ResNet34 and other networks.

| Model | Accuracy (%) | AUC |
|---|---|---|
| ResNet34 | 86.47 | 0.9135 |
| SE-ResNet34 | 87.36 | 0.9097 |
| scSE-ResNet34 | 83.96 | 0.8722 |
| msSE-ResNet34-2way | 88.06 | 0.9308 |
| msSE-ResNet34-3way | 88.87 | 0.9541 |
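The AUC column reports the probability that a randomly chosen malignant image receives a higher predicted score than a randomly chosen benign one. A minimal sketch of computing AUC from predicted scores via the Wilcoxon-Mann-Whitney statistic (this is a generic implementation, not the paper's evaluation code):

```python
def auc(scores, labels):
    """AUC for binary labels (1 = malignant, 0 = benign); ties count as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect ranking gives 1.0 and a random one about 0.5, which is why AUC complements accuracy on an imbalanced dataset like BreakHis.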
Comparison of classification results of each fusion method under different numbers of feature scales (the two metric column labels were not recovered from the source layout; values are percentages).

| Number of scales | Fusion method | Metric 1 (%) | Metric 2 (%) |
|---|---|---|---|
| 2 | Add | 86.28 | 86.30 |
| 2 | Max | 85.65 | 85.97 |
| 2 | Cat1(sign) | 84.88 | 85.01 |
| 2 | Cat1 | 86.86 | 86.28 |
| 2 | Cat2(sign) | 85.29 | 85.26 |
| 2 | Cat2 | 87.40 | 85.90 |
| 3 | Add | 85.89 | 85.43 |
| 3 | Max | 86.44 | 86.87 |
| 3 | Cat1(sign) | 85.89 | 85.77 |
| 3 | Cat1 | 86.59 | 86.36 |
| 3 | Cat2(sign) | 85.69 | 86.54 |
| 3 | Cat2 | 87.29 | 87.09 |
Figure 7. Comparison of classification results of each fusion method under different numbers of feature scales.