Panyun Zhou1, Yanzhen Cao2, Min Li3,4, Yuhua Ma5,6, Chen Chen3,7, Xiaojing Gan2, Jianying Wu8, Xiaoyi Lv9,10,11,12,13, Cheng Chen14.
Abstract
Histopathological image analysis is the gold standard by which pathologists grade colorectal cancers of different differentiation types. However, diagnosis by pathologists is highly subjective and prone to misdiagnosis. In this study, we constructed a new attention mechanism named MCCBAM, based on a channel attention mechanism and a spatial attention mechanism, and developed a computer-aided diagnosis (CAD) method based on CNN and MCCBAM, called HCCANet. The study included 630 histopathology images preprocessed with Gaussian filtering for denoising, and gradient-weighted class activation mapping (Grad-CAM) was used to visualize HCCANet's regions of interest to improve its interpretability. The experimental results show that the proposed HCCANet model outperforms four advanced deep learning models (ResNet50, MobileNetV2, Xception, and DenseNet121) and four classical machine learning methods (KNN, NB, RF, and SVM), achieving 90.2%, 85%, and 86.7% classification accuracy for colorectal cancers with high, medium, and low differentiation levels, respectively, with an overall accuracy of 87.3% and an average AUC of 0.9. In addition, the MCCBAM constructed in this study outperforms several commonly used attention mechanisms (SAM, SENet, SKNet, Non_Local, CBAM, and BAM) on the same backbone network. In conclusion, the HCCANet model proposed in this study is feasible for postoperative adjuvant diagnosis and grading of colorectal cancer.
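The channel-then-spatial gating that MCCBAM combines can be illustrated with a toy sketch in pure Python. The pooling and gating choices below (global average pooling with a sigmoid gate per channel, then a cross-channel mean with a sigmoid gate per pixel) are deliberate simplifications for illustration, not the paper's exact MCCBAM architecture:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(fmap):
    """Scale each channel by a sigmoid gate of its global average (squeeze-style).
    fmap: list of C channels, each an HxW list of lists of floats."""
    gates = []
    for ch in fmap:
        avg = sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
        gates.append(sigmoid(avg))
    return [[[v * g for v in row] for row in ch] for ch, g in zip(fmap, gates)]

def spatial_attention(fmap):
    """Scale each spatial position by a sigmoid gate of its cross-channel mean."""
    C, H, W = len(fmap), len(fmap[0]), len(fmap[0][0])
    mask = [[sigmoid(sum(fmap[c][i][j] for c in range(C)) / C)
             for j in range(W)] for i in range(H)]
    return [[[fmap[c][i][j] * mask[i][j] for j in range(W)]
             for i in range(H)] for c in range(C)]

def mccbam_like(fmap):
    # Apply channel attention first, then spatial attention (CBAM-style ordering).
    return spatial_attention(channel_attention(fmap))
```

Both gates preserve the feature map's shape, so a module like this can be dropped between convolutional stages of a backbone without changing downstream layer sizes.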
Year: 2022 PMID: 36068309 PMCID: PMC9448811 DOI: 10.1038/s41598-022-18879-1
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1. Different differentiation types of colorectal cancer by digital pathological imager at 40 times magnification. (a) Highly differentiated. (b) Moderately differentiated. (c) Poorly differentiated.
Patient information sheet.
| Information | Value |
|---|---|
| Male patients (n) | 60 |
| Female patients (n) | 45 |
| Male average age | 57.97 |
| Female average age | 55.23 |
| High differentiation (I), patients (n) | 35 |
| Medium differentiation (II), patients (n) | 35 |
| Low differentiation (III), patients (n) | 35 |
| High differentiation (I), images (n) | 210 |
| Medium differentiation (II), images (n) | 210 |
| Low differentiation (III), images (n) | 210 |
Figure 2. The framework of HCCANet. (a) The overall architecture of HCCANet. (b) The backbone of HCCANet.
Figure 3. The framework of MCCBAM. (a) The overall architecture of MCCBAM. (b) Components of the Spatial Attention Block. (c) Components of the Channel Attention Block.
Hyperparameter settings for each classifier.
| Model name | Hyper-parameters |
|---|---|
| ResNet50 | Input size: (224, 224, 3), Learning rate: 0.005, Epochs: 100, Optimizer: Adam, Loss function: Categorical Cross-Entropy |
| MobileNetV2 | Same as ResNet50 |
| Xception | Same as ResNet50 |
| DenseNet121 | Same as ResNet50 |
| KNN | Neighbors: 5 |
| RF | Estimators: 850, Random state: 0, Bootstrap: True |
| NB | Alpha: 1.0 |
| SVM | Kernel: RBF, C: 1.0, Gamma: 0.005 |
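As an illustration of the `Neighbors: 5` setting for the KNN classifier above, here is a minimal stdlib sketch of 5-nearest-neighbor majority voting (a toy, not the study's implementation, which would operate on extracted image features):

```python
from collections import Counter
import math

def knn_predict(train, query, k=5):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (feature_vector, label) pairs; k=5 matches the
    neighbor count in the hyperparameter table."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]
```

For example, with three grade-I points near the query and two grade-III points far away, the k=5 vote returns "I".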
Performance metric calculation formulas.
| Performance metric | Formula |
|---|---|
| Precision | TP / (TP + FP) |
| Recall | TP / (TP + FN) |
| F1-score | 2 × Precision × Recall / (Precision + Recall) |
| Accuracy | (TP + TN) / (TP + TN + FP + FN) |
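These standard metrics can be computed directly from confusion-matrix counts (TP, FP, FN, TN); a minimal sketch:

```python
def precision(tp, fp):
    # Fraction of positive predictions that are correct.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of actual positives that are recovered.
    return tp / (tp + fn)

def f1_score(p, r):
    # Harmonic mean of precision and recall.
    return 2 * p * r / (p + r)

def accuracy(tp, tn, fp, fn):
    # Fraction of all predictions that are correct.
    return (tp + tn) / (tp + tn + fp + fn)
```

For a three-grade problem like this one, each metric is computed per grade (one-vs-rest) and the per-grade values are what the results tables below report.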
Comparison of the denoising effect of different filtering techniques.
| Filter type/kernel size | Grading | Precision | Recall | F1-score | Accuracy |
|---|---|---|---|---|---|
| Mean filtering/3 | I | 0.870 | 0.952 | 0.910 | 0.865 |
| | II | 0.872 | 0.810 | 0.840 | |
| | III | 0.854 | 0.833 | 0.843 | |
| Mean filtering/5 | I | 0.868 | 0.786 | 0.825 | 0.810 |
| | II | 0.775 | 0.738 | 0.756 | |
| | III | 0.792 | 0.905 | 0.844 | |
| Mean filtering/7 | I | 1.000 | 0.810 | 0.895 | 0.833 |
| | II | 0.816 | 0.738 | 0.775 | |
| | III | 0.741 | 0.952 | 0.833 | |
| Median filtering/3 | I | 0.804 | 0.881 | 0.841 | 0.810 |
| | II | 0.767 | 0.786 | 0.776 | |
| | III | 0.865 | 0.762 | 0.810 | |
| Median filtering/5 | I | 0.860 | 0.881 | 0.871 | 0.841 |
| | II | 0.795 | 0.833 | 0.814 | |
| | III | 0.872 | 0.810 | 0.840 | |
| Median filtering/7 | I | 0.826 | 0.905 | 0.864 | 0.817 |
| | II | 0.861 | 0.738 | 0.795 | |
| | III | 0.773 | 0.810 | 0.791 | |
| Bilateral filtering/3 | I | 0.860 | 0.881 | 0.871 | 0.825 |
| | II | 0.848 | 0.667 | 0.747 | |
| | III | 0.780 | 0.929 | 0.848 | |
| Bilateral filtering/5 | I | 0.854 | 0.833 | 0.843 | 0.817 |
| | II | 0.729 | 0.833 | 0.778 | |
| | III | 0.892 | 0.786 | 0.835 | |
| Gaussian filtering/3 | I | 0.917 | 0.786 | 0.846 | 0.810 |
| | II | 0.723 | 0.810 | 0.764 | |
| | III | 0.814 | 0.833 | 0.824 | |
| Gaussian filtering/5 | I | 0.902 | 0.881 | 0.892 | 0.873 |
| | II | 0.850 | 0.810 | 0.829 | |
| | III | 0.867 | 0.929 | 0.897 | |
| Gaussian filtering/7 | I | 0.860 | 0.881 | 0.871 | 0.857 |
| | II | 0.833 | 0.833 | 0.833 | |
| | III | 0.878 | 0.857 | 0.867 | |
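The table compares Gaussian filtering at kernel sizes 3, 5, and 7 (size 5 performs best). A minimal sketch of building the normalized 2-D Gaussian kernel such a filter convolves with the image (the sigma value here is an assumed illustration parameter, not one reported above):

```python
import math

def gaussian_kernel(size, sigma=1.0):
    """Build a normalized size x size Gaussian kernel (size odd: 3, 5, 7, ...).
    Each entry is exp(-(x^2 + y^2) / (2*sigma^2)), rescaled so the kernel
    sums to 1 and convolution preserves overall image brightness."""
    r = size // 2
    k = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
          for x in range(-r, r + 1)] for y in range(-r, r + 1)]
    total = sum(sum(row) for row in k)
    return [[v / total for v in row] for row in k]
```

The center weight is the largest, so nearby pixels dominate the smoothed value; larger kernels (and larger sigma) blur more aggressively, which is the trade-off the table above is probing.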
Figure 4. (a) Accuracy of HCCANet based on different filters. (b) AUC values for HCCANet based on different filters.
Comparison of MCCBAM with other attention mechanisms.
| Attention mechanism | Grading | Precision | Recall | F1-score | Accuracy |
|---|---|---|---|---|---|
| SAM | I | 0.769 | 0.714 | 0.741 | 0.754 |
| | II | 0.698 | 0.714 | 0.706 | |
| | III | 0.795 | 0.833 | 0.814 | |
| SENet | I | 0.696 | 0.929 | 0.796 | 0.762 |
| | II | 0.806 | 0.595 | 0.685 | |
| | III | 0.821 | 0.762 | 0.790 | |
| SKNet | I | 0.878 | 0.857 | 0.867 | 0.833 |
| | II | 0.761 | 0.833 | 0.795 | |
| | III | 0.872 | 0.810 | 0.840 | |
| Non_Local | I | 0.857 | 0.857 | 0.857 | 0.841 |
| | II | 0.846 | 0.786 | 0.815 | |
| | III | 0.822 | 0.881 | 0.851 | |
| CBAM | I | 0.850 | 0.810 | 0.829 | 0.817 |
| | II | 0.786 | 0.786 | 0.786 | |
| | III | 0.818 | 0.857 | 0.837 | |
| BAM | I | 0.923 | 0.857 | 0.889 | 0.833 |
| | II | 0.767 | 0.786 | 0.776 | |
| | III | 0.818 | 0.857 | 0.837 | |
| MCCBAM | I | 0.902 | 0.881 | 0.892 | 0.873 |
| | II | 0.850 | 0.810 | 0.829 | |
| | III | 0.867 | 0.929 | 0.897 | |
Figure 5. (a) Accuracy of VGG16 based on different attention mechanisms. (b) AUC values of VGG16 based on different attention mechanisms.
CNN-based classifiers for comparison.
| Classifier | Grading | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|---|
| ResNet50 | I | 0.786 | 0.786 | 0.786 | 0.778 |
| | II | 0.714 | 0.714 | 0.714 | |
| | III | 0.833 | 0.833 | 0.833 | |
| MobileNetV2 | I | 0.727 | 0.571 | 0.640 | 0.690 |
| | II | 0.608 | 0.738 | 0.667 | |
| | III | 0.762 | 0.762 | 0.762 | |
| Xception | I | 0.676 | 0.595 | 0.632 | 0.619 |
| | II | 0.511 | 0.571 | 0.539 | |
| | III | 0.690 | 0.690 | 0.690 | |
| DenseNet121 | I | 0.789 | 0.714 | 0.750 | 0.722 |
| | II | 0.644 | 0.690 | 0.667 | |
| | III | 0.744 | 0.762 | 0.753 | |
| KNN | I | 0.865 | 0.762 | 0.810 | 0.746 |
| | II | 0.675 | 0.643 | 0.659 | |
| | III | 0.714 | 0.833 | 0.769 | |
| RF | I | 0.791 | 0.810 | 0.800 | 0.786 |
| | II | 0.757 | 0.667 | 0.709 | |
| | III | 0.804 | 0.881 | 0.841 | |
| NB | I | 0.703 | 0.619 | 0.658 | 0.643 |
| | II | 0.583 | 0.500 | 0.583 | |
| | III | 0.642 | 0.810 | 0.716 | |
| SVM | I | 0.805 | 0.786 | 0.795 | 0.769 |
| | II | 0.689 | 0.738 | 0.713 | |
| | III | 0.825 | 0.786 | 0.805 | |
| HCCANet | I | 0.902 | 0.881 | 0.892 | 0.873 |
| | II | 0.850 | 0.810 | 0.829 | |
| | III | 0.867 | 0.929 | 0.897 | |
Figure 6. (a) Accuracy of different models in grading histopathological images. (b) AUC values for different models on histopathological image grading.
Figure 7. Histopathological images of colorectal cancer and their corresponding CAMs.
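Grad-CAM produces such maps by weighting a layer's feature maps with gradient-derived channel importances and passing the sum through a ReLU. A toy sketch of that final combination step (the weights here are assumed inputs for illustration; in real Grad-CAM they are the global-average-pooled gradients of the class score with respect to each feature map):

```python
def grad_cam_map(feature_maps, channel_weights):
    """Combine C feature maps (each HxW) with per-channel importance weights,
    then apply ReLU so only regions that positively support the predicted
    class remain highlighted."""
    H, W = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[sum(w * fm[i][j] for w, fm in zip(channel_weights, feature_maps))
            for j in range(W)] for i in range(H)]
    return [[max(0.0, v) for v in row] for row in cam]
```

The resulting low-resolution map is then upsampled to the input image size and overlaid as a heatmap, which is what Figure 7 visualizes for HCCANet.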