Wei Wang, Yun Tian, Yang Xu, Xiao-Xuan Zhang, Yan-Song Li, Shi-Feng Zhao, Yan-Hua Bai.
Abstract
BACKGROUND: Cervical cancer cell detection is an essential means of cervical cancer screening. However, for thin-prep cytology test (TCT)-based images, the detection accuracies of traditional computer-aided detection algorithms are typically low because overlapping cells have blurred cytoplasmic boundaries. Typical deep learning-based detection methods, e.g., ResNets and Inception-V3, are not always efficient for cervical images because cervical cancer cell images differ from natural images. As a result, these networks are difficult to apply directly in the clinical practice of cervical cancer screening.
Keywords: Adaptive anchors; Backbone network; Cervical cancer detection; Feature fusion; Loss function
Year: 2022 PMID: 35870877 PMCID: PMC9308346 DOI: 10.1186/s12880-022-00852-z
Source DB: PubMed Journal: BMC Med Imaging ISSN: 1471-2342 Impact factor: 2.795
Fig. 1 Overall flow of the proposed 3cDe-Net
Fig. 2 Network structure of the proposed DC-ResNet
Fig. 3 Group convolution
Fig. 4 Residual group convolution block
Fig. 5 Residual dilated convolution block
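Figures 4 and 5 name the two building blocks of DC-ResNet. A minimal PyTorch sketch of what such blocks could look like, assuming a ResNet-style bottleneck whose 3×3 convolution is grouped (32 groups, per the structure table below) with an optionally dilated variant (Dilation 2, Stride 2); this is an illustration, not the paper's exact layer configuration:

```python
import torch
import torch.nn as nn

class ResidualGroupConvBlock(nn.Module):
    """Bottleneck residual block whose 3x3 conv is grouped (e.g. 32 groups).

    Illustrative sketch; the exact channel widths are assumptions."""

    def __init__(self, channels: int, groups: int = 32,
                 dilation: int = 1, stride: int = 1):
        super().__init__()
        mid = channels // 2
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, 1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            # Grouped (and optionally dilated) 3x3 convolution.
            nn.Conv2d(mid, mid, 3, stride=stride, padding=dilation,
                      dilation=dilation, groups=groups, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Downsample the identity path when the spatial size changes.
        self.skip = (nn.Identity() if stride == 1 else
                     nn.Conv2d(channels, channels, 1, stride=stride, bias=False))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + self.skip(x))

# Residual dilated convolution block: same skeleton with dilation 2, stride 2.
block = ResidualGroupConvBlock(256, groups=32, dilation=2, stride=2)
out = block(torch.randn(1, 256, 64, 64))  # -> (1, 256, 32, 32)
```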
Fig. 6 IoU calculation diagram
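Figure 6 illustrates intersection over union (IoU), the overlap measure behind the mAP@0.5 and mAP@0.75 thresholds reported below. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7, approx. 0.143
```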
Fig. 7 Example images from the two datasets
Structure and parameters of the improved backbone (DC-ResNet)

| Layer | Parameters |
|---|---|
| Residual group convolution | Group 32 |
| Residual group convolution | Group 32 |
| Residual group convolution | Group 32 |
| Residual dilated convolution | Dilation 2, Stride 2 |
| Residual dilated convolution | Dilation 2, Stride 2 |
| Fully connected | fc-1024 |
| Fully connected | fc-256 |
| Fully connected | fc-2 |
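The table ends in a three-layer fully connected head (1024 → 256 → 2 classes). A minimal PyTorch sketch of such a head, assuming global average pooling over the final feature map; the 2048 input channels are an assumption (typical of ResNet-style backbones), not a figure from the paper:

```python
import torch
import torch.nn as nn

# Hypothetical head matching the fc-1024 / fc-256 / fc-2 rows above.
head = nn.Sequential(
    nn.AdaptiveAvgPool2d(1),          # global average pooling -> (N, C, 1, 1)
    nn.Flatten(),                     # -> (N, C)
    nn.Linear(2048, 1024), nn.ReLU(inplace=True),
    nn.Linear(1024, 256), nn.ReLU(inplace=True),
    nn.Linear(256, 2),                # two classes: positive / negative cell
)

logits = head(torch.randn(4, 2048, 7, 7))  # -> shape (4, 2)
```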
Quantitative comparison obtained on the Data-H dataset
| Method | H-means (%) | Sensitivity (%) | Specificity (%) | F1 (%) | Accuracy (%) |
|---|---|---|---|---|---|
| ResNet-50 | 96.82 | 96.68 | 96.98 | 96.82 | 96.83 |
| ResNet-101 | 96.75 | 97.12 | 96.37 | 96.76 | 96.75 |
| DC-ResNet | 97.11 | 95.92 | 98.34 | 97.09 | 97.13 |
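As a reference for the columns above, a minimal sketch of how these metrics follow from a binary confusion matrix; reading "H-mean" as the harmonic mean of sensitivity and specificity is an assumption, and the counts in the usage line are purely illustrative:

```python
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Binary classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)      # recall on positive (cancer) cells
    specificity = tn / (tn + fp)      # recall on negative cells
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    # Assumption: "H-mean" = harmonic mean of sensitivity and specificity.
    h_mean = 2 * sensitivity * specificity / (sensitivity + specificity)
    return {"h_mean": h_mean, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1, "accuracy": accuracy}

print(metrics(tp=90, fp=2, tn=98, fn=10))  # illustrative counts only
```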
Fig. 8 Accuracy and loss curves of DC-ResNet
Fig. 9 Confusion matrix of DC-ResNet
Accuracy comparison with other classification methods on the Data-H dataset
| Method | Accuracy |
|---|---|
| Inception-v3 | 89.66 ± 1.89% |
| ResNet-152 | 90.87 ± 1.48% |
| Feature concatenation | 92.63 ± 1.68% |
| DC-ResNet | 96.7 ± 1.1% |
Fig. 10 Examples of correct recognition results obtained by DC-ResNet
Detection results of 3cDe-Net
| Improved anchor | Improved loss | DC-ResNet | mAP@0.5 (%) |
|---|---|---|---|
|  |  |  | 47.3 |
| √ |  |  | 48.1 |
| √ | √ |  | 49.3 |
| √ | √ | √ | 50.4 |
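The "Improved anchor" column corresponds to the adaptive-anchors keyword. One common way to adapt anchors to a dataset's object sizes, offered here only as an illustrative assumption and not necessarily the paper's procedure, is YOLO-style k-means over ground-truth box dimensions:

```python
import numpy as np

def adaptive_anchors(wh: np.ndarray, k: int = 9, iters: int = 100) -> np.ndarray:
    """Illustrative k-means over ground-truth (width, height) pairs.

    `wh` has shape (N, 2). Clusters boxes by the IoU of size-aligned
    boxes, as in YOLO-style anchor fitting; an assumption, not the
    paper's exact anchor-adaptation method.
    """
    anchors = wh[np.random.choice(len(wh), k, replace=False)].astype(np.float64)
    for _ in range(iters):
        inter = np.minimum(wh[:, None, :], anchors[None, :, :]).prod(-1)
        union = wh.prod(-1)[:, None] + anchors.prod(-1)[None, :] - inter
        assign = (inter / union).argmax(-1)           # nearest anchor by IoU
        for j in range(k):
            if (assign == j).any():
                anchors[j] = wh[assign == j].mean(0)  # recentre cluster
    return anchors
```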
Fig. 11 Detection examples of 3cDe-Net
mAP results of different backbone networks
| Backbone | mAP@0.5 (%) | mAP@0.75 (%) |
|---|---|---|
| ResNet-50 | 45.4 | 26.2 |
| ResNet-101 | 45.5 | 25.9 |
| DC-ResNet | 46.7 | 26.5 |
Results of FPN fusion with feature maps from different layers
| Fused feature-map layers | mAP@0.5 (%) |
|---|---|
| 1, 2, 3, 4 | 46.0 |
| 1, 2, 3, 5 | 46.5 |
| 1, 2, 4, 5 | 46.1 |
| 1, 2, 3, 4, 5 | 46.7 |
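The table indicates that fusing all five backbone layers gives the best mAP. A minimal sketch of FPN-style top-down fusion over five stages, using torchvision's FeaturePyramidNetwork; the channel counts and spatial sizes are illustrative assumptions:

```python
from collections import OrderedDict

import torch
from torchvision.ops import FeaturePyramidNetwork

# Five backbone stages (layers 1-5); channel counts are assumptions.
channels = [64, 256, 512, 1024, 2048]
fpn = FeaturePyramidNetwork(in_channels_list=channels, out_channels=256)

feats = OrderedDict(
    (f"layer{i + 1}", torch.randn(1, c, 256 // 2**i, 256 // 2**i))
    for i, c in enumerate(channels)
)
fused = fpn(feats)  # every fused map now has 256 channels
print([v.shape for v in fused.values()])
```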