Wen Chen, Weiming Shen, Liang Gao, Xinyu Li.
Abstract
Artificial intelligence (AI) technologies have resulted in remarkable achievements and conferred massive benefits to computer-aided systems in medical imaging. However, the worldwide adoption of AI-based automation-assisted cervical cancer screening systems is hindered by computational cost and resource limitations. Thus, a highly economical and efficient model with enhanced classification ability is highly desirable. This paper proposes a hybrid loss function with label smoothing to improve the distinguishing power of lightweight convolutional neural networks (CNNs) for cervical cell classification. The results strengthen our confidence in hybrid loss-constrained lightweight CNNs, which can achieve satisfactory accuracy with much lower computational cost on the SIPaKMeD dataset. In particular, ShufflenetV2 obtained a comparable classification result (96.18% accuracy, 96.30% precision, 96.23% recall, and 99.08% specificity) with only one-seventh of the memory usage, one-sixth of the number of parameters, and one-fiftieth of the total flops compared with Densenet-121 (96.79% accuracy). GhostNet achieved an improved classification result (96.39% accuracy, 96.42% precision, 96.39% recall, and 99.09% specificity) with one-half of the memory usage, one-quarter of the number of parameters, and one-fiftieth of the total flops compared with Densenet-121 (96.79% accuracy). The proposed lightweight CNNs are likely to lead to an easily applicable and cost-efficient automation-assisted system for cervical cancer diagnosis and prevention.
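The abstract names the key ingredients, a hybrid loss function and label smoothing, without specifying their exact composition. As an illustrative sketch only, the snippet below combines a label-smoothed cross-entropy with a focal-style modulation term via a weighting factor `alpha`; the choice of focal loss as the second term, and the parameters `alpha`, `eps`, and `gamma`, are assumptions for demonstration, not the paper's reported formulation.

```python
import math

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: soften hard 0/1 targets toward a uniform distribution."""
    k = len(one_hot)
    return [(1 - eps) * t + eps / k for t in one_hot]

def cross_entropy(probs, targets):
    """Standard cross-entropy between predicted probabilities and (soft) targets."""
    return -sum(t * math.log(max(p, 1e-12)) for p, t in zip(probs, targets))

def focal_term(probs, targets, gamma=2.0):
    """Focal-style term that down-weights well-classified examples
    (a hypothetical second loss component for this sketch)."""
    return -sum(t * ((1 - p) ** gamma) * math.log(max(p, 1e-12))
                for p, t in zip(probs, targets))

def hybrid_loss(probs, one_hot, alpha=0.5, eps=0.1, gamma=2.0):
    """Weighted sum of label-smoothed cross-entropy and a focal-style term."""
    smoothed = smooth_labels(one_hot, eps)
    return (alpha * cross_entropy(probs, smoothed)
            + (1 - alpha) * focal_term(probs, smoothed, gamma))
```

With five classes (mirroring SIPaKMeD), a confident correct prediction yields a smaller loss than a confident wrong one, while the smoothed targets prevent the network from being pushed toward extreme logits.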
Keywords: cervical cancer diagnosis; deep learning; hybrid loss function; lightweight convolutional neural networks; medical imaging
Year: 2022 PMID: 35590961 PMCID: PMC9101629 DOI: 10.3390/s22093272
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Overview of the proposed method.
Figure 2. Sample categories from the SIPaKMeD dataset.
Data distribution of cell categories in the SIPaKMeD dataset.
| Categories | Number of Image Samples |
|---|---|
| Superficial/Intermediate | 813 |
| Parabasal | 787 |
| Metaplastic | 793 |
| Koilocytotic | 825 |
| Dyskeratotic | 813 |
| Total | 4049 |
The classification performance of four lightweight CNNs with traditional loss and hybrid loss.
| Model | Loss | Accuracy (%) | Precision (%) | Recall (%) | Specificity (%) |
|---|---|---|---|---|---|
| Squeezenet | Traditional loss | 93.85 | 93.96 | 93.87 | 98.46 |
| Squeezenet | Hybrid loss | 94.52 | 94.63 | 94.54 | 98.62 |
| MobilenetV2 | Traditional loss | 87.35 | 87.52 | 87.41 | 96.84 |
| MobilenetV2 | Hybrid loss | 83.10 | 83.41 | 83.22 | 95.78 |
| ShufflenetV2 | Traditional loss | 95.61 | 95.66 | 95.61 | 98.89 |
| ShufflenetV2 | Hybrid loss | 96.18 | 96.30 | 96.23 | 99.08 |
| Ghostnet | Traditional loss | 95.45 | 95.52 | 95.44 | 98.85 |
| Ghostnet | Hybrid loss | 96.39 | 96.42 | 96.39 | 99.09 |
Figure 3. Confusion matrices of Ghostnet and ShufflenetV2 trained with and without hybrid loss (“*” means the CNN model was trained with the hybrid loss function). Class 1, superficial/intermediate; Class 2, parabasal; Class 3, metaplastic; Class 4, koilocytotic; Class 5, dyskeratotic. (a) Ghostnet; (b) Ghostnet*; (c) ShufflenetV2; (d) ShufflenetV2*.
Comparison of the proposed method with existing methods on the SIPaKMeD dataset.
| Method | Accuracy (%) | Total Parameters | Total Memory (M) | Total Flops (GB) |
|---|---|---|---|---|
| Alexnet | 93.58 | 6.11 × 10^7 | 4.19 | 0.70 |
| VGG | 95.35 | 13.84 × 10^7 | 109.39 | 15.50 |
| Resnet-101 | 94.86 | 4.45 × 10^7 | 161.75 | 7.84 |
| Densenet-121 | 96.79 | 0.80 × 10^7 | 147.10 | 2.88 |
| Densenet-121+GCN | 98.37 | p* | m* | f* |
| ShufflenetV2+HL | 96.18 | 0.13 × 10^7 | 20.84 | 0.15 |
| Ghostnet+HL | 96.39 | 0.40 × 10^7 | 40.05 | 0.15 |
Notes: p* > 0.80 × 10^7, m* > 147.10 M, f* > 2.88 GB; ShufflenetV2+HL indicates ShufflenetV2 trained with the proposed hybrid loss function; Ghostnet+HL indicates Ghostnet trained with the proposed hybrid loss function.