Mohammed Aliy Mohammed, Fetulhak Abdurahman, Yodit Abebe Ayalew.
Abstract
BACKGROUND: Automating cytology-based cervical cancer screening could alleviate the shortage of skilled pathologists in developing countries. Computer vision researchers have attempted numerous semi- and fully automated approaches to address this need, and in recent years deep neural networks, with their accuracy and reproducibility, have become the standard choice. In this regard, the purpose of this study is to classify single-cell Pap smear (cytology) images using pre-trained deep convolutional neural network (DCNN) image classifiers. We fine-tuned the top ten pre-trained DCNN image classifiers and evaluated them on five-class single-cell Pap smear images from the SIPaKMeD dataset. The pre-trained DCNN image classifiers were selected from Keras Applications based on their top-1 accuracy.
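The fine-tuning approach described above can be sketched with Keras, e.g. for DenseNet169, one of the ten backbones evaluated. This is a minimal illustration: the input size, frozen-backbone strategy, classifier head, and optimizer are assumptions for the sketch, not the authors' exact configuration.

```python
# Hedged sketch: fine-tuning a Keras Applications backbone for
# five-class single-cell Pap smear classification (SIPaKMeD).
# Hyperparameters below are illustrative, not taken from the paper.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # SIPaKMeD single-cell classes


def build_classifier(weights=None):
    """Build a DenseNet169-based classifier.

    Pass weights="imagenet" to start from the pre-trained weights;
    weights=None builds the same architecture randomly initialised.
    """
    base = tf.keras.applications.DenseNet169(
        include_top=False, weights=weights, input_shape=(224, 224, 3))
    base.trainable = False  # freeze backbone for the first training phase
    model = models.Sequential([
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

In a second phase one would typically unfreeze some or all backbone layers and continue training with a lower learning rate; whether the authors did so is not stated in this record.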
Keywords: CNN; Cervical cancer; Deep learning; Image classification; Pap smear
Year: 2021 PMID: 34187589 PMCID: PMC8244198 DOI: 10.1186/s42490-021-00056-6
Source DB: PubMed Journal: BMC Biomed Eng ISSN: 2524-4426
Fig. 1 Common pipelines to classify CPS images
Fig. 2 Training accuracy (left) and training loss (right) of the proposed classification models
Fig. 3 Validation accuracy (left) and validation loss (right) of the proposed classification models
Individual and average accuracies, precisions, recalls, and F1-scores of the proposed classification models, evaluated on the test dataset
| Accuracy | Precision | Recall | F1-score | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|---|---|---|
| 0.992 | 0.980 | 0.980 | 0.980 | 0.992 | 0.962 | 1.000 | 0.980 |
| 0.964 | 0.894 | 0.930 | 0.912 | 0.958 | 0.899 | 0.890 | 0.894 |
| 0.970 | 0.913 | 0.940 | 0.926 | 0.968 | 0.904 | 0.940 | 0.922 |
| 0.994 | 0.990 | 0.980 | 0.985 | 0.988 | 0.980 | 0.960 | 0.970 |
| 0.988 | 1.000 | 0.940 | 0.969 | 0.990 | 1.000 | 0.950 | 0.974 |
| 0.982 | 0.955 | 0.954 | 0.954 | 0.979 | 0.949 | 0.948 | 0.948 |
| 0.990 | 0.961 | 0.990 | 0.975 | 0.986 | 0.952 | 0.980 | 0.966 |
| 0.968 | 0.920 | 0.920 | 0.920 | 0.958 | 0.891 | 0.900 | 0.896 |
| 0.974 | 0.922 | 0.950 | 0.936 | 0.968 | 0.904 | 0.940 | 0.922 |
| 0.986 | 0.989 | 0.940 | 0.964 | 0.992 | 0.990 | 0.970 | 0.980 |
| 0.998 | 1.000 | 0.990 | 0.995 | 0.988 | 1.000 | 0.940 | 0.969 |
| 0.983 | 0.959 | 0.958 | 0.958 | 0.978 | 0.947 | 0.946 | 0.946 |
| 0.988 | 0.943 | 1.000 | 0.971 | 0.988 | 0.961 | 0.980 | 0.970 |
| 0.964 | 0.936 | 0.880 | 0.907 | 0.964 | 0.918 | 0.900 | 0.909 |
| 0.966 | 0.888 | 0.950 | 0.918 | 0.978 | 0.941 | 0.950 | 0.945 |
| 0.984 | 0.979 | 0.940 | 0.959 | 1.000 | 1.000 | 1.000 | 1.000 |
| 0.994 | 1.000 | 0.970 | 0.985 | 0.998 | 1.000 | 0.990 | 0.995 |
| 0.979 | 0.949 | 0.948 | 0.948 | 0.986 | 0.964 | 0.964 | 0.964 |
| 0.986 | 0.951 | 0.980 | 0.966 | 0.992 | 0.971 | 0.990 | 0.980 |
| 0.964 | 0.918 | 0.900 | 0.909 | 0.968 | 0.912 | 0.930 | 0.921 |
| 0.962 | 0.893 | 0.920 | 0.906 | 0.974 | 0.939 | 0.930 | 0.935 |
| 0.994 | 0.980 | 0.990 | 0.985 | 0.996 | 1.000 | 0.990 | 0.995 |
| 0.990 | 1.000 | 0.950 | 0.974 | 0.992 | 1.000 | 0.980 | 0.990 |
| 0.979 | 0.949 | 0.948 | 0.948 | 0.986 | 0.964 | 0.964 | 0.964 |
| 0.992 | 0.980 | 0.980 | 0.980 | 0.998 | 1.000 | 0.990 | 0.995 |
| 0.962 | 0.909 | 0.900 | 0.905 | 0.974 | 0.922 | 0.950 | 0.936 |
| 0.972 | 0.913 | 0.950 | 0.931 | 0.978 | 0.941 | 0.950 | 0.945 |
| 0.998 | 1.000 | 0.990 | 0.995 | 0.998 | 1.000 | 0.990 | 0.995 |
| 0.996 | 1.000 | 0.980 | 0.990 | 0.998 | 1.000 | 0.990 | 0.995 |
| 0.984 | 0.961 | 0.960 | 0.960 | | | | |
Fig. 4 Confusion matrix for the classification result on the test dataset using DenseNet169
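The per-class accuracy, precision, recall, and F1-score values reported in the table above follow directly from a confusion matrix such as the one in Fig. 4. A minimal sketch of that computation (the confusion matrix used in testing it is illustrative, not the paper's):

```python
# Hedged sketch: per-class accuracy, precision, recall and F1 derived
# from a multi-class confusion matrix, one-vs-rest per class.
def per_class_metrics(cm):
    """cm[i][j] = number of class-i samples predicted as class j.

    Returns a list of (accuracy, precision, recall, f1) tuples,
    one per class.
    """
    n = len(cm)
    total = sum(sum(row) for row in cm)
    metrics = []
    for k in range(n):
        tp = cm[k][k]                               # correct class-k hits
        fp = sum(cm[i][k] for i in range(n)) - tp   # others predicted as k
        fn = sum(cm[k]) - tp                        # class k missed
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        # one-vs-rest accuracy: every sample not an FP or FN of class k
        # counts as correct for this class
        accuracy = (total - fp - fn) / total
        metrics.append((accuracy, precision, recall, f1))
    return metrics
```

Averaging these tuples across the five classes gives the macro-averaged rows of the results table.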
Weight sizes of the proposed pre-trained classification models and their top-1 accuracy on the ImageNet validation dataset
| Model | Size | Top-1 Accuracy |
|---|---|---|
| NASNetLarge | 343 MB | 0.825 |
| InceptionResNetV2 | 215 MB | 0.803 |
| Xception | 88 MB | 0.790 |
| ResNet152V2 | 232 MB | 0.780 |
| InceptionV3 | 92 MB | 0.779 |
| DenseNet201 | 80 MB | 0.773 |
| ResNet101V2 | 171 MB | 0.772 |
| ResNet152 | 232 MB | 0.766 |
| ResNet101 | 171 MB | 0.764 |
| DenseNet169 | 57 MB | 0.762 |
Fig. 6 The general pipeline of the research project: image acquisition, pre-processing, feature extraction and classification
Fig. 5 Sample images from the SIPaKMeD dataset: superficial-intermediate (a), parabasal (b), koilocytotic (c), metaplastic (d) and dyskeratotic (e) cells