Venkatesan Chandran1, M G Sumithra1, Alagar Karthick2, Tony George3, M Deivakani4, Balan Elakkiya5, Umashankar Subramaniam6, S Manoharan7.
Abstract
Traditional screening for cervical cancer type classification depends largely on the pathologist's experience and suffers from limited accuracy. Colposcopy is a critical component of cervical cancer prevention. In conjunction with precancer screening and treatment, colposcopy has played an essential role in lowering the incidence of and mortality from cervical cancer over the last 50 years. However, with increasing workloads, visual screening leads to misdiagnosis and low diagnostic efficiency. Medical image processing using convolutional neural network (CNN) models has shown its superiority for the classification of cervical cancer types in the field of deep learning. This paper proposes two deep learning CNN architectures to detect cervical cancer from colposcopy images: the VGG19 (TL) model and CYENET. In the first architecture, VGG19 is adopted for transfer learning. A new model, termed the Colposcopy Ensemble Network (CYENET), is developed to classify cervical cancers from colposcopy images automatically. Accuracy, specificity, and sensitivity are estimated for the developed models. The classification accuracy for VGG19 (TL) was 73.3%, a relatively satisfactory result; its kappa score places it in the moderate-agreement category. The experimental results show that the proposed CYENET exhibits high sensitivity, specificity, and kappa scores of 92.4%, 96.2%, and 88%, respectively. The classification accuracy of the CYENET model is 92.3%, which is 19 percentage points higher than that of the VGG19 (TL) model.
Year: 2021 PMID: 33997017 PMCID: PMC8112909 DOI: 10.1155/2021/5584004
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.411
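The abstract interprets kappa scores by agreement band (e.g., "moderate" for VGG19 (TL)). A minimal sketch of this interpretation, assuming the commonly used Landis–Koch bands (the band boundaries are the standard Landis–Koch convention, not values stated in this paper):

```python
def kappa_category(kappa: float) -> str:
    """Map a Cohen's kappa score to a Landis-Koch agreement band."""
    if kappa < 0.0:
        return "poor"
    if kappa <= 0.20:
        return "slight"
    if kappa <= 0.40:
        return "fair"
    if kappa <= 0.60:
        return "moderate"
    if kappa <= 0.80:
        return "substantial"
    return "almost perfect"

# CYENET's reported kappa of 0.88 falls in the "almost perfect" band.
print(kappa_category(0.88))  # almost perfect
print(kappa_category(0.50))  # moderate
```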
Summary of related works on cervical cancer screening.
| S.no | Methods | Dataset | Advantages | Disadvantages |
|---|---|---|---|---|
| 1 | Inception V3 model | Herlev dataset | (i) High accuracy | (i) The deep network needs further study to investigate cervical cells |
| 2 | Transfer learning, pretrained DenseNet | Fujian Maternal and Child Health Hospital; Kaggle | (i) More feasible and effective | (i) Limited data |
| 3 | CNN-extreme learning machine- (ELM-) based system | Herlev dataset | (i) Fast learning | (i) More complexity |
| 4 | Gene-assistance module, voting strategy | Chinese hospital and Hospital Universitario de Caracas, Venezuela | (i) More scalable and practical | (i) Limited datasets |
| 5 | Random forest and AdaBoost | Radiotherapy dataset | (i) Better treatment planning | (i) Features need to be extracted |
| 6 | ColpoNet | Colposcopy images | (i) Better accuracy | (i) Accuracy needs improvement by extracting relevant information |
| 7 | CNN model | Papanicolaou-stained cervical smear dataset | (i) Better sensitivity and specificity | (i) Reported 1.8% false-negative images |
| 8 | Fourier transform and machine learning methods | Microscopic images | (i) Fully automatic system | (i) Higher level of complexity |
| 9 | CNN-SVM model | Herlev and one private dataset | (i) Good robustness | (i) Needs improvement in parameter adjustment |
| 10 | Stacked autoencoder | UCI database | (i) High accuracy | (i) Very high training time due to dimensionality reduction |
| 11 | PSO with KNN algorithm | Cervical smear images | (i) Better accuracy | (i) Time-consuming due to two-phase feature selection |
| 12 | Ensemble model | Pap smear images | (i) Achieves 96% accuracy for the 2-class problem | (i) Overlapping cells are difficult to identify |
| 13 | Multimodal deep network | National Cancer Institute | (i) Good correlation | (i) More complexity in image fusion |
Figure 1. Flow chart of the proposed CYENET model for the diagnosis of cervical cancer.
Figure 2. Dataset samples of type 1, type 2, and type 3 classes.
Figure 3. Structure of the proposed CYENET model.
Description of the network architecture of the CYENET model.
| Layer No. | Layer type | Filter size | Stride | No. of filters | FC units | Input | Output |
|---|---|---|---|---|---|---|---|
| 1 | Convolution 1 | 5 × 5 | 2 × 2 | 64 | — | 3 × 227 × 227 | 64 × 112 × 112 |
| 2 | Max-pool_1 | 3 × 3 | 2 × 2 | — | — | 64 × 112 × 112 | 64 × 56 × 56 |
| 3 | Convolution 2 | 1 × 1 | 1 × 1 | 64 | — | 64 × 56 × 56 | 64 × 56 × 56 |
| 4 | Convolution 3 | 3 × 3 | 1 × 1 | 128 | — | 64 × 56 × 56 | 128 × 56 × 56 |
| 5 | Max-pool_2 | 3 × 3 | 2 × 2 | — | — | 128 × 56 × 56 | 128 × 28 × 28 |
| 6 | Parallel convolution 1 | 1 × 1, 3 × 3, 5 × 5 | 1 × 1 | 32 ⊕ 64 ⊕ 128 | — | 128 × 28 × 28 | 224 × 28 × 28 |
| 7 | Max-pool_3 | 3 × 3 | 2 × 2 | — | — | 224 × 28 × 28 | 224 × 14 × 14 |
| 8 | Parallel convolution 2 | 1 × 1, 3 × 3, 5 × 5 | 1 × 1 | 32 ⊕ 64 ⊕ 128 | — | 224 × 14 × 14 | 224 × 14 × 14 |
| 9 | Parallel convolution 3 | 1 × 1, 3 × 3, 5 × 5 | 1 × 1 | 32 ⊕ 64 ⊕ 128 | — | 224 × 14 × 14 | 224 × 14 × 14 |
| 10 | Max-pool_4 | 3 × 3 | 2 × 2 | — | — | 224 × 14 × 14 | 224 × 7 × 7 |
| 11 | Parallel convolution 4 | 1 × 1, 3 × 3, 5 × 5 | 1 × 1 | 32 ⊕ 64 ⊕ 128 | — | 224 × 7 × 7 | 224 × 7 × 7 |
| 12 | Max-pool_5 | 5 × 5 | 1 × 1 | — | — | 224 × 7 × 7 | 224 × 2 × 2 |
| 13 | Fully connected 1 | — | — | — | 512 | 224 × 2 × 2 | 512 |
| 14 | Fully connected 2 | — | — | — | 3 | 512 | 3 |
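The spatial dimensions in the table follow the standard convolution/pooling output formula, out = ⌊(in − k + 2p)/s⌋ + 1, and the parallel-convolution channel counts are concatenated (32 + 64 + 128 = 224). A minimal sketch reproducing the first few rows; the padding values are inferred, since the table does not list them (e.g. padding 1 for the 3 × 3 max-pools):

```python
def out_size(in_size: int, kernel: int, stride: int, padding: int = 0) -> int:
    """Spatial output size of a convolution or pooling layer."""
    return (in_size - kernel + 2 * padding) // stride + 1

# Layer 1: 5x5 convolution, stride 2, no padding: 227 -> 112
print(out_size(227, kernel=5, stride=2))             # 112
# Layer 2: 3x3 max-pool, stride 2, padding 1: 112 -> 56
print(out_size(112, kernel=3, stride=2, padding=1))  # 56
# Layer 5: 3x3 max-pool, stride 2, padding 1: 56 -> 28
print(out_size(56, kernel=3, stride=2, padding=1))   # 28
# Parallel convolution output channels: concatenation of the three branches
print(32 + 64 + 128)                                 # 224
```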
Figure 4. (a) Sample input images and (b) augmented images produced using different techniques.
Figure 5. Feature maps of the convolutional layer with (a) 1 filter and (b) 64 filters.
Figure 6. Training accuracy plot for the CYENET and VGG19 (TL) models.
Figure 7. Validation accuracy plot for the CYENET and VGG19 (TL) models.
Figure 8. Training and validation loss curves for the CYENET and VGG19 (TL) models.
Figure 9. Confusion chart of the proposed CYENET.
Comparative experimental results of the proposed architecture against different models.
| Model name | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) | Ref |
|---|---|---|---|---|---|---|
| DenseNet-121 | 72.42 | 59.86 | 76.83 | 48.39 | 84.52 | — |
| DenseNet-169 | 69.79 | 65.00 | 71.48 | 44.84 | 85.31 | — |
| ColpoNet | 81.0 | — | — | — | — | — |
| SVM | 63.27 | 38.46 | 71.85 | 32.43 | 76.87 | — |
| Inception-ResNet-v2 | 69.3 | 66.70 | 70.6 | 47.20 | 84.00 | — |
| CYENET | 92.30 | 92.40 | 96.20 | 92.00 | 95.00 | Present study |
| VGG19 (TL) | 73.30 | 33.00 | 79.00 | 70.00 | 88.00 | Present study |
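The metrics in the table derive from confusion-matrix counts in the usual way. A sketch for the binary case, using hypothetical counts (TP = 90, FN = 10, FP = 5, TN = 95 are illustrative numbers, not the paper's actual confusion matrix):

```python
def metrics(tp: int, fn: int, fp: int, tn: int) -> dict:
    """Standard classification metrics from binary confusion-matrix counts."""
    n = tp + fn + fp + tn
    po = (tp + tn) / n                      # observed agreement = accuracy
    # Expected chance agreement, needed for Cohen's kappa
    pe = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / (n * n)
    return {
        "accuracy": po,
        "sensitivity": tp / (tp + fn),      # true-positive rate (recall)
        "specificity": tn / (tn + fp),      # true-negative rate
        "ppv": tp / (tp + fp),              # positive predictive value
        "npv": tn / (tn + fn),              # negative predictive value
        "kappa": (po - pe) / (1 - pe),
    }

m = metrics(tp=90, fn=10, fp=5, tn=95)
print(round(m["accuracy"], 3))     # 0.925
print(round(m["sensitivity"], 3))  # 0.9
print(round(m["specificity"], 3))  # 0.95
print(round(m["kappa"], 3))        # 0.85
```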
Figure 10. Performance metric comparison of the systems.
Figure 11. PPV and NPV curves of (a) VGG19 (TL) and (b) CYENET.
Comparison of the proposed architecture with other models in terms of parameter count and run time.
| Model name | Number of parameters | Run time (per epoch) |
|---|---|---|
| DenseNet-121 | 7,978,856 | 21 min 10 s |
| DenseNet-169 | 28,681,000 | 24 min 59 s |
| ColpoNet | 6,977,000 | 16 min 27 s |
| Inception-ResNet-v2 | 55,843,161 | 15 min 36 s |
| CYENET | 8,465,376 | 3 min 32 s |
| VGG19 (TL) | 123,642,856 | 5 min 24 s |
Figure 12. Occlusion sensitivity map for test data.