Amal H Alharbi1, C V Aravinda2, Jyothi Shetty2, Mohamed Yaseen Jabarulla3, K B Sudeepa2, Sitesh Kumar Singh4.
Abstract
Malaria, caused by the protozoan parasite Plasmodium falciparum, is among the most common parasitic diseases of humans according to medical experts. Identifying the stage of infection is a complex procedure that must be performed by a microscopist with expertise in malaria diagnosis, and the disease remains endemic in several parts of the world. The data, collected from the NIH portal, were uploaded to a Kaggle repository; the dataset contains 27,558 samples, of which 13,779 are parasitized and 13,779 are uninfected. This paper focuses on two of the most common deep transfer learning methods. Fine-tuning and pretraining make VGG-19 a particularly effective feature extractor; like several other image-classification models, it has been pretrained on much larger datasets. Deep learning strategies based on such pretrained models are proposed for detecting malaria parasite cases at an early stage, achieving an accuracy of 98.34 ± 0.51%.
Year: 2022 PMID: 35800239 PMCID: PMC9200540 DOI: 10.1155/2022/9171343
Source DB: PubMed Journal: Contrast Media Mol Imaging ISSN: 1555-4309 Impact factor: 3.009
Figure 1. A random sample of cell images infected/not infected with malaria [2].
Figure 2. Morphological filters applied to malaria cells.
Figure 3. Proposed CNN model.
Figure 4. Accuracy performance of the CNN model.
Figure 5. Loss performance of the CNN model.
Performance test.
| Metrics | Performance (%) |
|---|---|
| Testing accuracy | 95.56 |
| F1 score | 96.45 |
| AUC score | 95.45 |
| Sensitivity | 96.65 |
| Specificity | 95.25 |
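The sensitivity and specificity figures above follow the standard confusion-matrix definitions. A minimal sketch in Python (the counts below are hypothetical, chosen only to illustrate the arithmetic):

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: share of parasitized cells correctly flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: share of uninfected cells correctly cleared."""
    return tn / (tn + fp)

# Hypothetical counts, for illustration only
print(sensitivity(tp=966, fn=34))   # 0.966
print(specificity(tn=952, fp=48))   # 0.952
```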
Convolutional neural network model.
| Layer | Output (channels/units) | Parameters |
|---|---|---|
| input_1 | 3 | 0 |
| conv2d | 32 | 896 |
| max_pooling2d | 32 | 0 |
| conv2d_1 | 64 | 18,496 |
| max_pooling2d_1 | 64 | 0 |
| conv2d_2 | 128 | 73,856 |
| max_pooling2d_2 | 128 | 0 |
| flatten | 28,800 | 0 |
| dense | 512 | 14,746,112 |
| dropout | 512 | 0 |
| dense_1 | 512 | 262,656 |
| dropout_1 | 512 | 0 |
| dense_2 | 1 | 513 |
| Total | — | 15,102,529 |
| Trainable | — | 15,102,529 |
| Non-trainable | — | 0 |
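The per-layer parameter counts in the table follow directly from the usual Conv2D/Dense counting rules (the 896 parameters of the first convolution imply 3×3 kernels over 3 input channels). A quick arithmetic check reproducing the reported total:

```python
def conv2d_params(k, c_in, c_out):
    # k*k*c_in weights per output channel, plus one bias each
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # one weight per input-output pair, plus one bias per output unit
    return n_in * n_out + n_out

per_layer = [
    conv2d_params(3, 3, 32),     # conv2d    -> 896
    conv2d_params(3, 32, 64),    # conv2d_1  -> 18,496
    conv2d_params(3, 64, 128),   # conv2d_2  -> 73,856
    dense_params(28_800, 512),   # dense     -> 14,746,112 (flatten yields 28,800)
    dense_params(512, 512),      # dense_1   -> 262,656
    dense_params(512, 1),        # dense_2   -> 513
]
print(sum(per_layer))  # 15102529, matching the table's total
```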
Figure 6. Accuracy performance of the modified CNN model.
Figure 7. Loss performance of the modified CNN model.
Pretrained convolutional neural network model (VGG-19).
| Layer | Output (channels/units) | Parameters |
|---|---|---|
| input_2 | 3 | 0 |
| block1_conv1 | 64 | 1,792 |
| block1_conv2 | 64 | 36,928 |
| block1_pool | 64 | 0 |
| block2_conv1 | 128 | 73,856 |
| block2_conv2 | 128 | 147,584 |
| block2_pool | 128 | 0 |
| block3_conv1 | 256 | 295,168 |
| block3_conv2 | 256 | 590,080 |
| block3_conv3 | 256 | 590,080 |
| block3_conv4 | 256 | 590,080 |
| block3_pool | 256 | 0 |
| block4_conv1 | 512 | 1,180,160 |
| block4_conv2 | 512 | 2,359,808 |
| block4_conv3 | 512 | 2,359,808 |
| block4_conv4 | 512 | 2,359,808 |
| block4_pool | 512 | 0 |
| block5_conv1 | 512 | 2,359,808 |
| block5_conv2 | 512 | 2,359,808 |
| block5_conv3 | 512 | 2,359,808 |
| block5_conv4 | 512 | 2,359,808 |
| block5_pool | 512 | 0 |
| flatten_1 | 4,608 | 0 |
| dense_3 | 512 | 2,359,808 |
| dropout_2 | 512 | 0 |
| dense_4 | 512 | 262,656 |
| dropout_3 | 512 | 0 |
| dense_5 | 1 | 513 |
| Total | — | 22,647,361 |
| Trainable | — | 2,622,977 |
| Non-trainable | — | 20,024,384 |
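The frozen VGG-19 base accounts for 20,024,384 of the parameters, while the three trainable dense layers account for the remaining 2,622,977. Under the standard VGG-19 configuration (3×3 convolutions in blocks of 2, 2, 4, 4, and 4 layers), both totals can be checked arithmetically:

```python
# Standard VGG-19 convolutional blocks: (number of conv layers, output channels)
VGG19_BLOCKS = [(2, 64), (2, 128), (4, 256), (4, 512), (4, 512)]

def conv_params(c_in, c_out, k=3):
    return k * k * c_in * c_out + c_out

def dense_params(n_in, n_out):
    return n_in * n_out + n_out

c_in, frozen = 3, 0
for n_convs, c_out in VGG19_BLOCKS:
    for _ in range(n_convs):
        frozen += conv_params(c_in, c_out)
        c_in = c_out

# Trainable head: flatten (4,608 features) -> dense(512) -> dense(512) -> dense(1)
head = dense_params(4608, 512) + dense_params(512, 512) + dense_params(512, 1)

print(frozen)         # 20024384 non-trainable
print(head)           # 2622977 trainable
print(frozen + head)  # 22647361 total
```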
Figure 8. Sample augmented images.
Figure 9. Augmentation accuracy results.
Figure 10. Augmentation loss results.
Figure 11. Confusion matrix results.
Confusion matrix-based analyses.
| Models | Accuracy | F1 score | Precision | Recall |
|---|---|---|---|---|
| Basic CNN | 0.9397 ± 0.23 | 0.9397 ± 0.13 | 0.9397 ± 0.19 | 0.9397 ± 0.27 |
| VGG-19 frozen | 0.9486 ± 0.13 | 0.9482 ± 0.12 | 0.9456 ± 0.15 | 0.9480 ± 0.12 |
| VGG-19 fine-tuned | 0.9704 ± 0.06 | 0.9640 ± 0.06 | 0.9740 ± 0.07 | 0.9700 ± 0.03 |
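The accuracy, F1 score, precision, and recall values above are the standard metrics derived from confusion-matrix counts. A minimal helper sketch (the counts passed in are hypothetical, for illustration only):

```python
def binary_metrics(tp, fp, fn, tn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    accuracy  = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)          # identical to sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts, for illustration only
acc, prec, rec, f1 = binary_metrics(tp=4000, fp=120, fn=125, tn=3955)
```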
Classification performance report of the model.
| Class | Precision | Recall | F1 score | Support |
|---|---|---|---|---|
| Healthy sample | 0.97 | 0.96 | 0.96 | 4085 |
| Malaria-sample | 0.96 | 0.96 | 0.95 | 4173 |
| Micro-average | 0.97 | 0.97 | 0.97 | 8158 |
| Macro-average | 0.97 | 0.97 | 0.97 | 8158 |
| Weighted-average | 0.97 | 0.97 | 0.97 | 8158 |
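The micro, macro, and weighted averages in the report differ in how per-class scores are combined: macro averaging treats every class equally, weighted averaging scales each class by its support, and micro averaging pools the raw counts before computing the metric. A sketch of the first two (the example values are illustrative, not the paper's exact data):

```python
def macro_avg(scores):
    """Unweighted mean over classes."""
    return sum(scores) / len(scores)

def weighted_avg(scores, supports):
    """Mean over classes, weighted by each class's sample count."""
    total = sum(supports)
    return sum(s * n for s, n in zip(scores, supports)) / total

# Illustrative per-class precisions and supports
precisions = [0.97, 0.96]
supports = [4085, 4173]
print(round(macro_avg(precisions), 3))
print(round(weighted_avg(precisions, supports), 3))
```

With nearly balanced classes, as here, the macro and weighted averages agree to three decimal places; they diverge on imbalanced data.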