| Literature DB >> 34868509 |
Omar Faruk1, Eshan Ahmed1, Sakil Ahmed1, Anika Tabassum1, Tahia Tazin1, Sami Bourouis2, Mohammad Monirujjaman Khan1.
Abstract
Deep learning has emerged as a promising technique for many aspects of infectious disease monitoring and detection, including tuberculosis. We built a deep convolutional neural network (CNN) model to assess the generalizability of the deep learning approach using a publicly accessible tuberculosis dataset. This study was able to reliably detect tuberculosis (TB) from chest X-ray images by combining image preprocessing, data augmentation, and deep learning classification techniques. Four distinct deep CNNs (Xception, InceptionV3, InceptionResNetV2, and MobileNetV2) were trained, validated, and evaluated for the classification of tuberculosis and nontuberculosis cases using transfer learning from their pretrained starting weights. InceptionResNetV2 achieved the highest accuracy, with an F1-score of 99 percent. These results are more accurate than earlier published work, and the model outperforms the others in reliability. The proposed approach, with its state-of-the-art performance, may be helpful for computer-assisted rapid TB detection.
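The abstract does not detail the augmentation pipeline; as an illustrative sketch only (the flip and shift operations here are assumptions, not the paper's exact transformations), simple chest X-ray augmentation can be expressed with NumPy:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply simple random augmentations to a 2-D grayscale X-ray array."""
    out = image
    if rng.random() < 0.5:            # random horizontal flip
        out = out[:, ::-1]
    shift = int(rng.integers(-10, 11))  # random horizontal shift, up to 10 px
    out = np.roll(out, shift, axis=1)
    return out.copy()

img = np.arange(12.0).reshape(3, 4)   # toy stand-in for an X-ray image
aug = augment(img, np.random.default_rng(0))
print(aug.shape)                      # shape is preserved: (3, 4)
```

Such label-preserving transformations expand the effective training set without collecting new images.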
Year: 2021 PMID: 34868509 PMCID: PMC8639254 DOI: 10.1155/2021/1002799
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1Workflow diagram of the TB or normal image detection.
Figure 2Non-TB X-ray images.
Figure 3Tuberculosis X-ray images.
Figure 4Total number of TB and non-TB records.
Figure 5Block diagram of the proposed system.
Figure 6System architecture with InceptionResNetV2.
Parameters used for compiling various models.
| Parameters | Value |
|---|---|
| Batch size | 32 |
| Shuffling | Each epoch |
| Optimizer | Adam |
| Learning rate | 1 |
| Decay | 1 |
| Loss | Binary_crossentropy |
| Epoch | 25 |
| Execution environment | GPU |
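The loss listed above is Keras's `binary_crossentropy`. For a single example it reduces to the expression below (a minimal pure-Python sketch for illustration, not the paper's code):

```python
import math

def binary_crossentropy(y_true: float, y_pred: float, eps: float = 1e-7) -> float:
    """Binary cross-entropy for one example; eps guards against log(0)."""
    p = min(max(y_pred, eps), 1.0 - eps)
    return -(y_true * math.log(p) + (1.0 - y_true) * math.log(1.0 - p))

print(round(binary_crossentropy(1.0, 0.9), 4))  # confident, correct: 0.1054
print(round(binary_crossentropy(1.0, 0.1), 4))  # confident, wrong:   2.3026
```

The loss is near zero when the predicted probability matches the true label and grows sharply for confident misclassifications, which is why it suits the two-class TB/non-TB setup.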
Comparison of pretrained models.
| Model | Train accuracy | Val accuracy | Train loss | Val loss | Class | Precision | Recall | F1-score |
|---|---|---|---|---|---|---|---|---|
| Xception | 0.9596 | 0.9543 | 0.1155 | 0.1213 | Normal | 1.00 | 0.90 | 0.95 |
| | | | | | TB | 0.91 | 1.00 | 0.95 |
| InceptionV3 | 0.9800 | 0.9757 | 0.1160 | 0.1243 | Normal | 0.95 | 0.97 | 0.96 |
| | | | | | TB | 0.97 | 0.95 | 0.96 |
| MobileNetV2 | 0.9930 | 0.9793 | 0.0220 | 0.0548 | Normal | 0.96 | 1.00 | 0.98 |
| | | | | | TB | 1.00 | 0.96 | 0.98 |
| InceptionResNetV2 | 0.9912 | 0.9936 | 0.0340 | 0.0237 | Normal | 0.99 | 0.98 | 0.99 |
| | | | | | TB | 0.98 | 0.99 | 0.99 |
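The per-class precision, recall, and F1 values above follow the standard definitions from confusion-matrix counts. A short sketch (the counts below are hypothetical, chosen only to reproduce the rounding in one table row, not taken from the paper's confusion matrix):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts: precision 1.00 and recall 0.90 yield an F1 that
# rounds to 0.95, consistent with the Xception "Normal" row above.
p, r, f1 = precision_recall_f1(tp=90, fp=0, fn=10)
print(round(p, 2), round(r, 2), round(f1, 2))  # 1.0 0.9 0.95
```

F1 is the harmonic mean of precision and recall, so it rewards models that balance false positives against false negatives rather than optimizing one at the other's expense.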
Figure 7Training and validation accuracy.
Figure 8Training and validation loss.
Figure 9Confusion matrix.
Figure 10Normal prediction.
Figure 11TB prediction.
Model comparison with other research.
| This paper (model name) | Accuracy (%) | Reference paper (model name) | Accuracy (%) |
|---|---|---|---|
| MobileNet (validation or testing) | 97.93 | Ref [ | 94.33 |
| InceptionV3 | 98.00 | Ref [ | 83.57 |
| | | Ref [ | 98.54 |
| Xception | 95.96 | VGG16 (reference model) | 87.71 |
| InceptionResNetV2 (validation or testing) | 99.36 | Ref [ | 96.47 |
| | | Ref [ | 98.6 |
| MobileNet | 97.93 | Ref [ | 89.6 |