Shufan Liang, Jiechao Ma, Gang Wang, Jun Shao, Jingwei Li, Hui Deng, Chengdi Wang, Weimin Li.
Abstract
With the rising incidence and mortality of pulmonary tuberculosis, and disease management that remains difficult and controversial, conventional approaches to the diagnosis and differential diagnosis of tuberculosis remain time-consuming and resource-limited, especially in under-resourced countries with a high tuberculosis burden. Meanwhile, the growing proportion of drug-resistant tuberculosis poses a significant hazard to public health. Auxiliary diagnostic tools with higher efficiency and accuracy are therefore urgently required. Artificial intelligence (AI), which is not new but has recently grown in popularity, provides researchers with the opportunities and technical underpinnings to develop novel, precise, rapid, and automated tools for pulmonary tuberculosis care, including but not limited to tuberculosis detection. In this review, we introduce representative AI methods, focusing on deep learning and radiomics, followed by detailed descriptions of state-of-the-art AI models developed from medical images and genetic data to detect pulmonary tuberculosis, distinguish the infection from other pulmonary diseases, and identify drug resistance of tuberculosis, with the aim of assisting physicians in deciding on an appropriate therapeutic schedule in the early stage of the disease. We also enumerate the challenges in maximizing the impact of AI in this field, such as the generalization and clinical utility of deep learning models.
Keywords: artificial intelligence; deep learning; machine learning; pulmonary tuberculosis; radiomics
Year: 2022 PMID: 35966878 PMCID: PMC9366014 DOI: 10.3389/fmed.2022.935080
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
FIGURE 1. Application of artificial intelligence in tuberculosis. NTM-LD, non-tuberculous mycobacterium lung disease; CXR, chest X-ray; CT, computed tomography; PET/CT, positron emission tomography/computed tomography.
FIGURE 2. The workflow of deep learning and radiomics. ROI, region of interest.
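The radiomics branch of this workflow (ROI segmentation followed by handcrafted feature extraction) can be illustrated with a minimal sketch. The first-order features below (mean, standard deviation, skewness, entropy) are a small illustrative subset of a typical radiomics feature set, and the synthetic image and circular ROI mask are stand-ins for a real CT slice and its segmentation.

```python
import numpy as np

def first_order_features(image: np.ndarray, roi_mask: np.ndarray, bins: int = 32) -> dict:
    """Compute a few first-order radiomics features over an ROI."""
    voxels = image[roi_mask].astype(float)
    mean = voxels.mean()
    std = voxels.std()
    # Skewness: third standardized moment of the intensity distribution.
    skew = ((voxels - mean) ** 3).mean() / (std ** 3 + 1e-12)
    # Shannon entropy of the ROI intensity histogram.
    hist, _ = np.histogram(voxels, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    return {"mean": mean, "std": std, "skewness": skew, "entropy": entropy}

# Stand-in data: a synthetic 64x64 "slice" with a brighter circular lesion.
rng = np.random.default_rng(0)
img = rng.normal(100, 10, size=(64, 64))
yy, xx = np.mgrid[:64, :64]
mask = (yy - 32) ** 2 + (xx - 32) ** 2 <= 10 ** 2  # circular ROI
img[mask] += 40  # lesion is hyperintense relative to background

features = first_order_features(img, mask)
print(features)
```

In a full radiomics pipeline these handcrafted features would then be filtered for stability and fed to a machine-learning classifier, whereas the deep-learning branch learns its features directly from the image.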
A brief summary of the included studies.
| Section | Study proportion | Purpose | Reference standard | Primary materials | Algorithm | Evaluation indicators |
| Tuberculosis detection | 48.5% | Diagnose pulmonary tuberculosis or evaluate disease | Pathogenic detection, radiology reports, clinical records, etc. | CXR and CT images | CNN and ML | AUC, sensitivity, specificity, accuracy, etc. |
| Tuberculosis discrimination | 18.2% | Discriminate between pulmonary tuberculosis and lung cancer or NTM-LD | Pathogenic detection, pathology, or follow-up confirmation | CT and PET/CT images | CNN and radiomics | AUC, sensitivity, specificity, accuracy, etc. |
| Tuberculosis drug resistance prediction | 33.3% | Recognize MDR-TB or resistance to individual anti-TB drugs | Drug susceptibility testing | CXR, CT images, and gene sequences | ANN, CNN, GNN, and ML | AUC, sensitivity, specificity, accuracy, etc. |
CXR, chest X-ray; CT, computed tomography; CNN, convolutional neural network; ML, machine learning; AUC, area under the curve; NTM-LD, non-tuberculous mycobacterium lung disease; PET/CT, positron emission tomography/computed tomography; MDR-TB, multi-drug resistant tuberculosis; ANN, artificial neural network; GNN, graph neural network.
Summary of AI applications in TB detection.
| No. | References | Method | Reference standard | Dataset | Study population | Training/Validation/test cohort | Model names | Algorithm | Results |
| 1 | Lakhani and Sundaram | Retrospective multi-center on CXR images | Sputum, radiology reports, radiologists, and clinical records | 1,007 participants | United States, China, and Belarus | Training: 685 Validation: 172 Test: 150 | NA | CNN | AUC 0.99, Sen 97.3%, Spe 94.7%, Acc 96.0% of the ensemble method |
| 2 | Hwang et al. | Retrospective multi-center on CXR images | Culture or PCR | 62,433 CXR images | Korea, China, United States, etc. | Training: 60,089 Tuning: 450 Internal validation: 450 External validation: 1,444 | DLAD | CNN | AUC 0.977–1.000 for TB classification, AUAFROC 0.973–1.000 for lesion localization; Sen 0.943–1.000, Spe 0.911–1.000 at high sensitivity cutoff |
| 3 | Nijiati et al. | Retrospective single-center on CXR images | Symptoms, laboratory and radiological examinations | 9,628 CXR images | China | Training: 7,703 Test: 1,925 | NA | CNN | AUC 0.9902–0.9944, Sen 93.2–95.5%, Spe 95.78–98.05%, Acc 94.96–96.73% in the test set |
| 4 | Lee et al. | Retrospective single-center on CXR images | Smear microscopy, culture, PCR, and radiologists | 19,686 participants | Korea | Test: 19,686 | DLAD | CNN | AUC 0.999, Sen 1.000, Spe 0.959–0.997, Acc 0.96–0.997 |
| 5 | Heo et al. | Retrospective single-center on CXR images | Radiologists | 39,677 participants | Korea | Training: 2,000 Test: 37,677 | D-CNN and I-CNN | CNN | AUC 0.9213, Sen 0.815, Spe 0.962 of D-CNN |
| 6 | Nafisah and Muhammad | Retrospective multi-center on CXR images | NA | 1,098 CXR images | United States, China, and Belarus | 5-fold cross validation | NA | CNN | AUC 0.999, Acc 98.7%, recall 98.3%, precision 98.3%, Spe 99.0% |
| 7 | Pasa et al. | Retrospective multi-center on CXR images | NA | 1,104 participants | United States, China, and Belarus | 5-fold cross validation | NA | CNN | AUC 0.925, Acc 86.2% |
| 8 | Rajaraman et al. | Retrospective multi-center on CXR images | Radiologists | 76,031 CXR images | United States and Spain | Training: test 9:1 | NA | CNN | AUC 0.9274–0.9491, recall 0.7736–0.8113, precision 0.9524–0.9773, Acc 0.8585–0.8962 |
| 9 | Rajpurkar et al. | Retrospective multi-center on CXR images | Culture or Xpert MTB/RIF | 677 participants | South Africa | Training: 563 Test: 114 | CheXaid | Deep learning | AUC 0.83, Sen 0.67, Spe 0.87, Acc 0.78 |
| 10 | Lee et al. | Retrospective multi-center on CXR images | Sputum microscopy, culture or PCR | 6,964 participants | Korea | Training: validation 7:3 Test: 455 | NA | CNN | AUC 0.82–0.84, Spe 26–48.5% at the cutoff of 95% Sen in the test set |
| 11 | Yan et al. | Retrospective multi-center on CT images | Culture | 1,248 CT images | China and United States | Training: validation 8:2 External test: 356 | NA | CNN | Acc 95.35–98.25%, recall 94.87–100%, precision 94.87–98.70% |
| 12 | Khan et al. | Prospective single-center on CXR images | Culture | 2,198 participants | Pakistan | Test: 2,198 | qXR and CAD4TB | CNN | AUC 0.92, Sen 0.93, Spe 0.75 for qXR; AUC 0.87, Sen 0.93, Spe 0.69 for CAD4TB |
| 13 | Qin et al. | Retrospective multi-center on CXR images | Xpert MTB/RIF | 1,196 participants | Nepal and Cameroon | Test: 1,196 | qXR, CAD4TB, and Lunit INSIGHT CXR | CNN | AUC 0.92–0.94, Sen 0.87–0.91, Spe 0.84–0.89, Acc 0.85–0.89 |
| 14 | Qin et al. | Retrospective multi-center on CXR images | Xpert MTB/RIF | 23,954 participants | Bangladesh | Test: 23,954 | qXR, CAD4TB, InferRead DR, etc. | CNN | AUC 84.89–90.81%, Sen 90.0–90.3%, Spe 61.1–74.3% when fixed at 90% Sen |
| 15 | Codlin et al. | Retrospective multi-center on CXR images | Xpert MTB/RIF | 1,032 participants | Viet Nam | Test: 1,032 | qXR, CAD4TB, Genki, etc. | CNN | AUC 0.50–0.82, Spe 6.3–48.7%, Acc 17.8–54.7% when fixed at 95.5% Sen |
| 16 | Melendez et al. | Retrospective single-center on CXR images | Culture | 392 patients | South Africa | 10-fold cross validation | CAD4TB | Machine learning | AUC 0.72–0.84, Spe 24–49%, NPV 95–98% when fixed at 95% Sen |
AI, artificial intelligence; TB, tuberculosis; CXR, chest X-ray; NA, not available; CNN, convolutional neural network; AUC, area under the curve; Sen, sensitivity; Spe, specificity; Acc, accuracy; PCR, polymerase chain reaction; AUAFROC, area under the alternative free-response receiver-operating characteristic curve; CT, computed tomography.
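The evaluation indicators recurring in these tables (AUC, sensitivity, specificity, accuracy) can all be derived from model outputs on a labeled test set. A minimal sketch with purely illustrative labels and scores (not drawn from any study above): sensitivity, specificity, and accuracy come from the confusion matrix at a chosen threshold, while AUC is threshold-free, computed here via the rank-based Mann-Whitney interpretation.

```python
def sensitivity_specificity_accuracy(y_true, y_pred):
    """Confusion-matrix metrics for binary labels (1 = TB-positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sen = tp / (tp + fn)            # sensitivity (recall)
    spe = tn / (tn + fp)            # specificity
    acc = (tp + tn) / len(y_true)   # accuracy
    return sen, spe, acc

def auc(y_true, scores):
    """AUC as the Mann-Whitney U statistic: the probability that a
    random positive scores higher than a random negative (ties count 0.5)."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative labels and model scores.
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1]
y_pred = [1 if s >= 0.5 else 0 for s in scores]  # threshold at 0.5

sen, spe, acc = sensitivity_specificity_accuracy(y_true, y_pred)
print(f"AUC={auc(y_true, scores):.2f} Sen={sen:.2f} Spe={spe:.2f} Acc={acc:.2f}")
```

This threshold dependence is why several studies above report specificity "when fixed at 90% (or 95%) Sen": the operating threshold is slid until sensitivity reaches the target, and the remaining metrics are read off at that point.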
Summary of AI applications in discrimination between pulmonary tuberculosis and other lung diseases.
| No. | References | Method | Reference standard | Dataset | Study population | Discrimination | Training/Validation/test cohort | Model names | Algorithm | Results |
| 1 | Feng et al. | Retrospective multi-center on CT images | Histological diagnosis | 550 patients | China | PTB and lung cancer | Training: 218 Internal validation: 140 External validation: 192 | NA | DLN | AUC 0.809, Sen 0.908, Spe 0.608, Acc 0.828 in the external validation set |
| 2 | Zhuo et al. | Retrospective multi-center on CT images | Surgical pathology, specimen culture or assay | 313 patients | China | PTB and lung cancer | Training: validation 7:3 | NA | Radiomics nomogram | AUC 0.99, Sen 0.9841, Spe 0.9000, Acc 0.9570 in the validation set |
| 3 | Hu et al. | Retrospective multi-center on PET/CT images | Pathological or follow-up confirmation | 235 patients | China | PTB and lung cancer | Training: 163 Validation: 72 | NA | Radiomics nomogram | AUC 0.889, Sen 85%, Spe 78.12%, Acc 79.53% in the validation set |
| 4 | Du et al. | Retrospective single-center on PET/CT images | Pathology | 174 patients | China | PTB and lung cancer | Training: 122 Validation: 52 | NA | Radiomics nomogram | AUC 0.93, Sen 0.86, Spe 0.83, Acc 0.85 in the validation set |
| 5 | Wang et al. | Retrospective multi-center on CT images | Sputum acid-fast bacilli stain or culture | 1,185 patients | China | MTB-LD and NTM-LD | Training: validation: test 8:1:1 External test: 80 | NA | CNN | AUC 0.78, Sen 0.75, Spe 0.63, Acc 0.69 in the external test set |
| 6 | Yan et al. | Retrospective multi-center on CT images | Sputum culture or smear | 182 patients | China | MTB-LD and NTM-LD | Training: validation 8:2 External validation: 40 | NA | Radiomics | AUC 0.84–0.98, Sen 0.61–0.97, Spe 0.61–0.97 in the external validation set |
AI, artificial intelligence; CT, computed tomography; PTB, pulmonary tuberculosis; NA, not available; DLN, deep learning nomogram; AUC, area under the curve; Sen, sensitivity; Spe, specificity; Acc, accuracy; PET/CT, positron emission tomography/computed tomography; MTB-LD, Mycobacterium tuberculosis lung disease; NTM-LD, non-tuberculous mycobacterium lung disease; CNN, convolutional neural network.
Summary of AI applications in TB drug resistance identification.
| No. | References | Method | Reference standard | Dataset | Study sample | Resistance identification | Training/Validation/test cohort | Model names | Algorithm | Results |
| 1 | Jaeger et al. | Retrospective multi-center on CXR images | NA | 135 patients | Belarus | MDR-TB | 5-fold cross validation | NA | ANN, CNN, and ML | AUC 50–66%, Acc 0.62–0.66 |
| 2 | Karki et al. | Retrospective multi-center on CXR images | DST | 5,642 CXR images | United States, China, etc. | DR-TB | 10-fold cross validation | NA | CNN | AUC 0.85 |
| 3 | Gao and Qian | Retrospective multi-center on CT images | NA | 230 patients | NA | MDR-TB | Training: 150 Validation: 35 Test: 45 | NA | CNN and ML | Acc 64.71–91.11% |
| 4 | Yang et al. | Retrospective multi-center on gene sequences | DST | 8,388 isolates | Europe, Asia, and Africa | 4 drugs and MDR-TB | Training: test 7:3 | DeepAMR | ML | AUC 94.4–98.7%, Sen 87.3–96.3%, Spe 90.9–96.7% |
| 5 | Yang et al. | Retrospective multi-center on gene sequences | DST | 13,402 isolates | NA | 4 drugs | Training: validation: test 4:2:2 or stratified cross validation | HGAT-AMR | GNN | AUC 72.83–99.10%, Sen 50.65–96.60%, Spe 79.50–98.87% |
| 6 | Yang et al. | Retrospective multi-center on gene sequences | DST | 1,839 isolates | United Kingdom | 8 drugs and MDR-TB | Cross-validation | NA | ML | AUC 91–100%, Sen 84–97%, Spe 90–98% |
| 7 | Deelder et al. | Retrospective multi-center on gene sequences | DST | 16,688 isolates | NA | 14 drugs and MDR-TB | 5-fold cross validation | NA | ML | Acc 73.4–97.5%, Sen 0–92.8%, Spe 75.6–100% |
| 8 | Chen et al. | Retrospective multi-center on gene sequences | DST | 4,393 isolates | ReSeqTB Knowledgebase | 10 drugs | 10-fold cross validation Independent validation: 792 | NA | WDNN and ML | AUC 0.937, Sen 87.9%, Spe 92.7% for the first-line drugs |
| 9 | Gröschel et al. | Retrospective multi-center on gene sequences | DST | 20,408 isolates | NCBI Nucleotide Database | 10 drugs | Training: validation 3:1 | GenTB | WDNN and ML | AUC 0.73–0.96, Sen 57–93%, Spe 78–100% |
| 10 | Kuang et al. | Retrospective multi-center on gene sequences | DST | 10,575 isolates | China, Cameroon, Uganda, etc. | 8 drugs | 10-fold cross validation | NA | CNN and ML | Acc 89.2–99.2%, Sen 93.4–100%, Spe 48.0–91.7%, F1 score 93.3–99.6% |
| 11 | Jiang et al. | Retrospective multi-center on gene sequences | DST | 12,378 isolates | NCBI-SRA Database | 4 drugs | Training: validation: test 8:1:1 and 10-fold cross validation | HANN | Attentive neural network | AUC 93.66–99.05%, Sen 67.12–96.31%, Spe 92.52–98.84% |
AI, artificial intelligence; TB, tuberculosis; CXR, chest X-ray; NA, not available; MDR-TB, multi-drug resistant tuberculosis; ANN, artificial neural network; CNN, convolutional neural network; ML, machine learning; AUC, area under the curve; Acc, accuracy; DST, drug susceptibility testing; DR-TB, drug-resistant tuberculosis; CT, computed tomography; Sen, sensitivity; Spe, specificity; GNN, graph neural network; WDNN, wide and deep neural network; SRA, sequence read archive.
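Most of the sequence-based models above start from a binary mutation matrix (isolates x resistance-associated variants) fed to a classifier trained against DST labels. A minimal sketch under loose assumptions: the variant names are illustrative placeholders for real resistance loci, the toy labels are invented, and a plain logistic-regression classifier trained by gradient descent stands in for the listed architectures (WDNN, GNN, HANN, etc.).

```python
import numpy as np

# Hypothetical resistance-associated variants (illustrative names only).
VARIANTS = ["rpoB_S450L", "katG_S315T", "inhA_c-15t", "embB_M306V"]

def encode(isolates: list) -> np.ndarray:
    """One-hot encode each isolate's variant calls into a binary matrix."""
    return np.array([[1.0 if v in iso else 0.0 for v in VARIANTS]
                     for iso in isolates])

def train_logreg(X, y, lr=0.5, steps=2000):
    """Logistic regression by batch gradient descent; returns (weights, bias)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(resistant)
        grad = p - y                              # dLoss/dlogit for log-loss
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

# Toy training data: isolates labeled resistant (1.0) or susceptible (0.0).
isolates = [{"rpoB_S450L", "katG_S315T"}, {"rpoB_S450L"}, {"inhA_c-15t"},
            set(), {"embB_M306V"}, {"katG_S315T"}]
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 1.0])

X = encode(isolates)
w, b = train_logreg(X, y)
probs = 1.0 / (1.0 + np.exp(-(X @ w + b)))
print(np.round(probs, 2))
```

A real pipeline would call variants from whole-genome sequencing, predict one label per drug (multi-label output), and validate against held-out DST results, as the studies in the table do.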