| Literature DB >> 35448995 |
Yimin Wang1, Yicong Li2,3, Yi Gao4, Jinping Zheng5, Nanshan Zhong6, Wenya Chen1, Changzheng Zhang3, Lijuan Liang1, Ruibo Huang1, Jianling Liang1, Dandan Tu3.
BACKGROUND: Spirometry quality assurance is a challenging task across levels of healthcare tiers, especially in primary care. Deep learning may serve as a support tool for enhancing spirometry quality. We aimed to develop a high accuracy and sensitive deep learning-based model aiming at assisting high-quality spirometry assurance.Entities:
Keywords: Artificial intelligence; Deep learning; General practitioner; Quality control; Spirometry
Year: 2022 PMID: 35448995 PMCID: PMC9028127 DOI: 10.1186/s12931-022-02014-9
Source DB: PubMed Journal: Respir Res ISSN: 1465-9921
Fig. 1 Input and output samples from the Object Detection Module. a Suspected cough in the flow–volume graph: an up-and-down flow spike is detected by the module; b suspected obstructed mouthpiece or spirometer: a flutter is detected by the module; c suspected glottis closure: a sharp drop is detected by the module
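The artifacts in Fig. 1a can be illustrated with a crude rule-based stand-in for the paper's learned detector: flag any sample where the expiratory flow rises sharply and immediately falls again. This is a minimal sketch with an arbitrary threshold, not the study's object-detection model.

```python
import numpy as np

def detect_flow_spike(flow, threshold=0.5):
    """Flag candidate cough artifacts: an abrupt up-and-down
    excursion in the expiratory flow signal (L/s).

    A sharp rise in one step followed by a sharp drop in the
    next marks a candidate spike; `threshold` is illustrative.
    """
    d = np.diff(flow)
    spikes = []
    for i in range(1, len(d)):
        # sharp rise immediately followed by a sharp drop
        if d[i - 1] > threshold and d[i] < -threshold:
            spikes.append(i)
    return spikes

# Synthetic example: smooth expiratory decay with an injected
# cough-like transient at sample 30
t = np.arange(60)
flow = 6.0 * np.exp(-t / 25.0)
flow[30] += 1.5
print(detect_flow_spike(flow))  # → [30]
```

A learned detector generalizes beyond this single pattern (flutter, glottis closure), which is why the paper frames the task as object detection on the curve rather than thresholding.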
Fig. 2 Flowchart of algorithm training and testing data acquisition, selection, and division. 16,502 spirometry files were retrieved from the First Affiliated Hospital of Guangzhou Medical University. After exclusion of files with no curves or with curves that could not be resolved, 15,693 files remained; these were randomly divided into training and internal testing sets. An additional 219 spirometry files, retrieved from three hospitals of the National Clinical Research Center for Respiratory Disease, were used for external testing. FEV1 forced expiratory volume in 1 s; FVC forced vital capacity
Fig. 3 Flowchart of data acquisition from ten primary care units. 171, 431, and 840 curves (from 72, 148, and 281 files, respectively) were performed during month 0, month 1, and month 2 in ten primary care units by 30 GPs. GPs general practitioners; AI artificial intelligence
Acceptability and usability assessment in the internal and external test set (n = 4592 and 360 curves, respectively)
| Test set | Task | Balanced accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) |
|---|---|---|---|---|---|---|
| Internal | FEV1 acceptability | 95.1 | 97.8 | 92.4 | 99.6 | 69.6 |
| Internal | FEV1 usability | 92.4 | 99.4 | 85.4 | 99.7 | 72.2 |
| Internal | FVC acceptability | 93.6 | 97.5 | 89.6 | 98.9 | 79.4 |
| Internal | FVC usability | 94.3 | 99.5 | 89.0 | 99.8 | 74.7 |
| External | FEV1 acceptability | 97.7 | 99.6 | 95.8 | 98.0 | 99.1 |
| External | FEV1 usability | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
| External | FVC acceptability | 95.4 | 99.6 | 91.3 | 94.9 | 99.2 |
| External | FVC usability | 100.0 | 100.0 | 100.0 | 100.0 | 100.0 |
FEV forced expiratory volume in 1 s, FVC forced vital capacity, PPV positive predictive value, NPV negative predictive value
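The table's columns follow the standard confusion-matrix definitions. As an illustration only (the counts below are hypothetical, not the study's data), the reported metrics can be computed as:

```python
def qc_metrics(tp, fp, fn, tn):
    """Binary quality-control metrics as reported in the table.

    Sensitivity = TP/(TP+FN), specificity = TN/(TN+FP),
    balanced accuracy = their mean, PPV = TP/(TP+FP),
    NPV = TN/(TN+FN). All values returned as percentages.
    """
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "balanced_accuracy": 100 * (sens + spec) / 2,
        "sensitivity": 100 * sens,
        "specificity": 100 * spec,
        "ppv": 100 * tp / (tp + fp),
        "npv": 100 * tn / (tn + fn),
    }

# Hypothetical counts for a class-imbalanced test set
m = qc_metrics(tp=3900, fp=16, fn=88, tn=588)
print({k: round(v, 1) for k, v in m.items()})
```

Balanced accuracy is the natural headline metric here because acceptable curves heavily outnumber unacceptable ones, so plain accuracy would be inflated by the majority class.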
Rating assessment in the internal and external test set (n = 1569 and 182 files, respectively)
| Test set | Task | Accuracy (%) |
|---|---|---|
| Internal | FEV1 quality rating | 94.3 |
| Internal | FVC quality rating | 92.2 |
| External | FEV1 quality rating | 95.6 |
| External | FVC quality rating | 92.3 |
See Table 1 legends for abbreviations
FEV1 and FVC quality assessment in primary care units
| Task, n (%) | Month 0 (GPs with regular practice) | Month 1 (GPs with AI-assistance) | Month 2 (GPs with AI-assistance) | P value |
|---|---|---|---|---|
| FEV1 maneuvers | 140 (81.9%) | 359 (83.3%) | 771 (91.8%) | < .0001 |
| FVC maneuvers | 120 (70.2%) | 343 (79.6%) | 751 (89.4%) | < .0001 |
| FEV1 maneuvers | 151 (88.3%) | 396 (91.9%) | 833 (99.2%) | < .0001 |
| FVC maneuvers | 152 (88.9%) | 398 (92.3%) | 833 (99.2%) | < .0001 |
| FEV1 tests | 51 (70.8%) | 117 (79.1%) | 258 (91.8%) | < .0001 |
| FVC tests | 38 (52.8%) | 107 (72.3%) | 250 (89.0%) | < .0001 |
Data are presented as n (%). GPs general practitioners; AI artificial intelligence; see Table 1 legend for expansion of other abbreviations
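The p-values above summarize improvement in quality proportions across the three ordered months. One plausible way to obtain such a p-value is a Cochran-Armitage test for linear trend in proportions; this is a sketch under that assumption, and the authors' exact test may differ.

```python
from math import erfc, sqrt

def cochran_armitage(successes, totals, scores=None):
    """Two-sided Cochran-Armitage test for a linear trend in
    proportions across ordered groups (here: months 0, 1, 2).

    Returns (z statistic, two-sided p-value under the normal
    approximation).
    """
    if scores is None:
        scores = list(range(len(totals)))
    N = sum(totals)
    R = sum(successes)
    p = R / N  # pooled proportion
    # Score-weighted deviation of observed successes from expectation
    T = sum(s * (r - n * p) for s, r, n in zip(scores, successes, totals))
    var = p * (1 - p) * (sum(n * s * s for n, s in zip(totals, scores))
                         - sum(n * s for n, s in zip(totals, scores)) ** 2 / N)
    z = T / sqrt(var)
    return z, erfc(abs(z) / sqrt(2))

# Acceptable FVC maneuvers over months 0/1/2 (counts from the table;
# denominators are the per-month curve totals from Fig. 3)
z, p = cochran_armitage([120, 343, 751], [171, 431, 840])
print(round(z, 2), p < 1e-4)
```

With these counts the trend statistic is large and the p-value falls well below .0001, consistent with the table's reported significance.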