G Kaguthi, V Nduba, J Nyokabi, F Onchiri, R Gie, M Borgdorff.
Abstract
The chest radiograph (CXR) is considered a key diagnostic tool for pediatric tuberculosis (TB) in clinical management and endpoint determination in TB vaccine trials. We set out to compare interrater agreement for TB diagnosis in western Kenya. A pediatric pulmonologist and a radiologist (experts), a medical officer (M.O), and four clinical officers (C.Os) with basic training in pediatric CXR reading blindly assessed CXRs of infants who were TB suspects in a cohort study. C.Os had access to clinical findings for patient management. Weighted kappa scores summarized interrater agreement on lymphadenopathy and abnormalities consistent with TB. Sensitivity and specificity of raters were determined using microbiologically confirmed TB as the gold standard (n = 8). A total of 691 radiographs were reviewed. Agreement on abnormalities consistent with TB was poor, k = 0.14 (95% CI: 0.10–0.18), and on lymphadenopathy moderate, k = 0.26 (95% CI: 0.18–0.36). The M.O [75% (95% CI: 34.9%–96.8%)] and C.Os [63% (95% CI: 24.5%–91.5%)] had high sensitivity for culture-confirmed TB. TB vaccine trials utilizing expert agreement on CXR as a nonmicrobiologically confirmed endpoint will have reduced specificity and will underestimate vaccine efficacy. C.Os detected many of the bacteriologically confirmed cases; however, this must be interpreted cautiously as they were unblinded to clinical features.
Year: 2014 PMID: 25197271 PMCID: PMC4150539 DOI: 10.1155/2014/291841
Source DB: PubMed Journal: Interdiscip Perspect Infect Dis ISSN: 1687-708X
Figure 1: Profile of TB suspects and chest radiographs read per rater.
Overall agreement amongst all raters on lymphadenopathy and quality of CXR.
| Rating | Kappa (95% CI) | P value |
|---|---|---|
| Lymphadenopathy | | |
| Present | 0.27 (0.129–0.448) | <0.001 |
| Absent | 0.29 (0.163–0.430) | <0.001 |
| Equivocal | 0.17 (0.044–0.423) | <0.001 |
| Overall weighted kappa | 0.26 (0.182–0.355) | <0.001 |
| Quality of radiographs | | |
| Optimal | −0.1324 | 0.999 |
| Suboptimal | −0.1674 | 0.999 |
| Unreadable | −0.0184 | 0.8822 |
| Overall weighted kappa | −0.1452 | 0.998 |
Agreement on any abnormality on chest radiograph [(+) abnormal/(−) normal].
| Reader | −/− | −/+ | +/+ | +/− | Kappa (95% CI) | McNemar's test |
|---|---|---|---|---|---|---|
| Radiologist versus pulmonologist | 515 | 62 | 43 | 71 | 0.28 (0.19–0.37) | 0.44 |
| Pulmonologist versus M.O | 445 | 132 | 59 | 55 | 0.23 (0.15–0.31) | <0.0001 |
| Radiologist versus M.O | 450 | 136 | 55 | 50 | 0.22 (0.14–0.30) | <0.0001 |
| Pulmonologist versus C.O | 477 | 100 | 31 | 83 | 0.10 (0.01–0.18) | 0.21 |
| Radiologist versus C.O | 493 | 93 | 38 | 67 | 0.18 (0.10–0.27) | 0.04 |
| C.O versus M.O | 417 | 143 | 48 | 83 | 0.10 (0.02–0.17) | <0.0001 |
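The kappa and McNemar statistics in the table above can be reproduced from the 2×2 counts alone. A minimal sketch using only the Python standard library (the 1-df chi-square p-value comes from the complementary error function; no continuity correction is applied, matching the tabulated values):

```python
import math

def cohen_kappa_2x2(nn, np_, pp, pn):
    """Cohen's kappa for two raters from 2x2 agreement counts:
    nn = both negative, np_ = rater1 -/rater2 +,
    pp = both positive, pn = rater1 +/rater2 -."""
    n = nn + np_ + pp + pn
    po = (nn + pp) / n                         # observed agreement
    r1_neg, r1_pos = nn + np_, pp + pn         # rater 1 marginals
    r2_neg, r2_pos = nn + pn, np_ + pp         # rater 2 marginals
    pe = (r1_neg * r2_neg + r1_pos * r2_pos) / n**2  # chance agreement
    return (po - pe) / (1 - pe)

def mcnemar_p(b, c):
    """McNemar chi-square test (1 df, no continuity correction)
    on the discordant counts b and c."""
    chi2 = (b - c) ** 2 / (b + c)
    return math.erfc(math.sqrt(chi2 / 2))      # chi2_1 survival function

# Radiologist versus pulmonologist row: -/- 515, -/+ 62, +/+ 43, +/- 71
kappa = cohen_kappa_2x2(515, 62, 43, 71)
p = mcnemar_p(62, 71)
print(round(kappa, 2), round(p, 2))  # 0.28 0.44
```

Applying the same two functions to the other rows of the table reproduces the remaining kappa values and McNemar p-values.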
Interrater agreement on quality of chest radiographs.
| Reader | Optimal-optimal | Optimal-suboptimal | Suboptimal-suboptimal | Suboptimal-optimal | Kappa (95% CI) | McNemar's test |
|---|---|---|---|---|---|---|
| Pulmonologist versus radiologist | 29 | 295 | 301 | 7 | 0.07 (0.03–0.10) | <0.0001 |
| M.O versus pulmonologist | 10 | 37 | 271 | 314 | −0.09 (−0.13–−0.05) | <0.0001 |
| M.O versus radiologist | 2 | 45 | 551 | 34 | −0.02 (−0.09–0.05) | 0.22 |
| C.O versus pulmonologist | 7 | 5 | 303 | 317 | 0.005 (−0.02–0.03) | <0.0001 |
| C.O versus radiologist | 2 | 12 | 586 | 34 | 0.06 (−0.05–0.16) | 0.0003 |
| C.O versus M.O | 0 | 12 | 573 | 47 | −0.03 (−0.05–−0.02) | <0.0001 |
Sensitivity and specificity: culture confirmed TB versus abnormal likely TB diagnosis on radiograph.
| Reader/rater | Culture +ve (n = 8) | Culture −ve (n = 683) | Sensitivity (95% CI) | Specificity (95% CI) | Positive predictive value∗ | Negative predictive value∗ |
|---|---|---|---|---|---|---|
| Clinician: positive | 5 | 126 | 62.5% (24.5%–91.5%) | 81.6% (78.4%–84.4%) | 3.8% (1.41%–9.13%) | 99.5% (98.3%–99.9%) |
| Clinician: negative | 3 | 557 | | | | |
| Medical officer: positive | 6 | 185 | 75.0% (34.9%–96.8%) | 72.9% (69.4%–76.2%) | 3.14% (1.28%–7.03%) | 99.6% (98.4%–99.9%) |
| Medical officer: negative | 2 | 498 | | | | |
| Radiologist: positive | 4 | 101 | 50.0% (15.7%–84.3%) | 85.2% (82.3%–87.8%) | 3.96% (1.23%–10.0%) | 99.3% (98.1%–99.8%) |
| Radiologist: negative | 4 | 582 | | | | |
| Pulmonologist: positive | 4 | 110 | 50.0% (15.7%–84.3%) | 83.9% (80.9%–86.6%) | 3.5% (1.13%–9.27%) | 99.5% (98.1%–99.8%) |
| Pulmonologist: negative | 4 | 573 | | | | |
∗Disease incidence 1.12% (0.54–2.36).
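The diagnostic accuracy figures above follow directly from each rater's 2×2 table against culture. A minimal sketch (the medical officer's counts are used as the worked example; the tabulated predictive values match the raw proportions computed from the counts):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table
    against the gold standard (culture-confirmed TB)."""
    return {
        "sensitivity": tp / (tp + fn),  # true positives / all diseased
        "specificity": tn / (tn + fp),  # true negatives / all non-diseased
        "ppv": tp / (tp + fp),          # positives that are truly diseased
        "npv": tn / (tn + fn),          # negatives that are truly healthy
    }

# Medical officer: TP = 6, FN = 2, FP = 185, TN = 498
m = diagnostic_metrics(6, 2, 185, 498)
# sensitivity 0.75 (75.0%), specificity ~0.729 (72.9%),
# PPV ~0.0314 (3.14%), NPV 0.996 (99.6%)
```

Note that with only 8 culture-confirmed cases, the sensitivity confidence intervals are necessarily very wide, as the table shows.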
Figure 2: Venn diagram showing radiographs of probable TB cases reviewed by all raters (n = 28) and classified as “abnormal likely TB.”
Cross-classification of all 691 radiographs by the four raters. Rows: clinician × medical officer ratings; columns: pulmonologist/radiologist ratings (N = normal, L = abnormal likely TB, U = abnormal unlikely TB).

| Clinician | Medical officer | N/N | N/L | N/U | L/N | L/L | L/U | U/N | U/L | U/U | Total |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Normal | Normal | 345 | 5 | 30 | 12 | 0 | 3 | 18 | 0 | 4 | 417 |
| Normal | Abnormal likely TB | 5 | 1 | 3 | 1 | 0 | 1 | 0 | 0 | 2 | 13 |
| Normal | Abnormal unlikely TB | 83 | 1 | 20 | 3 | 1 | 5 | 10 | 1 | 6 | 130 |
| Abnormal likely TB | Normal | 12 | 1 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 15 |
| Abnormal likely TB | Abnormal likely TB | 1 | 0 | 0 | 1 | 3 | 0 | 0 | 1 | 0 | 6 |
| Abnormal likely TB | Abnormal unlikely TB | 4 | 1 | 0 | 1 | 1 | 1 | 2 | 0 | 0 | 10 |
| Abnormal unlikely TB | Normal | 51 | 1 | 4 | 3 | 0 | 2 | 4 | 2 | 1 | 68 |
| Abnormal unlikely TB | Abnormal likely TB | 4 | 1 | 0 | 1 | 1 | 2 | 0 | 0 | 0 | 9 |
| Abnormal unlikely TB | Abnormal unlikely TB | 10 | 0 | 2 | 2 | 1 | 1 | 4 | 2 | 1 | 23 |
| Total | | 515 | 11 | 60 | 24 | 7 | 16 | 38 | 6 | 14 | 691 |
| Outcome | Kappa (95% CI) | P value |
|---|---|---|
| TB classification category | | |
| Abnormal | 0.177 (0.124–0.237) | <0.001 |
| Abnormal likely TB | 0.193 (0.095–0.330) | <0.001 |
| Abnormal unlikely TB | 0.065 (0.026–0.125) | <0.001 |
| Generalized/weighted kappa | 0.136 (0.100–0.176) | <0.001 |
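The weighted kappa used throughout credits partial agreement between ordered categories (e.g., "abnormal likely TB" versus "abnormal unlikely TB" is a smaller disagreement than "abnormal likely TB" versus "normal"). A minimal sketch of the two-rater weighted form (the study's generalized kappa extends this across all raters; the confusion matrix below is illustrative, not study data):

```python
def weighted_kappa(conf, weights="linear"):
    """Weighted Cohen's kappa for two raters over k ordered categories.
    conf[i][j] = count of items rated category i by rater 1, j by rater 2."""
    k = len(conf)
    n = sum(sum(row) for row in conf)
    row_m = [sum(conf[i]) for i in range(k)]                       # rater 1 marginals
    col_m = [sum(conf[i][j] for i in range(k)) for j in range(k)]  # rater 2 marginals

    def w(i, j):
        # Full credit on the diagonal, partial credit nearby.
        d = abs(i - j) / (k - 1)
        return 1 - (d if weights == "linear" else d * d)

    po = sum(w(i, j) * conf[i][j]
             for i in range(k) for j in range(k)) / n              # observed
    pe = sum(w(i, j) * row_m[i] * col_m[j]
             for i in range(k) for j in range(k)) / n**2           # expected
    return (po - pe) / (1 - pe)

# Hypothetical 3-category example (normal / abnormal unlikely TB /
# abnormal likely TB); counts are illustrative only.
conf = [[4, 1, 0],
        [1, 3, 1],
        [0, 1, 4]]
print(weighted_kappa(conf))  # 0.7 with linear weights
```

With quadratic weights (`weights="quadratic"`), off-by-one disagreements are penalized even less, which typically yields a higher kappa on the same data.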