Catriona J Waitt, Elizabeth C Joekes, Natasha Jesudason, Peter I Waitt, Patrick Goodson, Ganizani Likumbo, Samuel Kampondeni, E Brian Faragher, S Bertel Squire.
Abstract
OBJECTIVES: In low-resource settings, limitations in diagnostic accuracy of chest X-rays (CXR) for pulmonary tuberculosis (PTB) relate partly to non-expert interpretation. We piloted a TB CXR Image Reference Set (TIRS) to improve non-expert performance in an operational setting in Malawi.
Year: 2013 PMID: 23652843 PMCID: PMC3738845 DOI: 10.1007/s00330-013-2840-z
Source DB: PubMed Journal: Eur Radiol ISSN: 0938-7994 Impact factor: 5.315
Fig. 1 Three examples of chest X-rays from the Tuberculosis CXR Image Reference Set (TIRS) booklet. Multiple presentations of active TB were included (a, b), as well as several common pitfalls in TB diagnosis, for example septic emboli (c)
Agreement between decision to treat and culture gold standard at baseline and with TIRS—all films
| Rater grade | Measure | Baseline, mean (SD) | With TIRS, mean (SD) | Difference | 95 % CIa | P valuea |
|---|---|---|---|---|---|---|
| COs | Agreement (%) | 63.8 (7.4) | 65.7 (5.8) | 1.9 | −2.5 to 6.4 | 0.352 |
| | Kappa | 0.210 (0.159) | 0.191 (0.100) | −0.019 | −0.102 to 0.064 | 0.622 |
| DRs | Agreement (%) | 60.7 (7.9) | 67.1 (8.0) | 6.4 | −0.2 to 12.9 | 0.054 |
| | Kappa | 0.141 (0.102) | 0.276 (0.141) | 0.135 | −0.016 to 0.286 | 0.073 |
| COs + DRs | Agreement (%) | 62.5 (7.6) | 66.3 (6.6) | 3.8 | 0.3 to 7.3 | 0.035 |
| | Kappa | 0.181 (0.139) | 0.227 (0.123) | 0.046 | −0.034 to 0.125 | 0.241 |
| Radiologistsb | Agreement (%) | 67.8 (8.9) | – | | | |
| | Kappa | 0.347 (0.040) | – | | | |
COs, clinical officers; DRs, doctors
a 95 % confidence intervals and P values adjusted for clustering within raters
b Only baseline readings were done by radiologists
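For readers unfamiliar with the metrics reported throughout these tables, both the agreement percentage and Cohen's kappa are derived from each reader's 2×2 table of treat/no-treat decisions against the culture gold standard. The sketch below is purely illustrative (it is not the study's code, and the counts are hypothetical, not taken from this dataset):

```python
def agreement_and_kappa(tp, fp, fn, tn):
    """Compute percentage agreement and Cohen's kappa from a 2x2 table
    of reader decision (treat/no treat) vs culture gold standard.
    tp/fn: culture-positive films called TB / not TB by the reader;
    fp/tn: culture-negative films called TB / not TB by the reader."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                            # observed agreement
    p_tb = ((tp + fp) / n) * ((tp + fn) / n)      # chance agreement on "TB"
    p_no = ((fn + tn) / n) * ((fp + tn) / n)      # chance agreement on "not TB"
    pe = p_tb + p_no                              # total chance-expected agreement
    kappa = (po - pe) / (1 - pe)                  # agreement corrected for chance
    return 100 * po, kappa

# Hypothetical counts for one reader over 70 films:
agree, kappa = agreement_and_kappa(tp=30, fp=12, fn=10, tn=18)
# agree ≈ 68.6 %, kappa ≈ 0.35
```

Kappa values near 0 therefore indicate agreement no better than chance, which is why the baseline kappas of 0.1–0.2 for non-expert readers are so striking next to the raw agreement percentages of 60–65 %.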
Agreement between decision to treat and culture gold standard at baseline and with TIRS—smear-negative, culture-positive subset of 17 films
| Rater grade | Measure | Baseline, mean (SD) | With TIRS, mean (SD) | Difference | 95 % CIa | P valuea |
|---|---|---|---|---|---|---|
| COs | Agreement (%) | 60.8 (9.0) | 56.0 (5.5) | −4.8 | −10.7 to 1.1 | 0.099 |
| | Kappa | 0.228 (0.159) | 0.156 (0.083) | −0.072 | −0.184 to 0.040 | 0.183 |
| DRs | Agreement (%) | 56.6 (6.9) | 58.9 (8.3) | 2.3 | −8.4 to 13.0 | 0.623 |
| | Kappa | 0.154 (0.135) | 0.191 (0.163) | 0.037 | −0.160 to 0.234 | 0.672 |
| COs + DRs | Agreement (%) | 59.0 (8.2) | 57.2 (6.8) | −1.8 | −7.0 to 3.5 | 0.482 |
| | Kappa | 0.197 (0.150) | 0.171 (0.120) | −0.026 | −0.123 to 0.070 | 0.578 |
| Radiologistsb | Agreement (%) | 60.5 (2.7) | – | | | |
| | Kappa | 0.238 (0.039) | – | | | |
COs, clinical officers; DRs, doctors
a 95 % confidence intervals and P values adjusted for clustering within raters
b Only baseline readings were done by radiologists
Fig. 2 Agreement with culture gold standard, sensitivity and specificity at baseline and with TIRS—all films
Fig. 3 Agreement with culture gold standard, sensitivity and specificity at baseline and with TIRS—smear-negative, culture-positive subset of 17 films
Agreement with culture gold standard, sensitivity, specificity and agreement corrected for chance for two clinical officers (COA and COB) attending the CXR Reading and Recording System (CRRS)
| Rater | Condition | Agree (%) | Sens. (%) | Spec. (%) | κ |
|---|---|---|---|---|---|
| COA | Baseline | 57.6 | 60.0 | 52.6 | 0.115 |
| COA | With TIRS | 59.3 | 65.0 | 47.4 | 0.117 |
| COA | After CRRS (with CRRS proforma) | 69.5 | 72.5 | 63.2 | 0.338 |
| COA | After CRRS (with question to treat) | 66.1 | 75.0 | 47.4 | 0.224 |
| COB | Baseline | 59.3 | 62.5 | 52.6 | 0.140 |
| COB | With TIRS | 66.1 | 72.5 | 52.6 | 0.245 |
| COB | After CRRS (with CRRS proforma) | 72.9 | 77.5 | 63.2 | 0.396 |
| COB | After CRRS (with question to treat) | 69.5 | 84.6 | 42.1 | 0.287 |
| Mean | Baseline | 58.5 | 61.3 | 52.6 | 0.128 |
| Mean | With TIRS | 62.7 | 68.8 | 50.0 | 0.181 |
| Mean | After CRRS (with CRRS proforma) | 71.2 | 75.0 | 63.2 | 0.367 |
| Mean | After CRRS (with question to treat) | 67.8 | 79.8 | 44.8 | 0.256 |
Previous studies on accuracy of chest X-ray (CXR) interpretation in tuberculosis
| Author | Year | Setting | Film setsa | Sensitivity % (95 % CI) | Specificity % (95 % CI) | Inter-observer kappa (95 % CI) | Readersb | CRRS | Reference standard |
|---|---|---|---|---|---|---|---|---|---|
| Den Boon | 2005 | South Africa | Prevalence study | | | 0.69 (0.64–0.74) for PTB; 0.47 (0.42–0.53) for normal | Expert (1) | √ | Single expert |
| Agizew | 2010 | Botswana | Screening in HIV | | | "no disagreement" | Expert (2) | √ | Culture/follow-upc |
| Dawson | 2010 | South Africa | Screening in HIV | 68 (54–79) | 53 (45–61) | 0.61 (0.40–0.83) | Expert (2) | √ | Culture |
| Cain | 2010 | SE Asia | Screening in HIV | 65 | 85 | | Expert (1) | | Culture |
| Davis | 2010 | Uganda | HIV positive suspects | 78 (66–87) | 22 (16–30) | | Expert (2) | √ | Culture/follow-up |
| van Cleeff | 2005 | Kenya | HIV negative suspects | 80 (74–85) | 67 (62–71) | 0.75 (SE 0.037) | Expert (2) | | Culture |
| Nyirenda | 1999 | Malawi | HIV negative suspects | 65–71 | 71–79 | | Non-expert (194) | | Expert reference panel |
| Balabanova | 2005 | Russia | Population screening | | | 0.387 (0.382–0.391) | Expert (101) | | Expert reference panel |
| Kumar | 2005 | Nepal | HIV negative suspects | 60 | 72 | 0.17 (0–0.38) | Non-expert (21) | | Expert reference panel |
| Zellweger | 2006 | Switzerland | Immigrant screening | | | 0.846 (SE 0.029) | Expert (2) | | None |
| | | | | | | 0.557 (SE 0.109) | Expert (2) and non-expert (1) | | |
a Population from which the film sets were obtained
b Experts include pulmonologists, pulmonary tuberculosis (PTB) specialists and radiologists; non-experts include all other levels of CXR readers
c PTB diagnosis was presumptive in 55 %