| Literature DB >> 20830194 |
Direk Limmathurotsakul, Kris Jamsen, Arkhom Arayawichanont, Julie A Simpson, Lisa J White, Sue J Lee, Vanaporn Wuthiekanun, Narisara Chantratita, Allen Cheng, Nicholas P J Day, Claudio Verzilli, Sharon J Peacock.
Abstract
BACKGROUND: Culture remains the diagnostic gold standard for many bacterial infections, and the method against which other tests are often evaluated. The specificity of culture is 100% if the pathogenic organism is not found in healthy subjects, but the sensitivity of culture is more difficult to determine and may be low. Here, we apply Bayesian latent class models (LCMs) to data from patients with a single Gram-negative bacterial infection and define the true sensitivity of culture, together with the impact of misclassification by culture on the reported accuracy of alternative diagnostic tests.
Year: 2010 PMID: 20830194 PMCID: PMC2932979 DOI: 10.1371/journal.pone.0012485
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Criteria used to determine the possibility of having melioidosis.
| Category | Criteria |
| Definite melioidosis (culture confirmed) | One or more clinical samples culture positive for B. pseudomallei |
| Probable melioidosis (clinical melioidosis) | Presence of multiple liver abscesses and/or single or multiple splenic abscess(es) on abdominal ultrasound with an appearance characteristic of melioidosis (Swiss-cheese appearance or small dispersed abscesses), but culture not performed or negative for B. pseudomallei |
| Possible melioidosis (findings that fall short of 'probable' but are not 'unlikely') | Clinically suspected melioidosis that improved after treatment with an effective antimicrobial regimen for melioidosis (ceftazidime/a carbapenem/amoxicillin-clavulanate) |
| Not melioidosis (melioidosis is unlikely) | Definite alternative diagnosis for the manifestations leading to suspected melioidosis |
Prevalence, sensitivities and specificities, and positive and negative predictive values (PPVs and NPVs), using culture as the gold standard and under two Bayesian latent class models. All values are percentages.
| Parameter | Culture as gold standard | Model 0 | Final model |
| Prevalence | 37.2 (31.9–42.7) | 61.0 (54.7–67.2) | 61.6 (54.4–69.2) |
| Culture | | | |
| Sensitivity | 100 | 60.9 (53.3–68.6) | 60.2 (51.7–68.5) |
| Specificity | 100 | 100 | 100 |
| PPV | 100 | 100 | 100 |
| NPV | 100 | 62.1 (53.5–70.5) | 61.9 (50.0–70.9) |
| IHA | | | |
| Sensitivity | 71.4 (63.2–79.7) | 73.0 (66.1–79.4) | 69.9 (63.6–76.0) |
| Specificity | 63.7 (57.0–70.4) | 87.7 (80.0–93.9) | 83.9 (74.9–91.4) |
| PPV | 53.8 (45.9–61.7) | 90.3 (83.7–95.3) | 87.5 (79.4–93.9) |
| NPV | 79.0 (72.7–85.4) | 67.4 (58.8–75.5) | 63.4 (52.7–72.5) |
| IgM ICT | | | |
| Sensitivity | 81.5 (74.4–88.6) | 80.4 (74.1–85.8) | 77.5 (71.4–83.1) |
| Specificity | 48.8 (41.8–55.7) | 65.5 (56.3–74.5) | 62.0 (52.0–72.1) |
| PPV | 48.5 (41.5–55.5) | 78.5 (71.2–84.9) | 76.7 (68.5–84.2) |
| NPV | 81.7 (74.6–88.7) | 68.0 (58.4–76.9) | 63.2 (51.0–73.4) |
| IgG ICT | | | |
| Sensitivity | 87.4 (81.3–93.4) | 91.1 (86.3–94.7) | 88.0 (82.4–92.4) |
| Specificity | 49.3 (42.3–56.2) | 77.5 (67.8–86.4) | 74.1 (63.2–85.2) |
| PPV | 50.5 (43.6–57.4) | 86.4 (79.3–92.3) | 84.5 (76.1–92.2) |
| NPV | 86.8 (80.5–93.1) | 84.8 (76.8–91.0) | 79.4 (68.4–87.3) |
| ELISA | | | |
| Sensitivity | 82.4 (75.4–89.3) | 77.1 (69.9–83.8) | 75.6 (67.9–82.8) |
| Specificity | 73.1 (67.0–79.3) | 99.4 (94.5–100) | 97.9 (92.4–99.9) |
| PPV | 64.5 (56.8–72.2) | 99.5 (95.4–100) | 98.3 (93.7–99.9) |
| NPV | 87.5 (82.4–92.6) | 73.5 (64.3–81.6) | 71.3 (59.3–81.3) |
Model 0 assumed that, for a given patient, the diagnostic tests were conditionally independent of one another given true infection status. The final model (Model 3) allowed all serological tests to be correlated in infected patients.
Values in the 'Culture as gold standard' column are mean estimates with 95% confidence intervals; values from the two latent class models are median estimates with 95% credible intervals.
Prevalence estimates from the models apply to the test population as a whole, since the models cannot determine whether any individual patient was infected.
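The gold-standard and model columns in the table are linked by Bayes' theorem: given a test's sensitivity and specificity and the population prevalence, the PPV and NPV follow directly. A minimal sketch (not code from the paper; it plugs in the Model 0 point estimates for culture from the table above):

```python
def predictive_values(sens, spec, prev):
    """Compute (PPV, NPV) from sensitivity, specificity, and prevalence
    via Bayes' theorem. All arguments are probabilities in [0, 1]."""
    ppv = (sens * prev) / (sens * prev + (1 - spec) * (1 - prev))
    npv = (spec * (1 - prev)) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# Culture under Model 0: sensitivity 60.9%, perfect specificity,
# prevalence 61.0% (point estimates from the table above).
ppv, npv = predictive_values(0.609, 1.0, 0.610)
print(ppv)  # exactly 1.0: with 100% specificity, every positive is a true positive
print(npv)  # ~0.62, matching Model 0's NPV of 62.1% in the table
```

This also shows why the apparent predictive values change so much between columns: the models revise both the prevalence (61% rather than 37%) and culture's sensitivity (about 61% rather than 100%), and the predictive values move accordingly.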
Figure 1. Assessing the fit of model 0 (conditional independence model) (A) and the final model (conditional dependence model) (B) using the posterior predictive distribution.
The dataset was replicated 20,000 times per model to assess the probability of observing the actual dataset if that model were true. Running model 0 (Figure 1A), only 298 of the 20,000 replicate datasets had at least 69 patients positive on all five tests, i.e. with the profile '11111' (69 was the number of patients with this profile in the actual dataset), giving a Bayesian p-value of 0.015 (298/20,000). This indicated that model 0 was a poor fit to the observed data. Running the final model (Figure 1B), 4,752 of the 20,000 replicate datasets had at least 69 patients with the profile '11111' (4,752/20,000, Bayesian p-value 0.24), indicating that the final model fit the observed data well.
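The posterior predictive check described above can be sketched as a Monte Carlo simulation under Model 0's conditional-independence structure: for each replicate dataset, draw each patient's true status from the prevalence, draw the five tests independently given that status, and count '11111' profiles. This is an illustrative reconstruction, not the authors' code: the patient count `n_patients = 320` is an assumption, and using fixed point estimates (rather than drawing parameters from the posterior at each replicate, as a full check would) is a deliberate simplification.

```python
import random

# Model 0 point estimates (culture, IHA, IgM ICT, IgG ICT, ELISA).
SENS = [0.609, 0.730, 0.804, 0.911, 0.771]
SPEC = [1.000, 0.877, 0.655, 0.775, 0.994]

def count_all_positive(n_patients, prev, rng):
    """One replicate dataset under conditional independence: number of
    patients whose five tests are all positive (profile '11111')."""
    count = 0
    for _ in range(n_patients):
        infected = rng.random() < prev
        all_pos = True
        for sens, spec in zip(SENS, SPEC):
            p_pos = sens if infected else 1.0 - spec
            if rng.random() >= p_pos:
                all_pos = False
                break
        if all_pos:
            count += 1
    return count

def bayesian_p_value(n_patients, prev, observed, n_reps, seed=42):
    """Fraction of replicate datasets with at least `observed` patients
    showing profile '11111' -- the tail probability behind Figure 1."""
    rng = random.Random(seed)
    hits = sum(count_all_positive(n_patients, prev, rng) >= observed
               for _ in range(n_reps))
    return hits / n_reps

# Illustrative run with an assumed cohort size and fewer replicates
# than the paper's 20,000:
p = bayesian_p_value(n_patients=320, prev=0.610, observed=69, n_reps=2000)
print(p)  # small tail probability: 69 all-positive patients is unlikely under Model 0
```

Under these point estimates the expected number of '11111' patients is roughly prevalence times the product of the five sensitivities (about 49 of 320), so observing 69 lands in the tail, which is the sense in which Model 0 fails the check.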