Padmanabhan Ramnarayan, Graham C Roberts, Michael Coren, Vasantha Nanduri, Amanda Tomlinson, Paul M Taylor, Jeremy C Wyatt, Joseph F Britto.
Abstract
BACKGROUND: Computerized decision support systems (DSS) have mainly focused on improving clinicians' diagnostic accuracy in unusual and challenging cases. However, since diagnostic omission errors may predominantly result from incomplete workup in routine clinical practice, the provision of appropriate patient- and context-specific reminders may have a greater impact on patient safety. In this experimental study, a mix of easy and difficult simulated cases was used to assess the impact of a novel diagnostic reminder system (ISABEL) on the quality of clinical decisions made by various grades of clinicians during acute assessment.
Year: 2006 PMID: 16646956 PMCID: PMC1513379 DOI: 10.1186/1472-6947-6-22
Source DB: PubMed Journal: BMC Med Inform Decis Mak ISSN: 1472-6947 Impact factor: 2.796
Figure 1. Screenshot of ISABEL simulated study procedure – step 1. This figure shows how one subject was presented with the text of a case simulation, how he could search ISABEL using summary clinical features, and how he recorded his clinical decisions prior to viewing ISABEL's results. For this case, the clinically important diagnoses provided by the expert panel are: nasopharyngitis (OR viral upper respiratory tract infection) and meningitis/encephalitis. This subject has committed a diagnostic error of omission (DEO): he failed to include both clinically important diagnoses in his diagnostic workup.
Figure 2. Screenshot of ISABEL simulated study procedure – step 2. This figure shows how the same subject was provided with the results of ISABEL's search in one click, and how he was given the opportunity to modify the clinical decisions made in step 1. It was not possible to return from this step to step 1 to modify the clinical decisions made earlier. Notice that the subject has not identified meningitis/encephalitis from the ISABEL suggestions as clinically important; he has made no changes to his workup, and has committed a DEO despite system advice.
Figure 3. Example of one simulated case used in the study*, the variability in clinical features as abstracted by five different users (verbatim), and the clinically important diagnoses as judged by the panel.
Figure 4. Schematic of the scoring procedure.
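Figures 1 and 2 describe how a diagnostic error of omission (DEO) is determined: a subject commits a DEO when his recorded workup omits at least one of the panel's clinically important diagnoses. Below is a minimal sketch of that check; the function name and the naive exact-string match are illustrative assumptions (the study's scoring accepted alternatives such as "nasopharyngitis OR viral upper respiratory tract infection" as a single diagnosis, which this sketch does not model):

```python
# Hypothetical sketch of the DEO check described in Figures 1-2.
# A DEO is committed when the subject's differential omits ANY of the
# expert panel's clinically important diagnoses for the case.
# Function name and exact-match rule are illustrative assumptions.

def commits_deo(subject_workup, important_diagnoses):
    """Return True if any clinically important diagnosis is missing
    from the subject's recorded diagnostic workup."""
    workup = {d.strip().lower() for d in subject_workup}
    return any(d.strip().lower() not in workup for d in important_diagnoses)

# The Figure 1 case: the panel's clinically important diagnoses
important = ["nasopharyngitis", "meningitis/encephalitis"]

# The subject listed nasopharyngitis but omitted meningitis/encephalitis
print(commits_deo(["nasopharyngitis"], important))              # True -> DEO
print(commits_deo(["nasopharyngitis", "meningitis/encephalitis"],
                  important))                                   # False
```

A real scorer would need fuzzy or synonym-aware matching of diagnosis labels; the set-based exact match here only conveys the omission logic.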
Study participants, cases and case episodes*
| | Consultant (%) | Registrar (%) | SHO (%) | Student (%) |
| --- | --- | --- | --- | --- |
| Subjects invited to participate | 27 (27.8) | 33 (34.0) | 20 (20.6) | 17 (17.5) |
| Subjects who attempted at least one case (attempters) | 18 (23.7) | 24 (31.6) | 19 (25.0) | 15 (19.7) |
| Subjects who attempted at least six cases | 16 (25.8) | 18 (29.0) | 15 (24.2) | 13 (20.9) |
| Subjects who completed all 12 cases (completers) | 15 (28.8) | 14 (26.9) | 10 (19.2) | 13 (25.0) |
| Regular DSS users (usage > once/week) | 2 | 1 | 3 | 0 |
* Since each case was assessed more than once, each attempt by a subject at a case was termed a 'case episode'.
Mean count of diagnostic errors of omission (DEO) pre-ISABEL and post-ISABEL consultation
| Grade of subject | DEO pre-ISABEL (SD) | DEO post-ISABEL (SD) | Reduction (SD) |
| --- | --- | --- | --- |
| Consultant | 5.13 (1.3) | 4.60 (1.4) | 0.53 (0.7) |
| Registrar | 5.64 (1.5) | 5.14 (1.6) | 0.50 (0.5) |
| SHO | 4.40 (1.6) | 4.10 (1.6) | 0.30 (0.5) |
| Medical student | 6.61 (1.3) | 5.92 (1.4) | 0.69 (0.7) |
*Total number of subjects who completed all 12 assigned cases
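The "Reduction" column above is simply the pre-ISABEL mean minus the post-ISABEL mean; a quick sketch (values transcribed from the table) confirms the arithmetic:

```python
# Cross-check: Reduction = mean DEO pre-ISABEL minus mean DEO post-ISABEL,
# rounded to two decimals. Values transcribed from the table above.
rows = {
    "Consultant":      (5.13, 4.60, 0.53),
    "Registrar":       (5.64, 5.14, 0.50),
    "SHO":             (4.40, 4.10, 0.30),
    "Medical student": (6.61, 5.92, 0.69),
}
for grade, (pre, post, reduction) in rows.items():
    # round() guards against floating-point residue (e.g. 4.40 - 4.10)
    assert round(pre - post, 2) == reduction, grade
print("all reductions consistent")
```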
Mean DEO count analyzed by level of case and subject grade
| Grade of subject | Pre-DSS | Post-DSS | Pre-DSS | Post-DSS |
| --- | --- | --- | --- | --- |
| Consultant | 1.66 | 1.47 | 2.00 | 1.87 |
| Registrar | 2.21 | 1.93 | 2.14 | 1.92 |
| SHO | 1.30 | 1.20 | 2.00 | 1.80 |
| Medical student | 2.92 | 2.54 | 2.54 | 2.30 |
Mean quality scores for diagnoses broken down by grade of subject
| Grade of subject | Mean pre-ISABEL score | Mean post-ISABEL score | Mean score change* |
| --- | --- | --- | --- |
| Consultant | 0.39 | 0.43 | 0.044 |
| Registrar | 0.40 | 0.44 | 0.038 |
| SHO | 0.45 | 0.48 | 0.032 |
| Medical student | 0.31 | 0.37 | 0.059 |
* There was no significant difference between grades in terms of change in diagnosis quality score (one-way ANOVA p > 0.05)
Increase in the average number of diagnoses and irrelevant diagnoses before and after DSS advice, broken down by grade
| Grade of subject | Diagnoses: pre-DSS | Diagnoses: post-DSS | Increase | Irrelevant diagnoses: pre-DSS | Irrelevant diagnoses: post-DSS | Increase |
| --- | --- | --- | --- | --- | --- | --- |
| Consultant | 3.3 | 4.6 | 1.3 | 0.4 | 0.7 | 0.3 |
| Registrar | 4.3 | 5.9 | 1.6 | 0.8 | 1.3 | 0.5 |
| SHO | 4.4 | 6.1 | 1.7 | 0.6 | 1.4 | 0.8 |
| Medical student | 4.0 | 6.6 | 2.6 | 1.1 | 2.2 | 1.1 |
Number of case episodes in which clinically 'important' decisions were prompted by ISABEL consultation
| Number of 'important' decisions prompted by ISABEL per case episode | Diagnoses | Tests | Treatment steps |
| --- | --- | --- | --- |
| 1 | 69 | 56 | 42 |
| 2 | 19 | 12 | 5 |
| 3 | 2 | 2 | 2 |
| 4 | 3 | 0 | 0 |
| 5 | 1 | 0 | 0 |
| 0 | 657 | 637 | 678 |
| No. of case episodes in which at least ONE 'significant' decision was prompted by ISABEL | 94 (12.5%) | 70 (9.3%) | 49 (6.5%) |
| Total number of individual 'significant' decisions prompted by ISABEL | 130 | 86 | 58 |
Time taken to process case simulations broken down by grade of subject
| Grade of subject | Median time to process case | Median time for ISABEL consultation |
| --- | --- | --- |
| Consultant | 5 min 5 sec | 42 sec |
| Registrar | 5 min 45 sec | 57 sec |
| SHO | 5 min 54 sec | 53 sec |
| Medical student | 8 min 36 sec | 3 min 42 sec |
| Overall | 6 min 2 sec (IQR: 4:03 – 9:47) | 1 min (IQR: 30 sec – 2:04) |