| Literature DB >> 33980661 |
Jason Robert Vanstone1, Shivani Patel2, Michelle L Degelman2, Ibrahim W Abubakari3, Shawn McCann4, Robert Parker2, Terry Ross5.
Abstract
BACKGROUND: Unnecessary testing is a problem-facing healthcare systems around the world striving to achieve sustainable care. Despite knowing this problem exists, clinicians continue to order tests that do not contribute to patient care. Using behavioural and implementation science can help address this problem. Locally, audit and feedback are used to provide information to clinicians about their performance on relevant metrics. However, this is often done without evidence-based methods to optimise uptake. Our objective was to improve the appropriate use of laboratory tests in the ED using evidence-based audit and feedback and behaviour change techniques.Entities:
Keywords: audit; emergency department; quality improvement
Year: 2021 PMID: 33980661 PMCID: PMC9132872 DOI: 10.1136/emermed-2020-210009
Source DB: PubMed Journal: Emerg Med J ISSN: 1472-0205 Impact factor: 3.814
Figure 1. Timeline of events in the development and implementation of the ED clinician report. The timeline of events is shown from initial discussions and development of the clinician report through the last audit and feedback session at the end of data collection for this manuscript. There were six audit and feedback sessions for the duration of this project, including the initial discussion and presentation of the clinician report in April 2018. The last audit and feedback session was held in October 2019, during which data were presented from January 2017 through September 2019, allowing for approximately 18 months of postintervention data to be reviewed and compared with 14 months of preintervention data. Graphic created using Time Graphics (https://time.graphics/).
Figure 2. Example screenshot of the ED clinician report. An example time series showing the monthly number of urine drug screen tests ordered per 1000 patient visits for the entire physician group (ie, department-level data). The clinician report was designed and presented in MicroStrategy Desktop (V.11.1), allowing physicians to interact with their data in a more meaningful manner than with static PDF reports. Filters can be used to view data from different timelines, individuals or groups of physicians, different hospital sites, different tests and so on. Tool tips were created to provide more information on individual data points by simply using the mouse to hover over the data point.
Comparison of the average monthly number of urine drug screen tests ordered, cost, and patient visits preintervention (January 2017–March 2018) and postintervention (April 2018–September 2019)
| Monthly averages: | Preintervention (SD) | Postintervention (SD) | P value* |
| Number of tests ordered | 159 (±17.5) | 52 (±24.5) | <0.0001 |
| Patient visits | 6247 (±414.9) | 6340 (±327.0) | =0.4785 |
| Tests/1000 patient visits | 26 (±3.6) | 8 (±3.9) | <0.0001 |
| Cost of tests/1000 patient visits† | $2465 (±356.6) | $790 (±381.6) | <0.0001 |
*P values from t-tests; P<0.05 was considered significant. P values calculated using GraphPad QuickCalcs (https://www.graphpad.com/quickcalcs/).
†Cost in Canadian dollars (CAD). Individual tests cost $96.10.
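The P values in the table can be reproduced from the reported summary statistics alone. A minimal sketch using Welch's two-sample t-test on the tests/1000 patient visits row; note the sample sizes (n=15 preintervention months, Jan 2017–Mar 2018; n=18 postintervention months, Apr 2018–Sep 2019) are inferred from the stated date ranges, and the paper does not specify which t-test variant was used:

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's two-sample t statistic and Welch-Satterthwaite df,
    computed from summary statistics (means, SDs, sample sizes)."""
    v1, v2 = sd1 ** 2 / n1, sd2 ** 2 / n2
    t = (mean1 - mean2) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df

# Tests/1000 patient visits: 26 (SD 3.6) pre vs 8 (SD 3.9) post.
# Monthly sample sizes (15 and 18) are assumptions inferred from the
# table's date ranges, not stated in the record.
t, df = welch_t(26, 3.6, 15, 8, 3.9, 18)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A t statistic of this magnitude on ~30 degrees of freedom corresponds to p<0.0001, consistent with the table.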
Figure 3. Reduction in urine drug screen test orders following audit and feedback intervention. The control chart displays the drop in the monthly number of urine drug screen test orders (per 1000 patient visits) following the initiation of the audit and feedback intervention in April 2018 (black dashed line). Red dashed lines indicate the upper (UCL) and lower (LCL) three-σ control limits. Highlighted months indicate audit and feedback sessions.
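Three-σ control limits like those in figure 3 are derived from the data rather than set by hand. A minimal sketch, assuming an individuals (XmR) chart in which σ is estimated from the average moving range (the record does not specify which chart type the authors used, and the rates below are illustrative, not study data):

```python
def xmr_limits(series):
    """Centre line and 3-sigma control limits for an individuals (XmR)
    chart; sigma is estimated as mean moving range / 1.128 (d2 for n=2)."""
    centre = sum(series) / len(series)
    moving_ranges = [abs(b - a) for a, b in zip(series, series[1:])]
    sigma = (sum(moving_ranges) / len(moving_ranges)) / 1.128
    return centre - 3 * sigma, centre, centre + 3 * sigma

# Illustrative preintervention-style monthly rates (tests/1000 visits):
monthly_rates = [26, 24, 28, 25, 27, 23, 29, 26]
lcl, cl, ucl = xmr_limits(monthly_rates)
print(f"LCL = {lcl:.1f}, CL = {cl:.1f}, UCL = {ucl:.1f}")
```

Points falling below the LCL after April 2018, as in figure 3, signal a genuine shift rather than common-cause variation.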
Figure 4. Enhanced decrease in urine drug screen test use following an audit and feedback intervention. The red line indicates the number of urine drug screen tests per 1000 patient visits during the preintervention period (preintervention data for this analysis were extended back to January 2015 to allow for better modelling with more data); the linear regression model is shown in orange. The dark blue line shows the number of urine drug screen tests per 1000 patient visits during the postintervention period; the linear regression model is shown in light blue. The regression lines were significantly different (p<0.05), and the postintervention model performed better, explaining a higher proportion of the total variability in test orders over time (adjusted R2=71%) than the preintervention model (adjusted R2=71%).
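The comparison in figure 4 rests on fitting a separate linear trend to each period and judging fit quality via adjusted R². A minimal sketch of the per-period fit using ordinary least squares (the record does not detail the authors' exact modelling approach, e.g. whether a segmented/interrupted time-series model was used, and the series below is illustrative):

```python
def fit_trend(x, y):
    """Ordinary least-squares line y = intercept + slope*x, with
    adjusted R**2 penalising the single slope parameter."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)
    return slope, intercept, adj_r2

# Illustrative postintervention-style series of declining monthly rates:
months = list(range(8))
rates = [20, 17, 15, 12, 10, 8, 6, 5]
slope, intercept, adj_r2 = fit_trend(months, rates)
print(f"slope = {slope:.2f} tests/1000 visits per month, "
      f"adj R2 = {adj_r2:.2f}")
```

A steeper negative slope with a high adjusted R², as in the postintervention period, indicates the decline in ordering is both faster and more consistently linear than before the intervention.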
Figure 5. Identification of high users. An example time series showing the monthly number of urine drug screen tests ordered per 100 patient visits for individual physicians (ie, physician-level data). The clinician report allowed the team to identify specific individuals who were higher than average users of the test and focus an intervention towards these individuals.