Andrew Kouri, Janet Yamada, Jeffrey Lam Shin Cheung, Stijn Van de Velde, Samir Gupta.
Abstract
BACKGROUND: Computerized clinical decision support systems (CDSSs) are a promising knowledge translation tool, but often fail to meaningfully influence the outcomes they target. Low CDSS provider uptake is a potential contributor to this problem but has not been systematically studied. The objective of this systematic review and meta-regression was to determine reported CDSS uptake and identify which CDSS features may influence uptake.
Year: 2022 PMID: 35272667 PMCID: PMC8908582 DOI: 10.1186/s13012-022-01199-3
Source DB: PubMed Journal: Implement Sci ISSN: 1748-5908 Impact factor: 7.327
CDSS uptake features
| 1. Was there a formal process to identify barriers and enablers to current behaviour prior to the CDSS study? e.g. mapping barriers and enablers to intervention components | |
| 2. Was there a previous study using a CDSS targeting the same primary outcome as the current study, and was that outcome significantly improved by CDSS? | |
| 3. Is there a gap between the desired and the baseline clinical behaviour identified by study authors? | |
| 4. Has the availability and quality of the patient data needed to inform the CDSS been formally evaluated? e.g. chart review, validation of patient-facing electronic questionnaires | |
| 5. Does use of the CDSS enable improvement of the quality of patient data compared to current standard of care? e.g. electronic collection of data, including patient-reported outcomes | |
| 6. Was there a formal pre-study evaluation of user perceptions that assessed informational needs and/or perceived benefit to using CDSS, and if so, was it positive? | |
| 7. Was specific additional hardware (other than what was already present as part of usual care) required and available for the CDSS? | |
| 8. Does the use of CDSS negatively impact the function of existing information systems? e.g. causing new technical issues or slower electronic health record function | |
| 9. Was a formal workflow analysis conducted prior to formalization of the intervention and did it demonstrate intervention feasibility? | |
| 10. If a workflow analysis was performed, did it demonstrate that baseline workflow would allow the introduction of the CDSS? | |
| 11. Are the developers from an academic centre, and do they report no significant conflict of interest? | |
| 12. Is the CDSS advice based on disease-specific guidelines? | |
| 13. Does the CDSS present its reasoning and/or cite research evidence to the user at the time of advice? | |
| 14. Does the CDSS present the harms/benefits of provided guidance? | |
| 15. Was the CDSS pilot tested and was the accuracy of information specifically assessed? | |
| 16. Was there a post-study evaluation of users and were their information needs addressed? | |
| 17. Does the CDSS clearly explain/indicate why it was triggered for specific patients/situations? | |
| 18. Was there a pre- or post-study evaluation of users and was CDSS information/advice clear? | |
| 19. Is the CDSS advice available in the location and software system in which it will be implemented? | |
| 20. Does the CDSS advice contradict any current guidelines? | |
| 21. Were there any issues with the amount of decision support delivered if the CDSS was pilot tested? | |
| 22. Was there a formal usability evaluation performed for the CDSS and was it found to be usable? | |
| 23. Was there a pre- or post-study evaluation of users and was workflow facilitation found to be positive? | |
| 24. Can the system be customized to provide improved user functionality? | |
| 25. Is the system always up and running? | |
| 26. Was there a pre- or post-study evaluation of users and was the advice delivery format found to be appropriate? | |
| 27. Was there a pre- or post-study evaluation of users and was the visual display/graphic design of CDSS advice found to be appropriate? | |
| 28. If the CDSS used specific functions for prioritized decision support (e.g. pop-ups), were they pilot tested or assessed in a post-study evaluation? | |
| 29. Does the CDSS provide advice directly to users who will be making the relevant clinical decisions? | |
| 30. Does CDSS facilitate collaboration between healthcare providers? | |
| 31. Does the CDSS provide advice at the moment and point-of-need? | |
| 32. Was information about the CDSS available to users (i.e. practical instructions)? | |
| 33. Are dedicated personnel and/or web- or paper-based resources available to CDSS users for technical support (e.g. help desk, tech support)? | |
| 34. Was user training provided for the CDSS? | |
| 35. Were other barriers to the behaviour changes being targeted by the CDSS discussed (e.g. medication costs), and if so were strategies implemented to address those barriers? | |
| 36. Was CDSS implemented in temporal steps? | |
| 37. Was CDSS usage and performance evaluated during the study? | |
| 38. If CDSS usage and performance was monitored during the study, were there strategies in place to fix any identified problems? | |
| 39. Were local users consulted during the intervention planning or implementation? | |
| 40. Was there discussion of an overall strategy (i.e. Knowledge Translation strategy) to guide the CDSS initiative? | |
| 41. Was the CDSS developer involved in authorship of the study? | |
| 42. Was CDSS advice provided automatically in the practitioner’s workflow? | |
| 43. Did the CDSS provide advice for patients (e.g. educational material)? | |
| 44. Did the CDSS require a reason for override of use/recommendations? | |
| 45. Does the CDSS have a critiquing function for actions? (i.e. it activates after orders are entered, suggesting they should be cancelled or revised) | |
| 46. Does the practitioner have to enter data directly into the CDSS? | |
| 47. Does the CDSS provide advice or reminders directly to patients (i.e. independent of the clinician)? | |
| 48. Was the CDSS a commercial product? | |
| 49. Did practitioners receive advice directly through an electronic interface? | |
| 50. Did the CDSS target healthcare practitioners other than physicians? | |
| 51. Was periodic performance feedback provided in addition to patient-specific system advice? | |
| 52. Was there a co-intervention in the CDSS group? |
Fig. 1 Flow diagram of the review process
Summary of characteristics of included studies (n = 55)
| Study characteristics | Number (%) |
|---|---|
| Trial design | |
| Cluster randomized trial | 31 (56.4) |
| Non-cluster randomized trial | 17 (30.9) |
| Non-randomized designs | 7 (12.7) |
| Setting | |
| Outpatient | 41 (74.6) |
| Inpatient | 8 (14.6) |
| Emergency department | 6 (10.9) |
| Study duration | |
| Months, mean (SD) | 10.3 (5.1) |
| Location | |
| USA | 36 (65.5) |
| Other | 19 (34.5) |
| Population | |
| Adult | 46 (83.6) |
| Paediatric | 9 (16.4) |
| Publication date | |
| 2010 or later | 42 (76.4) |
| Before 2010 | 13 (23.6) |
| Uptake type reported^a | |
| Patient level | 32 (58.2) |
| Event level | 16 (29.1) |
| Clinician level | 11 (20.0) |
^a Some studies reported more than one level of uptake
Multivariable meta-regression model for CDSS uptake, controlling for uptake type (n = 58)
| CDSS features | Estimate (95% CI) | P value |
|---|---|---|
| Uptake type | ||
| Clinician | Ref | |
| Patient | − 1.09 (− 2.63, 0.45) | 0.17 |
| Event | − 0.85 (− 3.70, 0.01) | 0.34 |
| Feature 4: Has the availability and quality of the patient data needed to inform the CDSS been formally evaluated? | ||
| No (37/58) | Ref | |
| Yes (21/58) | 1.21 (0.24, 2.19) | |
| Feature 35: Were other barriers to the behaviour changes being targeted by the CDSS discussed (e.g. medication costs), and if so were strategies implemented to address those barriers? | ||
| No (51/58) | Ref | |
| Yes (7/58) | 1.64 (0.22, 3.07) | |
| Feature 15: Was the CDSS pilot tested and was the accuracy of information specifically assessed? | ||
| No (40/58) | Ref | |
| Yes (20/58) | − 0.96 (− 1.98, 0.05) | 0.06 |
| Feature 7: Was specific additional hardware (other than what was already present as part of usual care) required and available for the CDSS? | ||
| No (51/58) | Ref | |
| Yes (7/58) | 0.97 (− 0.40, 2.34) | 0.17 |
| Feature 44: Did the CDSS require reason for override of use/recommendations? | ||
| No (51/58) | Ref | |
| Yes (7/58) | 0.90 (− 0.57, 2.37) | 0.23 |
| Feature 52: Was there a co-intervention in the CDSS group? | ||
| No (37/58) | Ref | |
| Yes (21/58) | 0.47 (− 0.53, 1.48) | 0.36 |
| Feature 28: If the CDSS used specific functions for prioritized decision support (e.g. pop-ups), were they pilot tested or assessed in a post-study evaluation? | ||
| No (42/58) | Ref | |
| Yes (15/58) | 0.36 (− 0.60, 1.32) | 0.46 |
| NA (1/58) | ||
| Feature 38: If CDSS usage and performance was monitored during the study, were there strategies in place to fix any identified problems? | ||
| No (14/58) | Ref | |
| Yes (19/58) | 0.23 (− 0.74, 1.21) | 0.64 |
| NA (25/58) | ||
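The model above reports logit-scale estimates for binary CDSS features, controlling for uptake type. As a rough illustration only (this is not the authors' code or data, and the weighting scheme is an assumption), a minimal inverse-variance-weighted meta-regression on logit-transformed uptake proportions could be sketched with synthetic inputs like this:

```python
import numpy as np

# Hedged sketch: logit-transform each study's uptake proportion, then fit
# an inverse-variance-weighted least-squares regression on binary feature
# indicators. All data below are synthetic stand-ins, not the review's.
rng = np.random.default_rng(0)
n_studies = 58

n_eligible = rng.integers(50, 500, size=n_studies)   # uptake denominator per study
p_uptake = rng.uniform(0.2, 0.9, size=n_studies)     # observed uptake proportion
feature4 = rng.integers(0, 2, size=n_studies)        # e.g. "data quality evaluated"
feature35 = rng.integers(0, 2, size=n_studies)       # e.g. "other barriers addressed"

y = np.log(p_uptake / (1 - p_uptake))                # logit of uptake
# Delta-method variance of a logit-transformed proportion is
# 1 / (n * p * (1 - p)); its reciprocal serves as the weight.
w = n_eligible * p_uptake * (1 - p_uptake)

X = np.column_stack([np.ones(n_studies), feature4, feature35])
# Weighted normal equations: (X' W X) beta = X' W y
beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
print(beta)  # intercept plus one logit-scale estimate per feature
```

A positive coefficient for a feature indicator would correspond to higher logit-scale uptake in studies reporting that feature, analogous to the positive estimates shown for Features 4 and 35 above.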