
Clinical decision support: effectiveness in improving quality processes and clinical outcomes and factors that may influence success.

Elizabeth V Murphy

Abstract

The use of electronic health records has skyrocketed following the 2009 HITECH Act, which provides financial incentives to health care providers for the "meaningful use" of electronic medical record systems. An important component of the "Meaningful Use" legislation is the integration of Clinical Decision Support Systems (CDSS) into the computerized record, providing up-to-date medical knowledge and evidence-based guidance to the physician at the point of care. As reimbursement is increasingly tied to process and clinical outcomes, CDSS will be integral to future medical practice. Studies of CDSS indicate improvement in preventive services, appropriate care, and clinical and cost outcomes with strong evidence for CDSS effectiveness in process measures. Increasing provider adherence to CDSS recommendations is essential in improving CDSS effectiveness, and factors that influence adherence are currently under study.

Keywords:  clinical decision support systems; clinical practice guidelines; computerized provider order entry; electronic health record; electronic medical records; meta-analysis of randomized controlled trials; preventive services guidelines; quality outcomes; quality process measure

Year:  2014        PMID: 24910564      PMCID: PMC4031792     

Source DB:  PubMed          Journal:  Yale J Biol Med        ISSN: 0044-0086


Introduction

The Health Information Technology for Economic and Clinical Health (HITECH) Act, part of the 2009 American Recovery and Reinvestment Act (ARRA) legislation, provides financial incentives to hospitals and physician practices to adopt and make “meaningful use” of electronic health records (EHR) to improve the quality of patient care. There has been a rapid expansion of EHR use since the enactment of HITECH, with office-based practice use rising from 48 percent in 2009 to 72 percent by 2012 [1]. An essential component of “meaningful use” is the development of EHRs that are capable of computerized physician order entry (CPOE) with clinical decision support systems (CDSS) that will integrate into workflow and facilitate clinical outcome objectives. Clinical decision support systems are still not widespread in the United States, and multiple strategies have been proposed to facilitate their expansion through local EHR systems or through a scalable, standards-based model that can be adapted to diverse EHR systems [2,3]. Although the recent HITECH Act has helped move us toward more widespread use of clinical decision support systems, the potential importance of such systems has been recognized for decades. In his 1968 editorial “Medical Records that Guide and Teach,” Weed asserted that “when large amounts of demographic data are developed, by means of a computer, a system could be developed whereby input of vital statistics on any patient would automatically result in an immediate print-out of his main demographic problems along with current approaches to their management” [4]. 
In Greenes’ groundbreaking work on the development of the Massachusetts General Hospital Utility Multi-Programming System (MUMPS) EHR system in 1969, he recognized that “the medical record … provides the primary means by which quality control, auditing of the medical care process, and research into the diagnosis and treatment of disease can be achieved” [5]. In 1970, “Primer for Writing Medical Data Base for the Clinical Decision Support System” was published by IBM’s elite Advanced Systems Development Division [6]. By 1976, McDonald employed the use of treatment suggestions with reasons, such as “add or increase antihypertensives because last diastolic blood pressure > 100 mmHg,” in a controlled crossover study that used computer-based record systems to print out suggestions for providers. McDonald argued that computerized decision support is needed to prevent errors of oversight in providing appropriate medical care due to “man’s inefficiency as a data processor” [7]. Since the 1980s, there has been a steady increase in scientific studies examining the use of computerized clinical decision support tools in quality of care outcomes [8]. Most of these studies have evaluated CDSS process outcomes, including the ability to facilitate the provider’s ordering of appropriate medications and preventive services and adherence to appropriate care/practice guidelines in the treatment of disease. A smaller number of studies have also attempted to evaluate the effect of CDSS on clinical outcomes, including morbidity and mortality as well as cost outcomes.

CDSS Effectiveness in Preventive Services Processes

Several higher-quality studies, including multiple randomized controlled trials and well-designed quasi-experimental studies, have demonstrated the effectiveness of clinical decision support systems in increasing the appropriate use of preventive services. A 1993 randomized controlled trial compared computer-generated reminders for providers to order preventive services against an intervention using the same reminders with a required response of “done today,” “not applicable to this patient,” “patient refused,” or “next visit.” Intervention physicians complied at 61 percent vs. 49 percent in controls (p = .0007) for fecal occult blood testing and at 54 percent vs. 47 percent (p = .036) for ordering mammograms [9]. A VA cooperative study used computerized reminders for 13 standards of care, including smoking cessation counseling, diabetic foot and eye exams, hypertension counseling, lipid level measurement, and HbA1c and proteinuria testing in diabetics. The intervention group had a 5.5 percent increase in all standards of care (p = .002) over the control group and a 6 to 10 percent increase in diabetic foot, eye, and proteinuria exams, smoking cessation counseling, and pneumococcal vaccination (p = .04 to p < .001) compared with controls [10]. In a randomized controlled trial using computerized reminders for providers caring for hospitalized patients, Dexter and colleagues assessed four appropriate preventive services that had not been ordered on admission. Computerized reminders resulted in a 35.8 percent order rate for pneumococcal vaccine vs. 0.8 percent in controls (p < .001), a 51.4 percent order rate for influenza vaccine vs. 1.0 percent in controls (p < .001), a 32.2 percent order rate for prophylactic heparin vs. 18.9 percent in controls (p < .001), and a 36.4 percent order rate for prophylactic aspirin prescription at discharge vs. 27.6 percent in controls (p < .001) [11]. 
Electronic health record reminders were also found to be effective in increasing guideline-recommended osteoporosis diagnostic testing and treatment in older women who suffered fractures. At 6 months post fracture, reminders resulted in 51.5 percent of patients receiving recommended osteoporosis care vs. 5.9 percent in controls (p < .001) [12]. The effect of clinical decision support systems on prevention processes as well as clinical outcomes was evaluated in two studies that focused on the prevention of deep venous thrombosis in hospitalized and trauma patients. In a randomized controlled trial using computer alerts with a required response, Kucher found that 10 percent of intervention patients vs. 1.5 percent of control patients were prescribed mechanical prophylaxis (p < .001) and 23.6 percent of intervention patients vs. 13.0 percent of control patients were prescribed pharmacologic prophylaxis (p < .001). The intervention group also showed a sizable decrease in the incidence of deep vein thrombosis and pulmonary embolism, with risk for these outcomes reduced by 41 percent (hazard ratio .59, p < .001), though mortality did not differ significantly between the intervention and control groups by the end of the study period [13]. In a Johns Hopkins retrospective cohort study of mandatory clinical decision support for the prophylaxis of deep vein thrombosis in trauma patients, providers were required to fill in an electronic medical record-based questionnaire about the patient’s risk factors, and prophylaxis was initiated when indicated. Practice guideline compliance for prophylaxis increased from 62.2 percent at baseline to 99.5 percent by the end of the 3-year study period (p < .001), and there was an 83 percent decrease (p < .001) in preventable harm venous thromboembolism events (including mortality) over the study period [14]. 
Although the evidence for CDSS effectiveness in preventive services processes is strong, some studies did not produce significant improvement in process outcomes. In a 1994 randomized clinical trial at two inner-city HMO sites, Burack and colleagues investigated the effectiveness of computerized reminders that were mailed to patients and/or placed in the patient’s medical record to advise physicians of the need for mammography referral. Patient reminders had no effect at either site, and physician reminders, alone or combined with patient reminders, had a significant effect only at the second site, where 59 percent of women vs. 43 percent (p < .001) completed mammograms after physician reminders [15]. This study highlights some of the difficulties in assessing CDSS response, since some of the CDSS recommendations were issued months before a mammogram was due, and many factors, including financial ability and the patient’s ability to negotiate with their providers, influenced whether mammograms were performed. In a 2009 RCT conducted by Fiks and colleagues, providing computerized reminders to physicians for administration of flu vaccine to pediatric asthma patients had a small but statistically nonsignificant effect. No reason was recorded for failing to administer flu vaccine, however, and it is possible that the patients had already received the vaccine from another source or that vaccination was postponed for a specific reason [16].

CDSS Effectiveness in Appropriate Care Processes

Clinical decision support systems have also been effective in increasing providers’ adherence to appropriate medical care/practice guidelines, including prescribing guidelines. β-blockers have demonstrated effectiveness in improving survival for patients with left heart failure. In an attempt to increase the appropriate use of β-blockers, Heidenreich et al. performed a randomized controlled trial attaching an electronic reminder statement, noting decreased mortality with the use of β-blockers, to the echocardiogram report for left heart failure patients in the intervention group, with no reminder in the control group. Seventy-four percent of patients in the reminder group vs. 66 percent of patients in the control group were given β-blocker prescriptions (p < .002) [17]. In a prospective cohort study followed by a cluster randomized trial, a computerized decision support system (TREAT) targeted outcomes of appropriate antibiotic use. TREAT is a software program based on a causal probabilistic network (CPN) that predicts the most likely pathogen given the patient’s setting and condition. TREAT prescribed appropriate antibiotics in 70 percent of cases vs. 57 percent of cases for physicians (p < .001). When patients were treated based on TREAT advice on hospital wards, the odds ratio for receiving appropriate treatment was 3.40 compared with controls (95% CI = 2.25-5.14) [18]. In a randomized controlled trial using corollary order suggestions, Overhage and colleagues wrote computer prompts for corollary orders for 87 different tests and treatments. Examples included blood drug levels if certain antibiotics or anti-seizure medications were ordered, liver or kidney function testing if drugs with potential toxicity to kidneys or liver were ordered, blood gases after ventilator setting changes, prothrombin time after Coumadin is prescribed, and electrolytes after IV furosemide. 
Intervention providers ordered the recommended tests 46.3 percent of the time vs. 21.9 percent in the control physician group (p < .0001) [19]. An integrative, immediate-response clinical decision support system was tested in nursing home residents to support safe medication use and dosing in elderly patients with renal insufficiency. Sixty-two medications were entered into the computer, and intervention physicians were alerted about the patient’s creatinine clearance, the appropriate maximum medication dose, the maximum frequency of medication, and medications to avoid given the patient’s abnormal renal function. Control physicians were alerted about the patient’s most recent creatinine level. Intervention and control physicians prescribed similarly appropriate dosing in renal failure patients, but intervention physicians had a 2.4 relative risk (95% CI = 1.4, 4.4) of prescribing an appropriate frequency of medications, a 2.6 relative risk (95% CI = 1.4, 5.0) of avoiding drugs that should not be used in renal failure, and a 1.8 relative risk of ordering appropriate renal function tests when indicated [20]. Chronic disease management is another potential area for use of clinical decision support. McCowan et al. developed a software program for managing asthma with input from a software engineer, a statistician, a general practitioner, an asthma nurse, and a pulmonologist. The software was then tested at local and national conferences by a large number of general practitioners and practice nurses. Randomly assigned intervention practices used the software with their patients over a 6-month period. Fewer intervention patients initiated practice consultations with their providers during this period, 22 percent in intervention vs. 34 percent in controls, OR .59 (CI = .37-.95), and fewer intervention patients suffered acute asthma exacerbations, 8 percent vs. 17 percent, OR .43 (CI = .21-.85) [21]. 
Bell and colleagues evaluated adherence to National Asthma Education Prevention Guidelines among physicians caring for pediatric asthma patients and found that computerized reminders and alerts led to a 6 percent increase in prescriptions for controller medications in the intervention group (p = .006) and a 3 percent increase in the use of spirometry in urban practices (p = .04). Having an up-to-date asthma care plan increased 14 percent (p = .03), and use of spirometry increased by 6 percent (p = .003) in the intervention suburban practices [22]. A study of the Vermont Diabetes Information System, a diabetes registry and decision support system, used computer systems to track laboratory testing, including labs that monitor glucose control, kidney function, proteinuria, and cholesterol, in order to monitor diabetes control and prevention of potential complications. Providers were given summaries of patients’ results with decision support, and patients were sent alerts for out-of-range tests and reminders of appointments. This study is important because it resulted in a decreased probability of hospitalization, 0.17 versus 0.20 in controls (p = .01), and fewer emergency room visits in the intervention group compared with controls, at 0.27 vs. 0.36 (p < .0001), a 25 percent reduction in ER visits. A statistically significant cost savings of 11 percent for hospitalizations and 27 percent for ER visits was also realized [23]. The Mobile Diabetes Intervention Study combined clinical decision support for community (non-academic) providers with mobile tracking of diabetic patients in the community and a physician-patient communication portal. The maximum intervention included provider clinical decision support and patient monitoring using a mobile device with patient input about finger stick results, diet, and other issues. 
It also included a patient-based decision support portal with a computer “coach” that texted the patient with information, feedback, and encouragement related to the input, as well as a patient-provider portal. The mean HbA1c level in the intervention group was 9.9 percent prior to the program and decreased by 1.9 percent (to 7.9 percent) over a year, vs. a .7 percent decrease in controls (p < .001). This is clinically and statistically significant [24].

Meta-Analysis in Evaluation of Effectiveness

In a 2012 AHRQ Evidence Report/Technology Assessment on clinical decision support and knowledge management, Lobach et al. identified 15,176 citations, including 1,407 full-text articles [8]. After the studies were evaluated for quality, 323 articles were abstracted for evaluation. One hundred forty-eight randomized controlled trials were used in a meta-analysis to evaluate for evidence of process or clinical outcome improvement and/or cost reduction with clinical decision support. These findings were summarized in table form (abstracted here, in Table 1) in another publication [25].
Table 1

Summary of Evidence, by Outcome (abstracted from “Table. Summary of Evidence, by Outcome,” Bright TJ, et al., 2012).

Outcome | Evidence Strength | Studies (Quality Rating), n | Meta-Analysis Result (95% CI) | Studies in Meta-Analysis, n | Other Substantial Findings
Length of stay | Low | 6 (6 good) | RR, 0.96 (0.88–1.05) favoring CDSS | 5 | Limited evidence that CDSSs that automatically delivered system-initiated recommendations to providers were effective or demonstrated a trend toward reducing length of stay
Morbidity | Moderate | 22 (13 good, 7 fair, 2 poor) | RR, 0.88 (0.80–0.96) favoring CDSS | 16 | Modest evidence from academic and community inpatient and ambulatory settings that locally developed CDSSs that automatically delivered system-initiated recommendations to providers synchronously at the point of care were effective or demonstrated a trend toward reducing patient morbidity
Mortality | Low | 7 (6 good, 1 fair) | OR, 0.79 (0.54–1.15) favoring CDSS | 6 | Limited evidence that CDSSs integrated in CPOE or EHR systems that automatically delivered system-initiated recommendations to providers were effective or demonstrated a trend toward reducing patient mortality
Adverse events | Low | 5 (3 good, 1 fair, 1 poor) | RR, 1.01 (0.90–1.14) favoring control | 5 | Limited evidence from academic settings that CDSSs that delivered recommendations to providers synchronously at the point of care demonstrated an effect on reducing or preventing adverse events
Health care process measures: recommended preventive care service ordered or completed | High | 43 (20 good, 16 fair, 7 poor) | OR, 1.42 (1.27–1.58) favoring CDSS | 25 | Strong evidence from studies conducted in academic, VA, and community inpatient and ambulatory settings that locally and commercially developed CDSSs that automatically delivered system-initiated recommendations to providers synchronously at the point of care and did not require a mandatory clinician response were effective at improving the appropriate ordering of preventive care procedures
Recommended clinical study ordered or completed | Moderate | 29 (16 good, 9 fair, 4 poor) | OR, 1.72 (1.47–2.00) favoring CDSS | 20 | Modest evidence from studies conducted in academic and community inpatient and ambulatory settings that CDSSs integrated in CPOE or EHR systems and locally and commercially developed CDSSs that automatically delivered system-initiated recommendations to providers synchronously at the point of care and did not require a mandatory clinician response were effective at improving the appropriate ordering of clinical studies
Recommended treatment ordered or prescribed | High | 67 (35 good, 24 fair, 8 poor) | OR, 1.57 (1.35–1.82) favoring CDSS | 46 | Strong evidence from academic, community, and VA inpatient and ambulatory settings that locally and commercially developed CDSSs integrated in CPOE or EHR systems that automatically delivered system-initiated recommendations to providers synchronously at the point of care and did not require a mandatory clinician response were effective at improving appropriate treatment ordering or prescribing
The meta-analysis revealed strong evidence that clinical decision support can improve process outcomes, including increased preventive services, with an odds ratio of 1.42 (95% CI = 1.27, 1.58), and increased ordering of appropriate medical treatment, odds ratio 1.57 (95% CI = 1.35, 1.82). There is moderate evidence that CDSS improves the ordering and completion of appropriate clinical studies, odds ratio 1.72 (95% CI = 1.47, 2.00), and moderate evidence that CDSS can decrease morbidity, RR 0.88 (95% CI = 0.80, 0.96). The strength of evidence that CDSS lowers mortality, costs, or adverse events is poor, but there were far fewer studies in these areas. It is also less likely that randomized controlled trials will demonstrate evidence of decreased mortality from chronic diseases or cancer, since this outcome would usually occur after several years, and RCTs usually do not continue for longer periods because they are costly and work intensive. There is evidence, however, that many of the preventive services increased in the CDSS studies are correlated with decreased mortality.
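To put the pooled odds ratios in perspective, an odds ratio can be translated into an approximate relative risk once a baseline event rate is assumed. A minimal sketch using the standard Zhang-Yu approximation (the baseline compliance rates below are hypothetical illustrations, not figures from the meta-analysis):

```python
def or_to_rr(odds_ratio: float, baseline_risk: float) -> float:
    """Approximate relative risk from an odds ratio, given the
    control-group event rate (Zhang-Yu approximation)."""
    return odds_ratio / ((1 - baseline_risk) + baseline_risk * odds_ratio)

# Pooled ORs from Table 1
preventive_or = 1.42  # preventive care service ordered or completed
treatment_or = 1.57   # recommended treatment ordered or prescribed

# At an assumed (hypothetical) 50 percent baseline compliance rate,
# the ORs correspond to more modest relative risks:
print(round(or_to_rr(preventive_or, 0.50), 2))  # ~1.17
print(round(or_to_rr(treatment_or, 0.50), 2))   # ~1.22
```

The gap between OR and RR is why an OR of 1.42 should not be read as a 42 percent increase in preventive services ordered; the effective increase depends on how common the behavior already was in the control group.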

What Features of CDSS Will Make It More Effective?

Another question addressed in the AHRQ Technology Assessment is: What features of CDSS contribute to making it more effective [8]? Kawamoto and Lobach attempted to answer this question in a 2005 meta-regression analysis of 70 studies, examining 15 factors identified in prior studies as possibly relevant to CDSS success [26]. Features determined to be significant in this study are highlighted in Table 2. They included “support presented at the time of the decision, computer based support, support that included a recommendation rather than just an assessment and automatic provision of decision support as part of workflow.” It should be noted that the confidence intervals for some of these features were very broad, with one confidence interval including infinity.
Table 2

Features that Contribute to CDSS Recommendation Adherence, by Author.

Citation | Study Type/Number of Studies in Analysis | Features Evaluated | Successful Features (evidence)/Recommendations
[26] | Systematic review of randomized controlled trials, n = 70 | Integration with charting or order entry; computer-based generation of decision support; local user involvement in development; clinician-system interactive features; automatic provision of decision support as part of clinician workflow; provision at time and location of decision making; request for documentation of reason for not following system recommendations; provision of a recommendation, not just an assessment; promotion of action rather than inaction; justification via provision of research evidence/reasoning; provision of decision support results to both clinician and patient; CDSS accompanied by periodic performance feedback; CDSS accompanied by conventional education | Automatic provision of decision support as part of clinician workflow, OR 112 (12.9, infinity); provision at time and location of decision making, OR 15.4 (1.3, 300.6); provision of a recommendation, not just an assessment, OR 7.1 (1.3, 49.0); computer-based generation of decision support, OR 6.3 (1.2, 45)
[8] | Meta-analysis of 91 randomized controlled trials | Integration with charting or order entry; computer-based generation of decision support; local user involvement in development; clinician-system interactive features; automatic provision of decision support as part of clinician workflow; provision at time and location of decision making; request for documentation of reason for not following system recommendations; provision of a recommendation, not just an assessment; promotion of action rather than inaction; justification via provision of research evidence/reasoning; provision of decision support results to both clinician and patient; CDSS accompanied by periodic performance feedback; CDSS accompanied by conventional education; no need for additional clinician data entry | Automatic provision of decision support as part of clinician workflow, OR 1.45 to 1.85*; provision at time and location of decision making, OR 1.35 to 1.78*; provision of a recommendation, not just an assessment, OR 1.5 to 2.01*; integration with charting or order entry, OR 1.47 to 1.67*; no need for additional clinician data entry, OR 1.43 to 1.78*; promotion of action rather than inaction, OR 1.28 to 1.71*; provision of decision support results to both clinician and patient, OR 1.18 to 1.97*; local user involvement in the development process, OR 1.45 to 1.90
[27] | Meta-regression analysis of 162 randomized controlled trials | Primary factor set: some of study’s authors are also system’s developers; system provides advice automatically within practitioner’s workflow; system provides advice at time of care; advice presented in electronic charting or order entry systems; provides advice for patients; requires reason for over-ride | Systems providing advice for patients in addition to practitioners, OR 2.77 (1.07 to 7.17); required practitioners to provide a reason for over-ride, OR 11.23 (1.98 to 63.72); were evaluated by their developers, OR 4.35 (1.66 to 11.44)

*depends on type of care intervention

In the AHRQ Technology Assessment, Lobach performed a meta-analysis on 91 studies that identified nine features associated with increased CDSS effectiveness [8]. The study features and outcome features are presented in Table 2. The successful features overlapped with those in the Kawamoto study but added others, such as “no need for additional clinician data entry,” “provision of decision support to patients as well as providers,” and “local user involvement in the development process.” Odds ratios for effective features were between 1.18 and 3.00, with confidence intervals much narrower than in the Kawamoto study [26]. In 2013, Roshanov performed a meta-regression analysis on 162 randomized controlled trials to evaluate which CDSS features were effective, and his results were quite different from those of the Kawamoto [26] and Lobach [8] studies [27]. In the prior two meta-analyses, if a factor was not stated to be present, it was assumed to be absent; in Roshanov’s study, authors were contacted and asked whether missing features were present. Roshanov also found problems with the design of previous studies, citing spuriously favorable results from testing “more factors than their study sample size could reliably support.” He opines that in order to test 15 primary factors, “Kawamoto would have required 460 studies to reliably test (the factors).” Roshanov limited his study to six primary factors for 162 RCTs, and only three of these factors were favorable (Table 2). As in Lobach’s 2012 study, systems that provided advice to patients as well as practitioners were more likely to be effective, OR 2.77 (CI = 1.07, 7.17). Systems that required practitioners to supply reasons for overriding advice were also likely to be effective, OR 11.23 (CI = 1.98, 63.72). He also notes that systems evaluated by their own developers were more likely to be effective, OR 4.35 (CI = 1.66, 11.44). 
Roshanov opines that this may be due to bias by the system developers, and he encourages third-party evaluation of these systems. Surprisingly, systems that presented advice in the order entry system interface were less likely to be effective (OR 0.37; CI = .17, .80). Lobach responded to this paradoxical result in Roshanov’s study in an editorial titled “The Road to Effective Clinical Decision Support: Are We There Yet?” His answer was “no,” and he expressed particular concern about the appearance of an adverse effect from electronic reminders at the point of care. He asks whether the negative association is due to “alert fatigue,” “integrated systems (being) too distractive,” or the “systems being user hostile in some other way” [29].

Conclusions and Outlook

While it is becoming increasingly clear that clinical decision support is effective in improving clinical processes, it may take some time to fully realize its potential in improving health care quality and outcomes. Though most of the processes improved through CDSS, such as adherence to mammography or colonoscopy guidelines, have known effectiveness in improving clinical outcomes, further research is needed to ensure that CDSS is applying guidelines appropriately and that better clinical outcomes are realized. Research on clinical outcomes and cost is much less mature than research on process outcomes, but clinical outcome studies have increased in recent years. Long-term prospective cohort studies or well-designed retrospective cohort studies may give more information on CDSS effects on morbidity and mortality in the future, as most current studies do not assess long-term outcomes. It will likely take more trial and error to optimize the penetration of clinical decision support recommendations into actual provider practice. CDSS recommendations are most often derived from evidence-based practice guidelines, and increasing provider adherence to practice guidelines has been an ongoing challenge [28]. There is continuing uncertainty regarding the most effective means for advancing evidence through decision support systems, and this is an important area for future research. In the “Ten Commandments for Effective Clinical Decision Support,” Bates argues that making CDS systems user-friendly and well integrated into the workflow, with ongoing knowledge updates, will lead to great advances in evidence-based medicine [30]. Sittig’s “five rights” for clinical decision support include the “right information, to the right person, in the right format, through the right channel, at the right point in workflow” [31]. 
Kawamoto and Lobach [26] proposed many variables supporting clinical decision support effectiveness, and these were later refined to nine factors for success [8], but these theories have been brought into question by Roshanov’s finding that success is most strongly correlated with requiring providers to explain failures to comply with recommendations and with giving the support advice directly to patients [27]. None of these studies considered financial incentives or the possible impact of the Meaningful Use Clinical Quality Measure mandates on adherence to clinical decision support recommendations. Many of the Meaningful Use measures are processes that were improved by clinical decision support systems in randomized controlled trials, so implementation and reporting on these measures has good potential to facilitate compliance with decision support recommendations. Roshanov’s findings suggest that physician compliance with decision support is better when there is an outside incentive, such as saving time (and avoiding possible audit) by choosing a recommendation rather than explaining why it was not chosen, or maintaining patients’ respect and trust by complying with their knowledge-based requests regarding their care. Perhaps the monetary incentive in Meaningful Use will also be effective in changing provider behavior. Will the “right incentive” be added to the five rights? Finally, the evidence from numerous well-designed studies is starting to confirm the importance of patient-centered care and of the patient as a partner in improving health care quality through clinical decision support. Following meta-analyses of dozens of randomized controlled trials, both Lobach [8] and Roshanov [27] found that providing clinical decision support advice to patients in addition to providers resulted in increased adherence to CDS recommendations (OR 1.78 to 2.77). 
Perhaps we have underestimated our patients’ ability to understand the underlying information and evidence in practice guidelines and to advocate for the quality of their care. Patient-directed clinical decision support may well be an important frontier in improving health care quality and is at least worthy of more focused exploration.
References (30 in total)

Review 1.  Why don't physicians follow clinical practice guidelines? A framework for improvement.

Authors:  M D Cabana; C S Rand; N R Powe; A W Wu; M H Wilson; P A Abboud; H R Rubin
Journal:  JAMA       Date:  1999-10-20       Impact factor: 56.272

2.  The effect of the Vermont Diabetes Information System on inpatient and emergency room use: results from a randomized trial.

Authors:  Shamima Khan; Charles D Maclean; Benjamin Littenberg
Journal:  Health Outcomes Res Med       Date:  2010-07

3.  Impact of electronic health record-based alerts on influenza vaccination for children with asthma.

Authors:  Alexander G Fiks; Kenya F Hunter; A Russell Localio; Robert W Grundmeier; Tyra Bryant-Stephens; Anthony A Luberti; Louis M Bell; Evaline A Alessandrini
Journal:  Pediatrics       Date:  2009-07       Impact factor: 7.124

4.  The road to effective clinical decision support: are we there yet?

Authors:  David F Lobach
Journal:  BMJ       Date:  2013-03-13

5.  Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials.

Authors:  Pavel S Roshanov; Natasha Fernandes; Jeff M Wilczynski; Brian J Hemens; John J You; Steven M Handler; Robby Nieuwlaat; Nathan M Souza; Joseph Beyene; Harriette G C Van Spall; Amit X Garg; R Brian Haynes
Journal:  BMJ       Date:  2013-02-14

6.  Electronic health record-based decision support to improve asthma care: a cluster-randomized trial.

Authors:  Louis M Bell; Robert Grundmeier; Russell Localio; Joseph Zorc; Alexander G Fiks; Xuemei Zhang; Tyra Bryant Stephens; Marguerite Swietlik; James P Guevara
Journal:  Pediatrics       Date:  2010-03-15       Impact factor: 7.124

7.  Improving residents' compliance with standards of ambulatory care: results from the VA Cooperative Study on Computerized Reminders.

Authors:  J G Demakis; C Beauchamp; W L Cull; R Denwood; S A Eisen; R Lofgren; K Nichol; J Woolliscroft; W G Henderson
Journal:  JAMA       Date:  2000-09-20       Impact factor: 56.272

8.  Electronic alerts to prevent venous thromboembolism among hospitalized patients.

Authors:  Nils Kucher; Sophia Koo; Rene Quiroz; Joshua M Cooper; Marilyn D Paterno; Boris Soukonnikov; Samuel Z Goldhaber
Journal:  N Engl J Med       Date:  2005-03-10       Impact factor: 91.245

9.  Requiring physicians to respond to computerized reminders improves their compliance with preventive care protocols.

Authors:  D K Litzelman; R S Dittus; M E Miller; W M Tierney
Journal:  J Gen Intern Med       Date:  1993-06       Impact factor: 5.128

Review 10.  Enabling health care decisionmaking through clinical decision support and knowledge management.

Authors:  David Lobach; Gillian D Sanders; Tiffani J Bright; Anthony Wong; Ravi Dhurjati; Erin Bristow; Lori Bastian; Remy Coeytaux; Gregory Samsa; Vic Hasselblad; John W Williams; Liz Wing; Michael Musty; Amy S Kendrick
Journal:  Evid Rep Technol Assess (Full Rep)       Date:  2012-04
