
A system uptake analysis and GUIDES checklist evaluation of the Electronic Asthma Management System: A point-of-care computerized clinical decision support system.

Jeffrey Lam Shin Cheung1, Natalie Paolucci1, Courtney Price1, Jenna Sykes2, Samir Gupta1,2.   

Abstract

OBJECTIVE: Computerized clinical decision support systems (CCDSSs) promise improvements in care quality; however, uptake is often suboptimal. We sought to characterize system use, its predictors, and user feedback for the Electronic Asthma Management System (eAMS), an electronic medical record system-integrated, point-of-care CCDSS for asthma, and applied the GUIDES checklist as a framework to identify areas for improvement.
MATERIALS AND METHODS: The eAMS was tested in a 1-year prospective cohort study across 3 Ontario primary care sites. We recorded system usage by clinicians and patient characteristics through system logs and chart reviews. We created multivariable models to identify predictors of (1) CCDSS opening and (2) creation of a self-management asthma action plan (AAP) (final CCDSS step). Electronic questionnaires captured user feedback.
RESULTS: Over 1 year, 490 asthma patients saw 121 clinicians. The CCDSS was opened in 205 of 1033 (19.8%) visits and an AAP created in 121 of 1033 (11.7%) visits. Multivariable predictors of opening the CCDSS and producing an AAP included clinic site, having physician-diagnosed asthma, and presenting with an asthma- or respiratory-related complaint. The system usability scale score was 66.3 ± 16.5 (maximum 100). Reported usage barriers included time and system accessibility.
DISCUSSION: The eAMS was used in a minority of asthma patient visits. Varying workflows and cultures across clinics, physician beliefs regarding asthma diagnosis, and relevance of the clinical complaint influenced uptake.
CONCLUSIONS: Considering our findings in the context of the GUIDES checklist helped to identify improvements to drive uptake and provides lessons relevant to CCDSS design across diseases.
© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association.

Keywords:  GUIDES checklist; asthma; computerized clinical decision support system; electronic medical record system; primary care

Year:  2020        PMID: 32274495      PMCID: PMC7309244          DOI: 10.1093/jamia/ocaa019

Source DB:  PubMed          Journal:  J Am Med Inform Assoc        ISSN: 1067-5027            Impact factor:   4.497


INTRODUCTION

Background and Significance

Asthma is a chronic disease affecting 334 million people globally, with a U.S. annual economic burden exceeding $80 billion in 2013. Asthma can be effectively controlled in most patients by combining pharmacotherapeutic and nonpharmacotherapeutic strategies. However, up to 59% of patients with asthma remain poorly controlled. This discrepancy is largely attributable to gaps between evidence-based asthma guidelines and real-world practice. With the growing use of electronic medical record (EMR) systems, integrated, point-of-care computerized clinical decision support systems (CCDSSs) have been promoted as a promising strategy for improving health care quality. Systematic reviews have shown CCDSSs to be effective in improving practitioner performance; however, simply developing and providing such technology does not ensure its uptake. CCDSS uptake is variable and often suboptimal, and predictors of uptake are poorly defined. As a result, it is difficult to determine the optimal CCDSS design and implementation strategy for maximizing uptake and performance outcomes.

OBJECTIVE

To address asthma care gaps, we developed and tested the Electronic Asthma Management System (eAMS)—a point-of-care CCDSS. Herein, we aimed to describe the nature and frequency of system usage, predictors of usage, and clinician feedback. Applying these data, we sought to evaluate the eAMS using the recently published GUIDES checklist—an instrument designed to support successful CCDSS implementation—as a framework to identify areas for system improvement.

MATERIALS AND METHODS

Study setting

We conducted a prospective cohort study, measuring use of the eAMS over 12 months in a convenience sample of 3 primary care sites: 2 academic family health teams in Hamilton, Ontario, Canada (population: 536 917) (sites 1 and 2), and 1 nonacademic, community-based team in Brampton, Ontario, Canada (population: 593 638) (site 3). All clinics used the OSCAR EMR system (http://oscarcanada.org), were under a capitated funding model, and had no asthma educators on site. The study protocol was approved by the St. Michael’s Hospital Research Ethics Board (REB 10-052) and the McMaster Research Ethics Board (REB 11-363), and each clinician provided written informed consent. A champion primary care physician at each site invited all physicians and nurse practitioners (NPs) to participate. The intervention was available to each consenting clinician’s patients with asthma ≥16 years of age. Patients with asthma were identified by running a validated EMR system search algorithm on each consenting clinician’s patient roster list, and then asking each clinician to vet the generated list for omissions or errors.

System design

Key asthma care gaps include (1) failure to assess asthma control according to guideline criteria (to identify poorly controlled patients requiring more intensive management), (2) inadequate guideline-based pharmacotherapy adjustment for current control level, and (3) failure to produce a self-management asthma action plan (AAP)—an individualized plan created by a healthcare provider that outlines strategies for self-managing acute loss of asthma control. The eAMS consists of 3 components: (1) a patient-facing tablet-based questionnaire that collects asthma-related information, (2) an EMR notification that alerts clinicians if patient recommendations are available, and (3) an EMR-integrated 5-screen CCDSS. All actions are also documented in an automatically transcribed chart note.

Patient questionnaire

Patients with asthma (identified through the search algorithm, as previously described) were instructed to complete the 5- to 10-minute questionnaire on a tablet device provided by a receptionist or liaison in the clinic waiting room before their visit. The questionnaire assessed the patient’s asthma control level (using guideline-recommended symptom-based criteria); medication use (including dose/frequency); and details required to personalize the AAP (eg, patient-specific symptoms, activities, triggers, allergies). An embedded message also asked patients to prompt the clinician to provide the AAP (the AAP was described as a 1-page, personalized document providing instructions on what to do if their asthma got worse [ie, patient-mediated behavior change]). Before launch, the questionnaire’s content and usability were evaluated and optimized through serial focus group testing involving asthma patients with varying touch-device experience.

EMR notification

Upon opening the EMR chart, a notification appeared automatically and alerted the clinician to the patient’s asthma status. If a patient had not completed the questionnaire, the notification simply reminded the clinician to have the patient complete it; if they had completed it, it indicated the patient’s asthma control level and invited the clinician to click therein to open the CCDSS (which was available for 28 days after each time the patient completed the questionnaire) (Figure 1). This notification disappeared automatically after 10 seconds, or could be dismissed immediately with 1 click. However, the system remained accessible through a widget on the EMR screen.
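The notification behavior described above (a completion reminder when no questionnaire is on file, and a control-level alert with a 28-day access window when one is) can be sketched roughly as follows. This is a hypothetical illustration; the function and field names are assumptions, not taken from the eAMS source code:

```python
from datetime import date, timedelta

# Hypothetical sketch of the EMR notification logic; all names are illustrative.
CCDSS_WINDOW_DAYS = 28  # CCDSS stays accessible for 28 days after questionnaire completion

def notification_message(questionnaire_date, control_level, today):
    """Return the notification text shown when the patient's chart is opened."""
    if questionnaire_date is None:
        # No questionnaire on file: remind the clinician to have the patient complete it
        return "Please have the patient complete the asthma questionnaire."
    if today - questionnaire_date <= timedelta(days=CCDSS_WINDOW_DAYS):
        # Recent questionnaire: surface the control level and the link into the CCDSS
        return f"Asthma control: {control_level}. Click to open the CCDSS."
    # Questionnaire data have expired: fall back to the completion reminder
    return "Please have the patient complete the asthma questionnaire."
```

In the deployed system, the notification also auto-dismissed after 10 seconds and remained reachable through an EMR widget; those interface behaviors are omitted from this sketch.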
Figure 1.

Computerized clinical decision support system (CCDSS) screenshots. (A) Upon opening the patient’s electronic chart in OSCAR, the clinician is presented with a notification which describes the patient’s asthma control level and prompts the clinician to click to open the CCDSS. (B) Screen 1 presents the patient’s asthma control level and allows the clinician to use drop-down menus to confirm current medications, as entered by the patient in the questionnaire. (C) Screen 2 provides a guideline-based recommendation for any required changes to baseline medications. (D) Screen 3 provides a guideline-based recommendation for “step-up” therapy for acute loss of asthma control (the action to take in the asthma action plan “yellow zone”). (E) Screen 4 presents the clinician with an auto-generated asthma action plan based on patient input in the questionnaire and the medications approved in screens 2 and 3. (F) Screen 5 confirms that the CCDSS has been completed and prompts the clinician to provide the patient with the asthma action plan, any new required prescriptions, educational resources, and a follow-up appointment.


Computerized clinical decision support system

As eAMS access was chart-based, the CCDSS was available for use by any clinician who saw any patient with asthma ≥16 years of age (patients occasionally see clinicians other than their most responsible physician [MRP] for urgent issues). Screen 1 of the CCDSS presented asthma control criteria (including a calculated control status) and medication use as indicated in the patient questionnaire. Clinicians were asked either to click to confirm or to update the patient’s current medications using drop-down menus. Screen 2 presented guideline-based recommendations for escalating or de-escalating therapy (based on asthma control level and medication[s] confirmed in screen 1), according to the Canadian Asthma Guidelines. Again, buttons allowed clinicians to confirm or reject recommendations altogether, and drop-down fields allowed clinicians to alter recommendations, including eliminating or adding new medications. Screen 3 presented evidence-based therapeutic adjustment recommendations for acute loss of asthma control (ie, the AAP “Yellow Zone”). Clinicians could accept recommendations with 1 click, choose to defer the AAP process and close the window immediately, or adjust recommended medication(s) using drop-down fields. Screen 4 displayed the auto-generated personalized AAP, including both patient-entered specifications from the questionnaire and all medications approved by the clinician in the CCDSS. Clinicians could make text edits and click to defer approval or approve and save the AAP. A concluding screen 5 provided a reminder to print and deliver the newly approved AAP, and to provide patients with any required prescriptions, a follow-up appointment, and a preprinted sticky note with a URL to a self-directed asthma education website. See Figure 1 for CCDSS screenshots. 
Upon closing the CCDSS, a chart note detailing all actions and required prescriptions was auto-written to the EMR and the AAP was saved as a chart document, from where it could be printed (patients with a personalized health record also automatically received an electronic copy). If the chart was opened by someone other than the MRP, the MRP received an automated email the next morning that described the patient’s control status and all CCDSS actions taken (if any), and prompted any remaining actions. CCDSS logic was developed through a review of asthma guidelines, systematic development of evidence-based rules for AAP auto-population, and application of the latest evidence to optimize the implementability of provided guidance. The AAP populated by the CCDSS was built through systematic evaluation of existing AAPs, multiple-stakeholder wiki-based collaborative editing, and usability optimization. A prototype CCDSS user interface was initially developed by content experts on the research team (including a primary care physician). This interface was then improved through serial feedback from the 3 primary care site leads. The OSCAR EMR integrations (EMR notification, automated chart note, automated email) were developed collaboratively with 1 site lead who was an OSCAR EMR expert and improved through serial feedback from the other 2 primary care site leads.
This process involved (1) acquiring and incorporating feedback on the content of each CDSS screen through serial iteration (through email and teleconferences), (2) acquiring and incorporating feedback on the format and usability of each CDSS screen by presenting users with low-fidelity prototypes that were then serially iterated (through in-person meetings), (3) developing each OSCAR EMR integration element (EMR notification [content wording]; pop-up size, location, and functionality; automated chart note [timing of release to chart, content wording]; automated email [triggers for release, timing of release, content wording]) collaboratively with a site lead and an OSCAR programmer (through in-person meetings), and (4) acquiring and incorporating feedback on each OSCAR EMR integration element from the other 2 primary care site leads through serial iteration (through email and teleconferences).

System implementation

We provided 2-4 orientation presentations to clinicians at each site (at established clinic rounds and meetings) before launch and 2-3 additional presentations at each site within 6-12 months after launch, owing to clinician turnover (particularly resident turnover at academic sites). One week before launch, all clinicians received a pamphlet explaining eAMS features and an email link to a study website including an online user guide, FAQs, a downloadable and printable brochure, and educational videos. This link was re-sent 4 months after launch. Clinicians were also emailed monthly usage statistics highlighting the user who created the most AAPs at each site. We encouraged clinicians to send study personnel any questions about the system and to report any glitches requiring repair.

Data collection and outcomes

We reviewed all clinic visits by medical doctors (MDs), residents, NPs, or physician assistants (PAs) during which an EMR notification prompting decision support action was delivered (ie, all ambulatory clinic outpatient visits involving a patient with asthma ≥16 years of age who had completed the questionnaire). The eAMS recorded all clinician actions in the CCDSS. Additional data were collected through an electronic chart audit. Data were entered in a standardized Excel database. The primary outcome was system usage, defined as (1) the proportion of visits in which the decision support was opened (first step of the CCDSS) and (2) the proportion of visits in which an AAP was produced (final CCDSS action). We abstracted the following variables, determined a priori, as possible usage predictors: clinic; visit time; presenting complaint type; appointment provider type (MD, NP, PA, or resident, and MRP vs non-MRP); whether the patient had an objective or documented clinical diagnosis of asthma, a prior emergency department visit, or hospitalization for asthma; and the patient’s current asthma control level according to questionnaire responses. We also modeled physician sex and years in practice. We measured the proportion of clinicians who used the CCDSS at least once, patients who had the CCDSS opened at least once, and patients on an asthma controller medication who received an AAP. We analyzed clinician behavior in each CCDSS screen and whether clinicians created new prescriptions when required. At the study’s conclusion, we invited clinicians to provide system feedback through an online survey including Likert-scale questions, open-ended questions, and the system usability scale (SUS).

Analysis

The probabilities of CCDSS opening and AAP production were modeled using generalized estimating equations to account for patients who had multiple visits during the study period. The included variables are those described above. After testing for univariable associations, we used a backward selection algorithm to determine the final multivariable models. All variables were entered into the model and removed stepwise until only variables significant at P < .05 remained. All P values are 2-sided and assessed at P < .05 unless otherwise stated. Clinician behavior in each CCDSS screen, feedback, and SUS scores are described through proportions and means. Statistical analyses were performed using R version 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria).
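As a concrete illustration of the univariable effect estimates reported in the Results, an unadjusted odds ratio with a Wald 95% CI can be computed from a 2 × 2 table. Note this is a naive sketch: the published estimates come from GEE models that account for repeated visits per patient, so they differ slightly from this calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR for a 2x2 table: a/b = events/non-events in the index
    group, c/d = events/non-events in the reference group."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
    lo = math.exp(math.log(or_) - z * se)   # lower 95% confidence limit
    hi = math.exp(math.log(or_) + z * se)   # upper 95% confidence limit
    return or_, lo, hi

# Site 2 (37 opened / 118 not opened) vs site 1 (80 opened / 536 not opened), from Table 1
or_, lo, hi = odds_ratio_ci(37, 118, 80, 536)
# ≈ 2.10 (1.36-3.25); Table 1's GEE-based estimate is 2.22 (1.37-3.58)
```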

RESULTS

We approached 37 physicians and 3 NPs to participate in our study, and successfully recruited (and received permission for chart analysis) from 18 of 37 (48.6%) physicians and 1 of 3 (33.3%) NPs. Among these 19 recruited clinicians, there were 490 patients with asthma who had eligible visits (visits in which an EMR notification prompting decision support action was delivered). Because patients are often seen by clinicians other than their MRP, these 490 patients were seen by 121 different clinicians during the study period: 42 MDs (31 [73.8%] women; in practice for 17.3 ± 12.2 [range, 2-52] years), 14 NPs, 3 PAs, and 62 residents, for a total of 1033 eligible visits. The median number of visits per patient was 1 (range, 1-13).

System usage

Clinicians opened the CCDSS in 205 of 1033 (19.8%) possible instances in which CCDSS recommendations were available, including 139 (67.8%) times on the day of the patient visit, and 20 (9.8%) times 1 day subsequent to the visit. Among these 1033 instances, an AAP was produced 121 (11.7%) times. During the study period, each clinician opened the CCDSS at least once, and 168 of 490 (34.3%) patients had their decision support opened at least once as part of their care. Figure 2 shows clinician progression and behavior in each CCDSS screen.
Figure 2.

Clinician flow through the computerized clinical decision support system (CCDSS) and behavior in each screen. aDenominator does not include the 4 instances in which the clinician clicked the “patient does not have asthma” button in screen 1, as the system closed the window immediately in these cases. bDenominator does not include the 6 instances in which the CCDSS was unable to make medication change recommendations. cDoes not count 5 visits in which the system could not generate an asthma action plan because the clinician had removed the recommended yellow zone controller medication, and the patient was not on a reliever medication. AAP: asthma action plan.

Among 116 visits in which the approved “green zone” medication (the “green zone” of the AAP describes the baseline stable state in which the patient’s asthma is under control) or “yellow zone” medication (the “yellow zone” describes a period of asthma worsening requiring a step-up in therapy) in the CCDSS required a new prescription, clinicians generated the prescription(s) in only 43 (37.1%) cases.
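The headline usage proportions in this section follow directly from the reported counts; a quick arithmetic check:

```python
# Usage counts reported in the Results section
opened, aap_made, visits = 205, 121, 1033
same_day = 139            # CCDSS openings on the day of the visit
patients_opened, patients = 168, 490

print(f"CCDSS opened:        {opened / visits:.1%}")            # 19.8%
print(f"AAP produced:        {aap_made / visits:.1%}")          # 11.7%
print(f"Opened on visit day: {same_day / opened:.1%}")          # 67.8%
print(f"Patients reached:    {patients_opened / patients:.1%}") # 34.3%
```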

Predictors of system usage

Univariate predictors of opening the CCDSS and producing an AAP are presented in Tables 1 and 2, respectively. After backward variable selection, the following predictors of opening the CCDSS remained in the multivariable model: clinic site (site 2 vs site 1 [odds ratio (OR), 2.28; 95% CI, 1.36-3.82]; site 3 vs site 1 [OR, 2.86; 95% CI, 1.95-4.22]), having a physician diagnosis of asthma (OR, 1.84; 95% CI, 1.18-2.86), and nature of presenting complaint (asthma vs nonrespiratory [OR, 3.21; 95% CI, 1.83-5.64]; respiratory [nonasthma] vs nonrespiratory [OR, 3.36; 95% CI, 2.17-5.20]). Similarly, multivariable model predictors of producing an AAP were clinic site (site 2 vs site 1 [OR, 3.41; 95% CI, 1.73-6.72]; site 3 vs site 1 [OR, 4.67; 95% CI, 2.90-7.52]), having a physician diagnosis of asthma (OR, 3.87; 95% CI, 1.95-7.71), and nature of presenting complaint (asthma vs nonrespiratory [OR, 4.15; 95% CI, 2.20-7.82]; respiratory [nonasthma] vs nonrespiratory [OR, 4.14; 95% CI, 2.48-6.89]).
Table 1.

Univariable predictors of clinicians opening the CCDSS

Variable | Did not open CCDSS (n = 828 visits) | Opened CCDSS (n = 205 visits) | Odds ratio (95% CI) | P value
Primary care clinic
 Site 1 | 536 (87.0) | 80 (13.0) | Reference
 Site 2 | 118 (76.1) | 37 (23.9) | 2.22 (1.37-3.58) | .001
 Site 3 | 174 (66.4) | 88 (33.6) | 3.49 (2.36-5.14) | <.001
Appointment provider type
 Physician | 389 (76.4) | 120 (23.6) | Reference
 Nurse practitioner | 63 (71.6) | 25 (28.4) | 1.26 (0.74-2.12) | .40
 Physician assistant | 35 (70.0) | 15 (30.0) | 1.30 (0.68-2.48) | .43
 Resident | 341 (88.3) | 45 (11.7) | 0.46 (0.32-0.65) | <.001
Objective diagnosis of asthma
 Yes | 124 (78.5) | 34 (21.5) | Reference
 No | 704 (80.5) | 171 (19.5) | 0.85 (0.53-1.37) | .52
Physician diagnosis of asthma
 Yes | 598 (77.8) | 171 (22.2) | Reference
 No | 230 (87.1) | 34 (12.9) | 0.53 (0.35-0.80) | .003
Presenting complaint
 Nonrespiratory | 711 (85.6) | 120 (14.4) | Reference
 Asthma | 39 (56.5) | 30 (43.5) | 4.15 (2.34-7.34) | <.001
 Respiratory (nonasthma) | 78 (58.6) | 55 (41.4) | 3.97 (2.59-6.08) | <.001
Years in practice^a | 14.5 (2-33) | 16 (2-52) | 1.00 (0.98-1.02) | .78
Appointment provider sex^a
 Female | 263 (74.5) | 90 (25.5) | Reference
 Male | 126 (80.8) | 30 (19.2) | 0.72 (0.44-1.17) | .18
Previous ED visit or hospitalization for asthma
 Yes | 57 (74.0) | 20 (26.0) | Reference
 No | 771 (80.6) | 185 (19.4) | 0.68 (0.37-1.23) | .21
Seen by the MRP
 Yes | 272 (76.2) | 85 (23.8) | Reference
 No | 556 (82.2) | 120 (17.8) | 0.72 (0.54-0.98) | .035
Asthma controlled^b
 Yes | 342 (83.8) | 66 (16.2) | Reference
 No | 478 (77.9) | 136 (22.1) | 1.46 (1.04-2.04) | .029
Time of visit^c
 After regular hours | 17 (56.7) | 13 (43.3) | Reference
 During regular hours | 157 (67.7) | 75 (32.3) | 0.69 (0.29-1.63) | .40

Values are n (%) or mean (range), unless otherwise indicated.

CCDSS: computerized clinical decision support system; CI: confidence interval; ED: emergency department; MRP: most responsible physician.

^a Includes physicians only.
^b Asthma control status was missing in 11 visits.
^c Includes site 3 only.

Table 2.

Univariable predictors of clinicians producing an AAP

Variable | No AAP made (n = 912 visits) | AAP made (n = 121 visits) | Odds ratio (95% CI) | P value
Primary care clinic
 Site 1 | 582 (94.5) | 34 (5.5) | Reference
 Site 2 | 132 (85.2) | 23 (14.8) | 3.09 (1.67-5.72) | <.001
 Site 3 | 198 (75.6) | 64 (24.4) | 5.68 (3.51-9.18) | <.001
Appointment provider type
 Physician | 445 (87.4) | 64 (12.6) | Reference
 Nurse practitioner | 67 (76.1) | 21 (23.9) | 2.08 (1.16-3.70) | .013
 Physician assistant | 41 (82.0) | 9 (18.0) | 1.45 (0.62-3.41) | .39
 Resident | 359 (93.0) | 27 (7.0) | 0.55 (0.35-0.87) | .01
Objective diagnosis of asthma
 Yes | 137 (86.7) | 21 (13.3) | Reference
 No | 775 (88.6) | 100 (11.4) | 0.82 (0.48-1.39) | .46
Physician diagnosis of asthma
 Yes | 659 (85.7) | 110 (14.3) | Reference
 No | 253 (95.8) | 11 (4.2) | 0.26 (0.13-0.50) | <.001
Presenting complaint
 Nonrespiratory | 772 (92.9) | 59 (7.1) | Reference
 Asthma | 46 (66.7) | 23 (33.3) | 6.00 (3.11-11.60) | <.001
 Respiratory (nonasthma) | 94 (70.7) | 39 (29.3) | 5.17 (3.15-8.49) | <.001
Years in practice^a | 13 (2-33) | 16 (2-52) | 0.99 (0.96-1.01) | .123
Appointment provider sex^a
 Female | 305 (86.4) | 48 (13.6) | Reference
 Male | 140 (89.7) | 16 (10.3) | 0.72 (0.39-1.36) | .32
Previous ED visit or hospitalization for asthma
 Yes | 69 (89.6) | 8 (10.4) | Reference
 No | 843 (88.2) | 113 (11.8) | 1.00 (0.44-2.22) | .996
Seen by the MRP
 Yes | 310 (86.8) | 47 (13.2) | Reference
 No | 602 (89.1) | 74 (10.9) | 0.83 (0.56-1.20) | .31
Asthma controlled^b
 Yes | 377 (92.4) | 31 (7.6) | Reference
 No | 524 (85.3) | 90 (14.7) | 1.98 (1.25-3.16) | .004
Time of visit^c
 After regular hours | 26 (86.7) | 4 (13.3) | Reference
 During regular hours | 172 (74.1) | 60 (25.9) | 2.23 (0.75-6.67) | .15

Values are n (%) or mean (range), unless otherwise indicated.

AAP: asthma action plan; CI: confidence interval; ED: emergency department; MRP: most responsible physician.

^a Includes physicians only.
^b Asthma control status was missing in 11 visits.
^c Includes site 3 only.


Clinician feedback

We received feedback questionnaires from 12 of 19 (63.2%) consenting clinicians. The SUS score was 66.3 ± 16.5 (maximum score 100). Likert-type scale question responses are presented in Figure 3.
Figure 3.

Clinician user feedback on the Electronic Asthma Management System (eAMS) (n = 12). ^a Only 11 users provided a response to this question. AAP: asthma action plan.

In open-ended questions, the most useful reported features included medication images for patients in the questionnaire, integration of CCDSS notifications within the EMR, the opportunity to clarify medication adherence with patients, medication-related decision support, and the auto-generated AAP. Clinicians also reported that the system enabled them to identify patients with previously unrecognized poor asthma control (8 of 12 [66.7%] clinicians) and nonadherence to asthma medications (11 of 12 [91.7%] clinicians). Reported barriers to system use included inability to access the system for newly diagnosed patients, process issues related to the patient questionnaire (tablets running out of power, failure of clinic staff to provide patients with the questionnaire, insufficient tablet devices for patients in clinic), futility of CCDSS recommendations in patients who could not afford controller medications, and insufficient time to address the CCDSS (particularly if patients presented with nonrespiratory complaints that needed attention). Only 5 of 12 (41.7%) physicians reported scheduling an appointment specifically to address CCDSS recommendations.

DISCUSSION

Our complex asthma CCDSS was opened in only 20% of visits, with the final step, AAP approval, completed in only 12% of visits. We identified predictors of system usage at the clinic, patient, and visit levels. CCDSSs are considered a promising strategy to improve guideline adherence and care quality by bridging evidence-to-practice gaps. However, impacts on clinician behavior change have generally been modest, with poor system uptake being a major impediment to realizing purported benefits, particularly in primary care. Reported uptake of CCDSSs addressing asthma has been similarly disappointing. For example, Kuilboer et al reported physician uptake of an asthma management critiquing system in only 7.3% of visits, and Tamblyn et al reported physicians accessing a comprehensive asthma CCDSS in only 10.3% of visits. Authors of a systematic review of asthma CCDSSs concluded that “current CCDSSs are unlikely to result in improved outcomes in asthma because they are rarely used.”

Our multivariable model identified 3 predictors of both initial system access and completion. The first was clinic site. Although the implementation approach and system functionality were identical across clinics, variation in clinician workflows, priorities, and system perceptions may have influenced uptake. Baseline variations in asthma care quality were also demonstrated across these clinics, likely for similar reasons. A documented clinician diagnosis of asthma in the chart also predicted higher system use. This is unsurprising, as clinicians likely documented asthma in cases that were more active or “clinically relevant.” Finally, usage was higher when patients presented with respiratory complaints, and particularly when asthma was the primary complaint. Again, this is unsurprising, since such patients would receive immediate benefit from decision support, and respiratory issues were central to the visit rather than an additional burden.
This finding is compatible with that of Tamblyn et al, whose asthma CCDSS was accessed in 39.5% of visits for “out-of-control” asthma, compared with 5.3% of visits for “in-control” asthma. Several authors have evaluated factors predicting CCDSS success. Contextualizing our findings, including quantitative predictors and user feedback, into an established framework for CCDSS success can provide further insight into possible causes of suboptimal usage, and corresponding solutions. The recently released GUIDES checklist was designed to optimize the chances of successful CCDSS implementation, and was developed through a comprehensive systematic review of the literature, expert panel review, and user consultation. This tool divides key components into: an enabling context; appropriate content; an effective system; and effective implementation. Although the eAMS fulfills most GUIDES elements (Supplementary Table 1), it is worth considering how our findings correlate with checklist elements, particularly focusing on areas that were lacking. This can provide guideposts for future system improvements, and the process itself may serve as a useful example for other eHealth tool developers.

Context

One contextual factor in the GUIDES checklist is whether stakeholders and users accept the CCDSS. This requires a “clear benefit to the users.” This also aligns with the HOT-fit (Human, Organization, and Technology–Fit) framework for evaluating health information systems, which considers the interconnected elements pertaining to human (users), organization (healthcare environment), and technology factors that ultimately lead to net benefits or overall impact of the technology. In particular, a user-perceived benefit aligns with “perceived usefulness” (a “user satisfaction” element within the human factor) and “relevance and usefulness” (“information quality” elements within the technology factor), which contribute to system success. Our system automatically calculated and displayed asthma control, provided medication change recommendations, and auto-generated an AAP, all of which are considered fundamental asthma management practices, directly improve patient-level outcomes, and have been shown to be poorly executed across settings, including our study sites. However, evidence and guideline recommendations do not necessarily equate with clinician perception of valuable care. Although decision support was easy to understand (Figure 3), a third to half of respondents did not agree with statements about the usefulness of system outputs: asthma control guidance, medication recommendations, and the AAP (Figure 3). Although current gaps in, and evidence for, the impact of these practices were mentioned in system launch presentations, this area will require more emphasis to drive user acceptance. Furthermore, as capitated funding models provide a relatively fixed base fee per patient regardless of the number of visits, and AAPs reduce unscheduled outpatient visits, it may be advantageous to make this possible financial advantage of system use explicit. An additional incentive could be providing continuing professional development credits for system use.
Broader sociotechnical factors are also important contextual influences on uptake. For example, in an interdisciplinary review that leveraged lessons from outside of health care (basic sciences, social sciences, humanities, engineering, business, and defense) to identify strategies for successful CCDSS implementation, Wu et al identified the broad importance of organizational culture on implementation success. These organizational and cultural factors are also noted in Sittig and Singh’s model describing sociotechnical factors affecting health information technologies, which also includes, but is not limited to, “human computer interface” elements that consider users’ comfort with the hardware and software, and “people” elements ranging from decisions on where to place hardware to user familiarity with the technology. Baxter and Sommerville’s analysis of sociotechnical approaches to system development argues that sociotechnical factors may also exist between users and eHealth developers. They suggested that transparency and user education about the construction and limitations of system design may improve user perception and appreciation of the available technology. Overall, these perspectives provide valuable opportunities for improving the “context” in which a CCDSS such as ours is deployed, thereby driving uptake. An important component of understanding and adapting to the user’s context is understanding the user’s workflow. Given the complexity of any clinical environment, a formal workflow analysis would likely have been beneficial, as was suggested by Ross et al in their review of factors influencing eHealth implementation, which found that eHealth systems with a good perceived fit into preexisting workflows tend to see higher use.
Accordingly, future rollouts of the eAMS should be preceded by a workflow analysis, ideally through a contextual design approach that involves interviewing and assessing users while they perform work-related tasks and using these findings to create a work model that integrates into the user’s and organization’s workflow model.

Content

For content to be trustworthy, “advice must be supported by up-to-date scientific evidence.” Indeed, our system utilized a validated patient questionnaire, medication logic rules built through a review of existing international asthma guidelines, AAP population rules developed through a systematic review, and an AAP developed through a systematic analysis of existing AAPs and a multistakeholder collaborative editing process including usability optimization. However, the GUIDES checklist also notes that the “type and quality of this evidence” must be clear to users. Although we indicated that all recommendations were based on latest guidelines and were transparent when there was insufficient evidence to provide decision support, we did not elaborate on how decision support algorithms were developed and validated, nor did we provide direct links to relevant guideline sections or supporting evidence. We noted that clinicians disagreed with recommendations for baseline medication changes (screen 2) in about 30% of cases. Although other factors such as patient adherence or medication affordability may have played a role, our failure to provide links to evidence may have reduced clinician trust in the eAMS guidance. Such links can easily be added to enhance system content. Furthermore, we failed to explicitly state expected patient-level benefits and possible harms of inhaled corticosteroid medications in clinician guidance, whereas clear statements of “benefits and harms of different management options” are another GUIDES determinant of trustworthiness. This could be addressed by integrating an existing asthma medication patient decision aid within the system, which would also address patient-level adherence barriers.
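One lightweight way to make the “type and quality of this evidence” visible is to carry evidence provenance inside each recommendation record, so the interface can render a “why?” link beside every suggestion. The record structure, field names, and example values below are hypothetical illustrations, not the eAMS's actual data model.

```python
from dataclasses import dataclass

# Hypothetical recommendation record bundling advice with its evidence
# provenance (illustrative only; field names and values are assumptions).
@dataclass(frozen=True)
class Recommendation:
    advice: str             # text shown to the clinician
    evidence_grade: str     # guideline-assigned evidence level
    guideline_section: str  # citation to the relevant guideline section
    evidence_url: str       # deep link to the supporting evidence

rec = Recommendation(
    advice="Step up to low-dose ICS/LABA combination",
    evidence_grade="Evidence A",
    guideline_section="GINA stepwise treatment figure, Step 3",
    evidence_url="https://example.org/guideline#step-3",  # placeholder link
)
```

Surfacing the grade and link at the moment of disagreement may help address the roughly 30% of cases in which clinicians rejected baseline medication-change recommendations.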
Another important content-related factor is whether the “amount of decision support [is] manageable.” Although our system featured only 5 screens (requiring as few as 4 clicks to view and accept all decision support elements) and an automated chart note for documentation, a third of users indicated that the decision support process was too time consuming (Figure 3). Indeed, even when CCDSSs improve care, time constraints remain a barrier. We attempted to address clinician time constraints by collecting data directly from patients, before the physician interaction. However, considering that the CCDSS was opened after the patient appointment in 34% of cases, even clinicians who valued the guidance often lacked the time to complete the process during the patient appointment. Given challenges with contacting patients for therapeutic changes or AAP provision by telephone, it is unlikely that most patients derived a timely benefit from clinician CCDSS activities occurring after their visit. This suggests a need to increase the perceived value for time investment, which will require a stronger case for the intervention (as argued above), a reduced time commitment, or both.

System

Ease of system use is a key determinant of uptake. It should be “easy for users to interact with the [CCDSS] system,” which can be facilitated if the system can be “customised to provide better user support.” Other reviews have also supported user-customization options to improve decision support performance. Correspondingly, the system must also be delivered with an “eye-catching, intuitive, concise, consistent and unambiguous display.” Questionnaire data (Figure 3) suggest positive overall user attitudes towards our system, including feedback that the system was easily accessible from the EMR. However, the system usability scale (SUS) score of 66 suggests potential for usability optimization. The SUS measures a system’s effectiveness (ability to complete tasks), efficiency, and user satisfaction. Our score was just below a mean score of 68 across 500 web-based systems (percentile rank of approximately 45%), suggesting that a majority of other reported systems had a higher perceived usability. This score corresponds to an adjective rating between “OK” and “good” and represents a marginal overall acceptability rating. Furthermore, the large standard deviation in SUS ratings (16.5) suggests variable user preferences, supporting a need to develop customizability features to drive perceived usability. For example, the temporal frequency of system notifications and email reminders could be individualized according to clinician preferences. In analyzing user workflows through our system (Figure 2), there was gradual attrition at each screen, with no clear drop-off at any particular screen to suggest a screen-specific technical, workflow, comprehension, or agreement issue. Although we engaged end users in system design, a detailed usability study with optimization was likely required.
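For readers unfamiliar with how the SUS score above is derived: the scale comprises 10 alternating-tone items rated 1-5, where odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the summed contributions are multiplied by 2.5 to yield a 0-100 score. A minimal sketch of this standard scoring rule:

```python
def sus_score(responses: list[int]) -> float:
    """Compute a System Usability Scale score from 10 item responses (1-5).

    Odd-numbered items are positively worded and contribute (response - 1);
    even-numbered items are negatively worded and contribute (5 - response);
    the total is scaled by 2.5 to give a 0-100 score.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires 10 responses, each in 1..5")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A respondent who answers 3 (neutral) on every item scores exactly 50.
```

Note that the score is a composite usability measure, not a percentage; hence the need for the normative comparison (mean of 68, percentile rank of approximately 45%) cited above.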
Our approach entailed collaborative and iterative development of CCDSS content and functionality with 3 primary care site leads, but may have benefitted from both a broader sampling and a more systematic feedback approach to elicit user needs. This could have been achieved through a rapid-cycle design process with serial clinician focus groups, as was undertaken for the patient-facing questionnaire. Furthermore, although we sought informal feedback on barriers and enablers to system use in this study, it would have been more valuable to design the intervention trial as a mixed-methods study with formal, periodic clinician user interviews soliciting feedback on content, design, and workflow; serial system adjustments in response to this feedback; and a final summative qualitative evaluation of user preferences. Another system criterion is “reaching the targeted users.” Our system was unique in that once a clinician was recruited, all patients for whom they were the most responsible provider (MRP) had eAMS functions embedded in their charts. Accordingly, the CCDSS was available to any clinician who saw an included patient. The advantage of this real-world approach is that it measures outcomes in all users rather than in a biased subset of motivated users. Although the potential disadvantage is that nonrecruited clinicians (with less buy-in) contributed to system uptake and outcome data, being seen by the MRP did not significantly predict uptake in quantitative models (Tables 1 and 2). A system should also provide decision support “at the right time” or “moment of need.” This aligns with findings in Kawamoto et al’s review of determinants of successful CCDSSs, and is one of Bates et al’s “ten commandments” for CCDSS design. Guidelines recommend assessment of asthma control at each clinical visit; medication adjustments must be made according to current control status; and a new AAP must accompany any medication adjustments.
Accordingly, the eAMS was designed to provide these decision support elements to patients with asthma at each clinical visit, regardless of the reason for the visit. However, clinicians were much more likely to access and complete the CCDSS during visits with respiratory, and particularly asthma-related, complaints (Tables 1 and 2). Thus, some clinicians may have considered the “moment of need” to be limited to visits during which respiratory-related care decisions were required. In our outcomes analysis, only about a third of patients presented with a respiratory complaint at least once during the intervention period, suggesting that a strategy limiting provision of decision support to respiratory-related visits would severely limit the scope and impact of the intervention. A possible alternative would be to encourage clinicians to book additional visits exclusively for CCDSS actions. Authors have also advocated for specific design approaches to drive uptake. In a seminal meta-regression of 162 randomized trials of CCDSSs, Roshanov et al found that systems featuring a built-in requirement for clinicians to provide a reason for ignoring or overriding advice were more likely to succeed than systems without this feature. Our eAMS system notifications could be adjusted to require reasoning for dismissals, but given user frustrations with a growing number of pop-ups and alerts, design and operationalization of any such functionality would require careful input from end users.
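The override-with-reason feature associated with CCDSS success in Roshanov et al's meta-regression could be operationalized with a simple guard that refuses a dismissal lacking a structured reason. The following is a hypothetical sketch, not existing eAMS functionality; the reason categories and function name are assumptions.

```python
from datetime import datetime, timezone

# Hypothetical structured reasons a clinician might select when
# dismissing CCDSS advice (illustrative categories only).
VALID_REASONS = {"patient declined", "contraindication", "cost/coverage",
                 "clinical judgment", "other"}

def dismiss_advice(advice_id: str, reason: str, free_text: str = "") -> dict:
    """Record an advice dismissal; refuse if no structured reason is given."""
    if reason not in VALID_REASONS:
        raise ValueError(f"A structured override reason is required; got {reason!r}")
    if reason == "other" and not free_text.strip():
        raise ValueError("Free-text explanation required when reason is 'other'")
    return {
        "advice_id": advice_id,
        "reason": reason,
        "free_text": free_text,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
```

Logged dismissal reasons would also yield the very feedback data (content, design, and workflow barriers) that the qualitative evaluation discussed above was meant to capture.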

Implementation

Successful CCDSS implementation requires efforts to identify and address “other barriers and facilitators to compliance with the decision support advice.” Here, the GUIDES checklist lists the following: assessing and addressing user “beliefs, attitudes and skills,” “professional interactions,” possible “(dis)incentives,” “capacity and resource” issues, and “organisational context” influencing system uptake and adherence. These behavioral predictors are represented in the Theoretical Domains Framework, an integrative framework of 14 theoretical domains derived from 33 validated health and social psychology theories and 128 constructs explaining health-related behavior change. Accordingly, we have completed and are now analyzing user interviews to identify which Theoretical Domains Framework domains are relevant for questionnaire completion (patients) and CCDSS usage (clinicians). After this, we plan to map any modifiable barriers and enablers to corresponding evidence-based behavior change techniques (based on an existing Behaviour Change Matrix), and to redesign the eAMS, and its implementation, accordingly.

Additional findings

Two other findings are noteworthy. First, we attempted to include a patient-mediated knowledge translation strategy, a questionnaire-embedded prompt for patients to ask clinicians for an AAP. Few studies have assessed the impact of patient prompts on clinician behavior; whether and how such a strategy could drive CCDSS use requires further study. Second, although our analysis prioritized CCDSS usage, we also found that once clinicians exited the CCDSS, they completed the new prescriptions required by CCDSS actions in less than half of cases. This worrisome finding suggests that coupling automated decision support with actions requiring additional time and conventional manual workflows creates risk and should be avoided. We will seek to automate CCDSS prescriptions in the next eAMS iteration.

Strengths and limitations

Our study’s strengths include a detailed characterization of user interactions with the CCDSS (captured directly in a system log), a real-world sample including diverse health practitioner types in academic and community settings, and our novel quantitative models determining predictors of CCDSS usage. We also note that despite its suboptimal uptake, the eAMS did significantly improve real-world asthma care in our interrupted time series analysis (a 30.5% absolute increase in physician visits in which an AAP was delivered [P < .0001], and an adjusted OR of 8.62 [95% CI, 5.14-12.45] for assessment of asthma control level during the clinical visit). Accordingly, conclusions reached herein regarding strategies to drive uptake would be expected to have a potent effect on the system’s clinical impact. Weaknesses include that user feedback was received from only 63% of consented users, introducing the possibility of feedback sampling bias. We were also unable to solicit feedback from other (nonstudy) clinicians who used the system. These small numbers also prevented us from linking individual feedback questionnaire responses to individual behavioral data, which would have facilitated a better understanding of barriers and enablers of system usage.

CONCLUSION

In conclusion, we present usage metrics from a study of the eAMS CCDSS, along with quantitative predictors of uptake and user feedback data, which we triangulated with established predictors of CCDSS success in the GUIDES checklist. This exercise enabled us to determine the areas of highest yield for intervention improvement, and we believe that both this technique and the specific lessons learned are broadly relevant to the design and implementation of CCDSSs across diseases. As CCDSSs become ubiquitous, such efforts to improve system uptake will be required to realize their full impact on quality of care and patient outcomes.

FUNDING

This work was supported by the Keenan Research Summer Student Program, the Michael Locke Chair in Knowledge Translation and Rare Lung Disease Research, the Canadian Respiratory Research Network, and the Canadian Institutes of Health Research. Funders had no role in the design of the study, nor in the collection, analysis, or interpretation of data, nor in manuscript preparation.

AUTHOR CONTRIBUTIONS

JLSC prepared and analyzed user uptake and feedback data, prepared the figures and tables, and was a major contributor in manuscript preparation. NP analyzed user uptake and feedback data and contributed to reviewing the manuscript. CP prepared and analyzed the user uptake data and contributed to reviewing the manuscript. JS performed the statistical analyses and contributed to reviewing the manuscript. SG conceived the study and the analyses, provided oversight and statistical analysis review, and was a major contributor in manuscript preparation and review. Each author read and approved the final manuscript.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.
REFERENCES (first 10 of 50 listed)

1. Garg AX, Adhikari NKJ, McDonald H, et al. Effects of computerized clinical decision support systems on practitioner performance and patient outcomes: a systematic review. JAMA. 2005.
2. van der Sijs H, Aarts J, Vulto A, Berg M. Overriding of drug safety alerts in computerized physician order entry. J Am Med Inform Assoc. 2005.
3. Xi N, Wallace R, Agarwal G, Chan D, Gershon A, Gupta S. Identifying patients with asthma in primary care electronic medical record systems: chart analysis-based electronic algorithm validation study. Can Fam Physician. 2015.
4. Roshanov PS, Fernandes N, Wilczynski JM, et al. Features of effective computerised clinical decision support systems: meta-regression of 162 randomised trials. BMJ. 2013.
5. Boulet LP, Bourbeau J, Skomro R, Gupta S. Major care gaps in asthma, sleep and chronic obstructive pulmonary disease: a road map for knowledge translation. Can Respir J. 2013.
6. Canonica GW, Baena-Cagnani CE, Blaiss MS, Dahl R, Kaliner MA, Valovirta EJ. Unmet needs in asthma: Global Asthma Physician and Patient (GAPP) Survey: global adult findings. Allergy. 2007.
7. Cooper V, Metcalf L, Versnel J, Upton J, Walker S, Horne R. Patient-reported side effects, concerns and adherence to corticosteroid treatment for asthma, and comparison with physician estimates of side-effect prevalence: a UK-wide, cross-sectional study. NPJ Prim Care Respir Med. 2015.
8. Murphy EV. Clinical decision support: effectiveness in improving quality processes and clinical outcomes and factors that may influence success. Yale J Biol Med. 2014.
9. Matui P, Wyatt JC, Pinnock H, Sheikh A, McLean S. Computer decision support systems for asthma: a systematic review. NPJ Prim Care Respir Med. 2014.
10. Ross J, Stevenson F, Lau R, Murray E. Factors that influence the implementation of e-health: a systematic review of systematic reviews (an update). Implement Sci. 2016.