OBJECTIVE: To develop and apply a systematic approach to identifying and defining valid, relevant, and feasible measures of emergency department (ED) clinical performance.

METHODS: An extensive literature review was conducted to identify clinical conditions frequently treated in most EDs and clinically relevant outcomes with which to evaluate these conditions. Based on this review, a set of condition-outcome pairs was defined. An expert panel was convened, and a modified Delphi process was used to identify the specific condition-outcome pairs for which the panel felt there was a link between quality of care for the condition and the outcome. Next, for highly rated condition-outcome pairs, specific measurable indicators were identified in the literature. The panelists rated these indicators on their relevance to ED performance and their need for risk adjustment. The feasibility of calculating these indicators was determined by applying them to a routinely collected data set.

RESULTS: Thirteen clinical conditions and eight quality-of-care outcomes (mortality, morbidity, admissions, recurrent visits, follow-up with primary care, length of stay, diagnostics, and resource use) were identified from the literature, yielding 104 condition-outcome pairs. The panel selected 21 of these pairs, representing eight of the 13 clinical conditions. The panel then selected 29 specific clinical indicators, representing the selected condition-outcome pairs, to measure ED performance. Eight of these indicators, covering five clinical conditions, could be calculated using a routinely collected data set.

CONCLUSIONS: Using a modified Delphi process, it was possible to identify a series of condition-outcome pairs that panelists felt were potentially related to ED quality of care, and then to define specific indicators for many of these pairs. Some indicators could be measured using an existing data set. The development of sound clinical performance indicators for the ED is possible, but the feasibility of measuring them will depend on the availability and accessibility of high-quality data.
Authors: M Patrice Lindsay; Moira K Kapral; David Gladstone; Robert Holloway; Jack V Tu; Andreas Laupacis; Jeremy M Grimshaw Journal: CMAJ Date: 2005-02-01 Impact factor: 8.262
Authors: Chaim M Bell; Stacey S Brener; Rebecca Comrie; Geoffrey M Anderson; Susan E Bronskill Journal: Drugs Aging Date: 2012-04-01 Impact factor: 3.923
Authors: Renee Y Hsia; Steven M Asch; Robert E Weiss; David Zingmond; Gelareh Gabayan; Li-Jung Liang; Weijuan Han; Heather McCreath; Benjamin C Sun Journal: Med Care Date: 2013-11 Impact factor: 2.983
Authors: Richard T Griffey; Sarah K Kennedy; Lucy D'Agostino McGowan; Melody Goodman; Kimberly A Kaphingst Journal: Acad Emerg Med Date: 2014-10 Impact factor: 3.451
Authors: Diem Tran; Linda McGillis Hall; Aileen Davis; Michel D Landry; Dawn Burnett; Katherine Berg; Susan Jaglal Journal: BMC Health Serv Res Date: 2008-12-09 Impact factor: 2.655