Michael A Heenan, Glen E Randall, Jenna M Evans.
Abstract
Objective: Health care organizations monitor hundreds of performance indicators. It is unclear what processes and criteria organizations use to identify the indicators they use, who is involved in these processes, how performance targets are set, and what the impacts of these processes are. The purpose of this study is to synthesize international approaches to indicator selection and develop a standardized process framework.
Keywords: hospitals; performance indicators; performance measurement; process framework; quality; targets
Year: 2022 PMID: 35478929 PMCID: PMC9038160 DOI: 10.2147/RMHP.S357561
Source DB: PubMed Journal: Risk Manag Healthc Policy ISSN: 1179-1594
Figure 1: The flow of study identification and selection according to PRISMA-ScR guidelines.
Peer Reviewed and Grey Literature by Country
| Country | Peer Reviewed Literature | Grey Literature |
|---|---|---|
| Australia | 3 | 0 |
| Canada | 8 | 3 |
| France | 2 | 0 |
| Germany | 3 | 0 |
| Netherlands | 3 | 0 |
| New Zealand | 1 | 1 |
| Norway | 0 | 0 |
| Sweden | 0 | 0 |
| Switzerland | 1 | 0 |
| United Kingdom | 1 | 3 |
| United States | 11 | 4 |
| Total | 33 | 11 |
Peer Reviewed and Grey Literature by Field of Study
| Acute Care Clinical Area | Peer Reviewed Literature | Grey Literature |
|---|---|---|
| Cancer | 4 | 0 |
| Cardiology | 4 | 0 |
| Critical Care | 1 | 0 |
| Emergency Care | 2 | 0 |
| Geriatrics | 1 | 0 |
| Hospital or Health Systems | 6 | 11 |
| Infection Control | 4 | 0 |
| Maternity | 2 | 0 |
| Mental Health | 1 | 0 |
| Patient Safety | 2 | 0 |
| Pediatrics | 3 | 0 |
| Surgery | 3 | 0 |
| Total | 33 | 11 |
Scoping Review Peer-Reviewed Literature Summary
| First Author | Year | Jurisdiction | Field of Study | Clinical Quality Indicators | Business-Based Indicators | Target Setting | Consensus Method Used | Article Summary |
|---|---|---|---|---|---|---|---|---|
| Aktaa | 2020 | UK | Cardiology | Yes | No | No | Not Applicable | Paper proposes a 4-step process for KPI selection in cardiology, including identification of domains of care by constructing a conceptual framework; construction of candidate QIs via a systematic review of the literature; selection of a final set of QIs by obtaining expert opinions using the modified-Delphi method; and validation. Paper noted that expert panels have inherent bias; therefore, expanding the participant pool is an important mitigation. |
| Bianchi | 2013 | Switzerland | Cancer | Yes | No | No | Modified-Delphi | Colorectal Cancer Quality Indicator (QI) selection process governed by an expert panel identified 27 QIs from an original list of 149. QIs were rated using a Likert scale and within clinical categories that followed the care continuum. Validation of the final QI set was led by an academic researcher. Noted limitation of physician-only panel. Offers a template for indicator definition sheets. |
| Bramesfeld | 2015 | Germany | Infection Prevention and Control | Yes | No | No | Modified-Delphi | Study identified 32 indicators for measuring the prevention and management of Catheter Related Blood Stream Infections. Process considered relevance and feasibility criteria. Panelists participated in a pre-survey workshop. QIs were classified as process, outcome or structural. Likert scale was used to rate QIs. |
| Casey | 2013 | USA | Hospital System | Yes | No | No | Modified-Delphi | Paper summarizes a panel process that examined the relevance of nationally reportable indicators to rural hospitals. Process included an expert panel that voted on the indicators to give rural hospitals direction on which indicators are best used and how they align to national indicator reporting. Categorized the indicators into clinical categories; voting was noted but scale not described. |
| Chrusch | 2016 | Canada | Critical Care | Yes | No | Yes | Nominal Group Technique | Paper describes a multiple case study in which conferences were held to have experts select indicators for comparing ICU performance. Organizations test indicators and report back on how they were used and the data results. Results identified 22 ICU indicators. Validation of indicators conducted. |
| Elliot | 2018 | Australia | Hospital System | Yes | Yes | No | Modified-Delphi | Paper describes a 5-step process used to systematically select 20 indicators to monitor a hospital strategic plan. 725 indicators were narrowed down to 110 by staff. Executives selected 20 clinical and business indicators. Five phases: (1) identification of potential indicators; (2) consolidation into a pragmatic set; (3) analysis of potential indicators against criteria; (4) mapping indicators to strategic plan; (5) key stakeholder presentation. |
| Emond | 2015 | Netherlands | Surgery | Yes | No | No | Modified-Delphi | Article describes a process that selected patient safety indicators in surgery. Process was governed by steering committee and expert panel of hospital leaders. 11 indicators were selected and validated in 8 hospitals. Patients and managers were on the panel. |
| Fekri | 2017 | Canada | Hospital System | Yes | No | No | Modified-Delphi | Paper describes process used to select a national set of indicators. Technical group narrowed first set of metrics via quantitative survey followed by a consensus conference of end-users. 37 of 56 indicators were selected. Process included clear guiding principles. |
| Goldfarb | 2018 | USA | Cardiology | Yes | No | No | Modified-Delphi | Systematic review of cardiology quality indicators was completed ahead of an international expert panel survey. Fifteen QIs were selected from an original list of 108, using a Likert scale. QIs were categorized as process, outcome or structural. Expert panel consisted of only physicians. |
| Grace | 2014 | Canada | Cardiology | Yes | No | No | Modified-Delphi | Study identified quality indicators in cardiac rehabilitation. Process has three stages including ratings by working groups and validation of final QIs by stakeholders. Process resulted in a final list of 5 QIs from a list of 37. Qualitative and quantitative validation of QIs was completed. |
| Gurvitz | 2013 | USA | Cardiology | Yes | No | No | Modified-Delphi | Paper describes an indicator selection process aimed at monitoring quality improvement for adults with congenital heart disease (ACHD). Expert panel included only physicians. 55 of 61 indicators were selected based on literature review and clinical guidelines. Indicators were not independently validated. |
| Guth | 2016 | USA | Patient Safety | Yes | No | No | Kepner-Tregoe Decision Analysis | Case study report on process used to select indicators for a hospital quality scorecard. Governing committee and working groups, narrowed 750 indicators to 25. Process included metric collection; harm evaluation; metric viability; ability to implement; categorizing metrics; assess impact; and risk assessment. |
| Mangione-Smith | 2011 | USA | Pediatrics | Yes | No | Yes | Modified-Delphi | Paper summarizes a process that selected quality indicators for a health insurance program. Voting on a Likert scale resulted in 25 of 199 indicators being chosen. Noted field testing was needed to set targets. |
| Martinez | 2018 | USA | Hospital System | Yes | No | No | Participatory Design Approach | Article describes how a hospital prioritized metrics for an electronic dashboard. Resulted in 10 indicators mapped to the Donabedian framework of process, outcome, and structure. Process asked end-users about barriers to using indicators. Noted that different audiences need different indicators. |
| Mazzone | 2014 | USA | Cancer | Yes | No | No | Modified-Delphi | Panel of physicians selected Quality Indicators (QIs) to evaluate lung cancer processes of care. Narrowed original list of 18 QIs to 7. Assessed indicators using clearly defined criteria. Validity included testing QIs in 3 organizations. Paper noted bias of physician-only panel. |
| Moehring | 2017 | USA | Infection Prevention and Control | Yes | No | No | Modified-Delphi | Study selected indicators to aid decision making in antimicrobial stewardship programs. Process governed by a panel of physicians and pharmacists. Panel rated QIs against 4 questions versus defined criteria. 14 metrics were selected from an original list of 90 using a Likert scale. |
| Morris | 2012 | Canada | Infection Prevention and Control | Yes | No | No | Modified-Delphi | Paper describes process where expert panel rated potential indicators using a set of criteria. Panelists rated indicators on a Likert scale and could add anonymous comments. 4 indicators from an original list of 14 were selected. No patient or family member participated in process. |
| Perera | 2012 | New Zealand | Hospital System | Yes | No | Yes | Not Applicable | Paper describes an indicator framework. Framework includes prioritization of indicators; delineation of intent; implementation requirements; development of indicator specifications; assessment of indicator purpose; and target development. Paper notes indicators for one purpose may be inappropriate for another. Indicator credibility relies on having a defined purpose. Targets need to be developed based on current performance and understanding of barriers to attaining targets. |
| Profit | 2011 | USA | Pediatrics | Yes | No | No | Modified-Delphi | Study selected indicators for neonatal intensive care units. Process resulted in 9 of 28 indicators aligned with IOM dimensions of quality using clear assessment criteria and indicator definitions. Expert panel did not include an administrator. |
| Reiter | 2011 | Germany | Hospital System | Yes | No | No | QUALIFY Instrument | Paper describes selecting hospital quality indicators deemed suitable for hospital disclosure. Working groups of clinicians and representatives selected 31 of 55 indicators for disclosure. |
| Sauvegrain | 2019 | France | Maternity | Yes | No | No | Delphi Survey | Paper describes process to select indicators for obstetrical care. Scientific committee and expert panel selected 13 indicators from a list of 28 that were derived from a current database and literature review. Noted training ahead of process was not done but should be in future. Stated indicator targets should be discussed as an accompanying process. Noted panel participants will have biases. |
| Schnitker | 2015 | Australia | Emergency | Yes | No | No | Modified-Delphi | Study selected process quality indicators (PQIs) to monitor Emergency Department patients with cognitive impairment. Approach included building a list of PQIs based on a literature review. Process resulted in 11 PQIs being selected from an original list of 22. Process field tested indicators for data quality ahead of final selection. Noted that a panel of local experts has biases and recommended involving outside experts. |
| Schull | 2011 | Canada | Emergency | Yes | No | No | Modified-Delphi | Study selected national measures for Emergency Departments. Process resulted in selection of 48 of 170 candidate indicators. Categorized indicators by clinical domain. Noted when a panel is system-based it can underrepresent smaller and rural hospitals. |
| Science | 2019 | Canada | Infection Prevention and Control | Yes | No | No | Modified-Delphi | Study identified metrics for antimicrobial stewardship programs. Process was governed by a steering committee and expert panel. Process resulted in the selection of 4 metrics. Noted that bias in panels can be mitigated by neutral facilitator. |
| Soohoo | 2010 | USA | Surgery | Yes | No | Yes | Modified-Delphi | Study selected indicators for total joint replacement patients. Panel of orthopedic surgeons selected 68 indicators from an original list of 101. Field tested indicators for data quality and to inform the setting of targets. |
| Stang | 2013 | Canada | Pediatrics | Yes | No | Yes | Modified-Delphi | Study identified indicators for high acuity pediatric conditions. An interdisciplinary advisory group selected 62 indicators from a list of 97. Noted that field testing of final indicators can inform potential benchmarks and targets. |
| Stegbauer | 2017 | Germany | Mental Health | Yes | No | No | Modified-Delphi | Study selected indicators for schizophrenia. Expert panel narrowed 847 indicators to a list of 27 using 2 main criteria: relevance and schizophrenia. Indicator had to be defined in terms of matching an outcome (goal) and be tied to a treatment (process). Patients were on panel. |
| Thern | 2014 | Germany | Infection Prevention and Control | Yes | No | No | Modified-Delphi | Study selected 42 indicators from a list of 99. Process included surveying experts ahead of the development of an indicator list, a literature search, ranking of indicators using a Likert scale and an in-person conference. Stated that final list of indicators should be validated for data quality. |
| Tsiamis | 2018 | Australia | Cancer | Yes | No | No | Modified-Delphi | Physician panel selected indicators to monitor radiotherapy for men with prostate cancer. Process included literature review and categorizing QIs along the continuum of care. |
| van der Wees | 2019 | Netherlands | Patient Safety | Yes | No | No | User Based Design | Paper proposed a framework to select Patient Reported Outcomes Measures. Framework developed using a design approach based on user needs and was guided by a project team of experts and end-user representatives. |
| Van Grootven | 2018 | USA | Geriatrics | Yes | No | No | Delphi | Study selected indicators to evaluate in-hospital geriatric programs. 31 of 44 indicators were chosen using Likert scale against 2 criteria: appropriateness and feasibility. Panelists had at least 2 years of experience in geriatric medicine. Panel demographics balanced age and gender to ensure equity. |
| van Heurn | 2015 | Netherlands | Surgery | Yes | No | Yes | Modified-Delphi | Panel of surgeons selected 24 neonatal surgical indicators from an original list of 220. Paper emphasized importance of validation data and having external experts review final list for link to best practice. Study stated indicators need validation to inform targets. |
| Wood | 2013 | Canada | Cancer | Yes | No | Yes | Modified-Delphi | Study selected indicators in renal cell carcinoma. Panel selected 23 indicators from an original list of 34 that were generated from a literature search and panel input. Categorization of indicators followed continuum of care. Noted physician only panel should include other professions. Noted indicator data should be tested to inform targets. |
Scoping Review Grey-Literature Summary
| First Author | Year | Jurisdiction | Field of Study | Clinical Quality Indicators | Business-Based Indicators | Target Setting | Consensus Method Used | Article Summary |
|---|---|---|---|---|---|---|---|---|
| Health Quality Ontario | 2016 | Canada | Hospital | Yes | No | No | Modified-Delphi | Agency aimed to reduce number of patient safety indicators. 11 indicators selected from original inventory of 180. Structured process included clear aim, guiding principles, literature search, voting using a Likert scale, and involved representation from clinical experts, sector representatives and patients. |
| CIHI | 2015 | Canada | System | Yes | No | No | Conference followed by Working Groups | Agency prioritized a national set of indicators. Document explains process of conference, criteria and post conference work that led to a manageable list. Broad representation but no patient or front-line manager. Had clear indicator assessment criteria. Conclusion noted requirement to validate indicators for data quality. |
| Ontario Hospital Association | 2019 | Canada | Hospital | Yes | No | No | Modified-Delphi | Process aimed to reduce the amount of measurement. Criteria used included public accountability, system monitoring, local monitoring and indicator retirement. Over 500 indicators were reduced to 156, with 144 indicators retired. Expert panel did not include patients or frontline staff but noted they should be included in future. Noted targets are needed but did not address them directly. |
| Health Quality and Safety Commission New Zealand | 2012 | New Zealand | System | Yes | No | No | Modified-Delphi | Paper summarizes process used to select 17 indicators for public reporting and quality improvement. Process included a steering committee, advisory group, and a use of defined criteria. Panel included managers and patients. |
| The King’s Fund | 2010 | UK | System | Yes | No | Yes | Not Applicable | Paper provides guidance on measuring acute care quality. Key topics include defining measurement; identifying audiences and purposes of indicators; the impact indicators and benchmarks have on staff; and steps to select indicators. Paper emphasizes indicators and targets can motivate users or cause unintended harm. As such, processes need to ensure data is tailored to the right audience. |
| National Institute for Health and Care Excellence | 2019 | UK | System | Yes | No | No | Modified-Delphi | Document describes how national system indicators were selected and how indicators are to be used. Document shares the principles and aims of indicator selection, committee structures, testing of indicators, and consultation with stakeholders. Validation included qualitative feedback from end-users. Process involved managers and public. Emphasizes regular review required for acceptability. |
| The Health Foundation | 2019 | UK | System | Yes | No | No | Qualitative Interviews | Multiple-case study interviewed unit-level staff on how best to reduce indicators to manageable number to enable improvement. Categorized indicators into Donabedian framework and patient reported outcome and experience measures. Assessment criteria included indicators being easily understood, relevant to area, and actionable. |
| Hospital Association of New York State | 2016 | USA | Hospital | Yes | No | Yes | Not Applicable | Discussion paper proposes indicator selection process. Processes should aim to have indicators match clinical reality and allow improvement; include assessment criteria; use ranking methodologies; and validate indicators for data quality. Report suggests indicator assessment criteria should include fit with priorities; performance history; relevance; actionability; and financial impact. |
| National Quality Forum | 2019 | USA | System | Yes | No | No | Modified-Delphi Process | Guide explains governance model, process and criteria used to select national indicators. Process included interdisciplinary membership, feedback from stakeholders ahead of and during process and clear assessment criteria. Indicators categorized using Donabedian framework of structure, process, and outcomes. |
| National Quality Forum | 2020 | USA | System | Yes | Yes | No | Not Applicable | Paper discusses work of committee that examined definitions, best practices, data issues and impact of measurement. Paper offers a four-step process to assess and select indicators and noted costs and efficiency indicators should be considered. Paper stated processes should include education on how to use indicators. |
| Institute of Medicine | 2015 | USA | System | Yes | No | Yes | Modified-Delphi Process | Paper proposes 15 indicators that measure health outcomes while reducing burden of measurement on clinicians and enhancing transparency and comparability. Report provides an overview of process followed, including criteria set used. Calls on system to test indicators for both statistical and face validity. |
The 5-P Indicator Selection Process Framework
| Domain | Element | Element Description |
|---|---|---|
| Purpose | Clarify Aim | Articulate the rationale for conducting an indicator and target selection exercise. By stating the process aim, whether it is to align indicators to an operational process, a strategic plan, a regulatory requirement, or public reporting, the work can be scoped properly. |
| Purpose | Develop Guiding Principles | Establish principles to ensure participants understand the values by which the process is being conducted. Principles may include openness, transparency, scientific soundness, relevance, accountability, scope, and span of control. |
| Purpose | Identify Level of Use | Identify the organizational unit that will use the indicators to ensure relevancy to end-users. As an example, indicators used by a board to monitor quality outcomes may differ from indicators selected by a clinical unit focused on process improvement. |
| Polity | Build Governance Structures | Identify a structure that will manage indicator and target selection to ensure it is completed. These structures may include a steering committee, a project management team, a data quality advisory group, and an expert panel that will assess potential indicators and targets. |
| Polity | Recruit Participants | Select and recruit expert panel members. Panels should be diverse and multi-disciplinary to ensure equity and a broad view of how indicators and targets will be used. Composition of panels should consider the process aim and level of use when selecting participants. |
| Prepare | Seek End-User Input | Seek input from end-users to understand their experiences with the potential indicators under consideration and solicit ideas on the draft criteria they may recommend in evaluating indicators. |
| Prepare | Research Evidence-Based Literature | Identify the range of indicators used in the area or required by regulation. A search of literature, evidence-based guidelines, and government-mandated indicators will help organizations identify a comprehensive set of indicators to assess. |
| Prepare | Build an Inventory of Potential Indicators | Compile a comprehensive list of indicators with definitions and data sources, so participants understand each indicator to be evaluated. If the process addresses target selection, the nature of the target (eg, past performance, benchmark, best practice) should be explained. |
| Prepare | Categorize Potential Indicators into Strategic Themes | Categorize indicators into themes aligned with the organization’s strategy, quadrants of the balanced scorecard, or the Donabedian framework of outcomes, process, and structure. By creating categories, process participants and end-users will better understand the linkage an indicator has with the identified purpose. |
| Prepare | Orient and Train Participants | Provide participants with orientation materials on the process aim, definition and purpose of each indicator, potential targets, and methods they will use to recommend indicators and targets. |
| Procedure | Utilize a Consensus Building Method | Identify and use a recognized consensus building method such as the Delphi, modified-Delphi, or Nominal Group Technique. This is particularly important when indicators are being identified to measure a new strategy compared to a quality improvement project. |
| Procedure | Identify a Facilitator | Select an independent facilitator so as not to bias the process. The facilitator should be a third party, or a neutral party from an organization’s performance measurement department. |
| Procedure | Establish Indicator Selection Criteria | Set criteria on which the assessment of indicators will be based. Common criteria include those prescribed by the Appraisal of Indicators through Research and Evaluation (AIRE) tool, such as relevance, scientific soundness, feasibility, and validity. Criteria may change based on the aim statement and level of use described in the “Purpose” domain. |
| Procedure | Analytically Assess Indicators | Identify a Likert assessment scale participants will use to evaluate indicators against criteria, and how assessments will be completed, either via survey, in person, or both. |
| Procedure | Set Indicator Targets | Assign a target for each indicator. Considerations may include maintaining performance if the current indicator’s result is ahead of a benchmark, attempting to reach a benchmark if performance is behind ideal performance, or making progress towards the benchmark should it be deemed unattainable within the period in which the indicator is being measured. |
| Prove | Assess Data Quality | Validate the final list of indicators by testing data quality. Processes may wish to defer the setting of specific indicator targets until after this phase to ensure targets are based on valid data trends. |
| Prove | Validate with End-Users | Seek feedback from end-users on the relevance of the final set of indicators and targets to their environment and performance requirements, and whether the identified target motivates the end-user to implement improvement actions. |
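The screening and target-setting steps in the "Procedure" domain can be sketched in code. The following is an illustrative sketch only, not from the paper: the 1-9 scale, the cut-off value, the function names, and the "halfway" rule for partial progress are all hypothetical assumptions, and the target logic assumes higher indicator values are better.

```python
# Illustrative sketch of the "Procedure" domain: median-based Likert
# screening (as used in modified-Delphi rounds) and benchmark-relative
# target assignment. Thresholds and names are hypothetical.
from statistics import median

LIKERT_KEEP_THRESHOLD = 7  # hypothetical cut-off on a 1-9 Likert scale


def screen_indicators(ratings):
    """Keep indicators whose median panel rating meets the threshold.

    ratings: dict mapping indicator name -> list of panelist scores (1-9).
    Returns the names of retained indicators, in input order.
    """
    return [name for name, scores in ratings.items()
            if median(scores) >= LIKERT_KEEP_THRESHOLD]


def set_target(current, benchmark, attainable=True):
    """Apply the framework's three target considerations:
    maintain performance if ahead of the benchmark; aim to reach the
    benchmark if behind and attainable; otherwise make progress toward
    it (halfway is an illustrative choice, not the paper's rule)."""
    if current >= benchmark:
        return current                               # maintain performance
    if attainable:
        return benchmark                             # reach the benchmark
    return current + (benchmark - current) / 2       # progress toward it


if __name__ == "__main__":
    panel_ratings = {
        "hand_hygiene_compliance": [8, 9, 7, 8],  # median 8 -> retained
        "lobby_foot_traffic": [3, 4, 5, 2],       # median 3.5 -> dropped
    }
    print(screen_indicators(panel_ratings))  # ['hand_hygiene_compliance']
    print(set_target(0.80, 0.90))            # 0.9
```

In practice the threshold, scale, and consensus rule would come from the panel's chosen method (eg, RAND/UCLA-style median bands), and targets would only be finalized after the "Prove" phase confirms data quality.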