Erica Barbazza, Niek S Klazinga, Dionne S Kringos.
Abstract
BACKGROUND: This study explores the meaning of actionable healthcare performance indicators for quality of care-related decisions. To do so, we analyse the constructs of fitness for purpose and fitness for use across healthcare systems and in practice based on the literature, expert opinion and user experience.
Keywords: health services research; healthcare quality improvement; management; performance measures; quality measurement
Year: 2021 PMID: 33963072 PMCID: PMC8606459 DOI: 10.1136/bmjqs-2020-011247
Source DB: PubMed Journal: BMJ Qual Saf ISSN: 2044-5415 Impact factor: 7.035
Figure 1. Decision-making contexts across healthcare systems.
Characteristics of panellists
| Expert panel | n (%) | User panel | n (%) |
| Total | 16 (–) | Total | 16 (–) |
| Affiliation* | | Uses | |
| Academia | 10 (63) | Macro | 7 (44) |
| International organisation | 3 (19) | Meso | 4 (25) |
| Think tank | 3 (19) | Micro | 3 (19) |
| Expertise | | Organisation type | |
| Measurement | 5 (31) | Government | 5 (31) |
| Quality of care | 3 (19) | Health services | 4 (25) |
| Governance | 3 (19) | Standards | 3 (19) |
| Information systems | 3 (19) | Research | 2 (13) |
| Management | 2 (13) | Improvement | 2 (13) |
| Region | | Region | |
| Europe | 9 (56) | Europe | 9 (56) |
| North America | 5 (31) | North America | 7 (44) |
| Oceania | 2 (13) | – | – |
| Sex | | Sex | |
| Male | 11 (69) | Male | 9 (56) |
| Female | 5 (31) | Female | 7 (44) |
*Primary affiliations.
Figure 2. Summary of key findings across study phases. Note: boxes denote key themes emerging by study phase. Broken lines denote a change in level. Solid lines denote agreement between phases with possible adjustments to phrasing. Darker grey shading denotes the introduction of new elements. Ordering within cells is not indicative of importance.
Differentiating uses of healthcare performance indicators across healthcare systems
| Context | Purpose of use | Illustrative uses | Illustrative users | Illustrative information need |
| Macro | System performance monitoring. | Signalling the performance of the system as a whole; comparing performance internationally; publicly reporting system performance. | Public; ministry of health; regional (provincial, state) authorities; health service executive (authority). | How is my healthcare system doing? |
| | Strategy development. | Setting health policy priorities; identifying emerging health priority areas; and monitoring trends in current priority areas. | Government and ministries; regional (provincial, state) authorities; accountable care organisations; health maintenance organisations. | Have I chosen the right areas to prioritise? |
| | System quality assurance. | Measuring care processes; reporting of incidents and never events. | Quality inspectorate; national quality observatory; health and safety executive. | Is care being delivered as intended? |
| Meso | Regulation (professional, facility, pharmaceuticals). | Informing accreditation, certification and/or licensing processes. | Medical councils, chambers, college of physicians; medicines and healthcare products regulatory agency. | Does the performance of organisations, facilities, medicines, etc, meet established standards? |
| | Professional development. | Reporting internally and benchmarking within profession or specialty. | Societies of medical professionals; professional associations; training institutions. | How do healthcare professionals of a specific specialty perform? |
| | Quality-based financing. | Issuing performance-based payment (pay-for-performance); value-based contracting. | Healthcare insurers; healthcare providers. | Are existing guidelines or standards being adhered to? |
| | Organisation/network performance improvement. | Improving performance of hospitals, networks and care groups; assessing local needs and geographical differences. | Hospital management; integrated care networks/groups; local collaboratives of care. | Are affiliated practices/facilities performing optimally? |
| Micro | Practice or team performance improvement. | Convening audit and feedback, plan-do-study-act, and/or collaborative, team-based improvement cycles; comparing across practices. | Primary care practices; specialist departments or units; pathways of care. | How is my team performing? |
| | Individual performance improvement. | Identifying trends in the management of patients; tailoring services to target groups. | Individual physicians; nurse practitioners; other healthcare professionals. | How am I managing my practice panel? |
| | Informed choice. | Selecting a healthcare provider; participating in care decision-making; self-managing care needs. | Patients; family members and carers; public. | What treatment options or providers are best for me? |
| Cross-cutting | Research. | Exploring the use of indicators across contexts. | Academia and academic networks; think tanks, research groups; topic-specific associations. | Secondary user-directed. |
Overview of methodological, contextual and managerial fitness for use considerations
| Clusters | Considerations | Guiding questions for considering an indicator’s use |
| Methodological | | |
| | Measures what matters. | Does anybody care? |
| | Wide engagement. | What can |
| | Easily interpreted. | Does the indicator signal a clear direction? |
| | Clear standardisation. | Is the indicator clearly defined and replicable? |
| | Alignment of accountability. | Are entry points for taking action feasible? |
| | Measurement matches delivery. | Is the indicator a reflection of the system? |
| | Sensitive to meaningful change. | Is the indicator sufficiently sensitive to change? |
| Contextual | | |
| Information infrastructure | Interoperability. | Can needed data be accessed? |
| | Data quality. | Are the data of sufficient quality? |
| Governance | Political will and vision. | Is there high-level commitment and direction for use? |
| | Regulation for data protection. | Does existing legislation facilitate use? |
| | Cross-sector partnerships. | Are cross-sector partnerships in place? |
| | Aligned financing structures. | Do financing structures encourage the intended use? |
| Workforce capacity | Data and quality expertise. | Are the competencies to interpret and use data in place? |
| | Time dedicated to improvement. | Is time allocated to encourage use? |
| Culture | Learning orientation. | Is an environment for learning cultivated? |
| | Shared responsibility for health. | Do users feel accountable for improvement? |
| Managerial | | |
| Selecting healthcare performance indicators | Clear purpose of use. | What is the purpose of use? (eg, strategy development) |
| | Target end user is known. | Is the target audience known? (eg, clinicians, public) |
| | Conceptual framework. | Is the dimension of quality pursued clear? |
| | Indicator quality. | Is the indicator scientifically sound? |
| | Source, type and availability of data. | What data are needed and are they available? (eg, administrative, clinical, survey data, wearables) |
| | Standards for appraisal. | How will improvements in performance be assessed? |
| | Degree of public disclosure. | Is the indicator for internal or external (public) use? |
| | Accompanying indicators. | Are there relevant accompanying indicators? |
| | Previous use. | Has the indicator been used previously? |
| Accessing data | Representativeness of data. | Are the data complete? |
| | Data linkages. | Can relevant data sources be linked? |
| | Data collection tools. | How will data be collected? (eg, paper-based, automated electronically, manual electronic entry) |
| | Unity of language/coding. | Is there consistency in coding across data to be used? |
| Applying methods of analysis | Type of analysis. | How will the data be analysed? (eg, benchmarking, time trend, case mix correction) |
| | Aggregation of indicators. | How can composites/indices be used to simplify data? |
| | Reference group. | Who is the reference group? |
| | Breakdowns/cohorts. | How will the data be disaggregated? (eg, age, sex, ethnicity, geographically) |
| | Calculation of values. | How will values be calculated? (eg, mean, median, SD, top 10% mean) |
| | Time interval. | Should a time trend be reported, and at what interval? |
| | Application of risk adjustments. | How will risk adjustments be applied? (eg, variable specification, source, weighting scheme) |
| | Managing missing data. | How will missing data points be handled? |
| | Contextualising data. | What other data are needed to give the indicator meaning? |
| Displaying findings | Chart options. | How will the data be visualised? (eg, chart, map, table) |
| | Simplification techniques. | What techniques can be applied to simplify the meaning? (eg, colour, size variation, icons) |
| | Customisation of display. | How can users customise the data? (eg, change of display, change of information) |
| | Narrated interpretation. | How can the quality and the meaning of the data be narrated? |
| | Format of reporting. | How will it be reported? (eg, print, mobile, web-based) |
| Reaching decision-makers | Frequency of reporting. | What is the relevant reporting cycle? (eg, real time, quarterly, annually, biennially) |
| | Dissemination channels. | How will users be reached? (eg, mail, email, champions) |
| | Guidance on use. | How can users be supported to make use of findings? |
Figure 3. Use cycle for managing healthcare performance indicators.