Danika Barry, Leighann E Kimble, Bejoy Nambiar, Gareth Parry, Ashish Jha, Vijay Kumar Chattu, M Rashad Massoud, Don Goldmann.
Abstract
Improving health care involves many actors, often working in complex adaptive systems. Interventions tend to be multi-factorial, implementation activities diverse, and contexts dynamic and complicated. This makes improvement initiatives challenging to describe and evaluate, as matching evaluation and program designs can be difficult, requiring collaboration, trust and transparency. Collaboration is required to address important epidemiological principles of bias and confounding. If this does not take place, results may lack credibility because the association between interventions implemented and outcomes achieved is obscure and attribution uncertain. Moreover, lack of clarity about what was implemented, how it was implemented, and the context in which it was implemented often leads to disappointment or outright failure of spread and scale-up efforts. The input of skilled evaluators into the design and conduct of improvement initiatives can help mitigate these potential problems. While evaluation must be rigorous, if it is too rigid, necessary adaptation and learning may be compromised. This article provides a framework and guidance on how improvers and evaluators can work together to design, implement and learn about improvement interventions more effectively.
Year: 2018 PMID: 29873794 PMCID: PMC5909667 DOI: 10.1093/intqhc/mzy008
Source DB: PubMed Journal: Int J Qual Health Care ISSN: 1353-4505 Impact factor: 2.038
Figure 1. The evaluation continuum.
Figure 2. Framework for learning about improvement.