
Publication guidelines for quality improvement studies in health care: evolution of the SQUIRE project.

Frank Davidoff, Paul Batalden, David Stevens, Greg Ogrinc, Susan E Mooney.

Abstract

In 2005 we published draft guidelines for reporting studies of quality improvement, as the initial step in a consensus process for development of a more definitive version. The current article contains the revised version, which we refer to as standards for quality improvement reporting excellence (SQUIRE). This narrative progress report summarises the special features of improvement that are reflected in SQUIRE, and describes major differences between SQUIRE and the initial draft guidelines. It also briefly describes the guideline development process; considers the limitations of and unresolved questions about SQUIRE; describes ancillary supporting documents and alternative versions under development; and discusses plans for dissemination, testing, and further development of SQUIRE.


Year:  2009        PMID: 19153129      PMCID: PMC2769030          DOI: 10.1136/bmj.a3152

Source DB:  PubMed          Journal:  BMJ        ISSN: 0959-8138


Introduction

A great deal of meaningful and effective work is now done in clinical settings to improve the quality and safety of care. Unfortunately, relatively little of that work is reported in the biomedical literature, and much of what is published could be described more effectively. Failure to publish is potentially a serious barrier to the development of improvement science, because public sharing of concepts, methods, and findings is essential to the progress of all scientific work, both theoretical and applied. To help strengthen the evidence base for improvement in health care, we proposed draft guidelines for reporting planned original studies of improvement interventions in 2005.1 Our aims were to stimulate the publication of high calibre improvement studies and to increase the completeness, accuracy, and transparency of published reports of that work. Our initial draft guidelines were based largely on personal experience with improvement work, and were intended only as an initial step toward creation of recognised publication standards. We have now refined and extended that draft, and present here the resulting revised version, which we refer to as the standards for quality improvement reporting excellence or SQUIRE (table). In this narrative progress report, we describe the special features of quality improvement that are reflected in SQUIRE and examine the major differences between SQUIRE and the initial draft guidelines. We also briefly outline the consensus process used to develop SQUIRE, including our responses to critical feedback obtained during that process. Finally, we consider the limitations of and questions about the SQUIRE guidelines, describe ancillary supporting documents and various versions currently under development, and explain plans for their dissemination, testing, and further development.

SQUIRE guidelines (Standards for QUality Improvement Reporting Excellence)

Title and abstract: Did you provide clear and accurate information for finding, indexing, and scanning your paper?
1. Title: (a) Indicates the article concerns the improvement of quality (broadly defined to include the safety, effectiveness, patient centeredness, timeliness, efficiency, and equity of care)
(b) States the specific aim of the intervention
(c) Specifies the study method used—for example, qualitative study or randomised cluster trial
2. Abstract: Summarises precisely all key information from various sections of the text using the abstract format of the intended publication
Introduction: Why did you start?
3. Background knowledge: Provides a brief, non-selective summary of current knowledge of the care problem being investigated and characteristics of organisations in which it occurs
4. Local problem: Describes the nature and severity of the specific local problem or system dysfunction that was investigated
5. Intended improvement: (a) Describes the specific aim (changes/improvements in care processes and patient outcomes) of the proposed intervention
(b) Specifies who (champions, supporters) and what (events, observations) triggered the decision to make changes and why now (timing)
6. Study question: States precisely the primary improvement related question and any secondary questions that the study of the intervention was designed to answer
Methods: What did you do?
7. Ethical issues: Describes ethical aspects of implementing and studying the improvement, such as privacy concerns, protection of participants’ physical wellbeing, and potential author conflicts of interest, and how ethical concerns were addressed
8. Setting: Specifies how elements of the local care environment considered most likely to influence change/improvement in the involved site or sites were identified and characterised
9. Planning the intervention: (a) Describes the intervention and its component parts in sufficient detail that others could reproduce it
(b) Indicates main factors that contributed to choice of the specific intervention—eg, analysis of causes of dysfunction, matching relevant improvement experience of others with the local situation
(c) Outlines initial plans for how the intervention was to be implemented—eg, what was to be done (initial steps, functions to be accomplished by those steps, how tests of change would be used to modify intervention) and by whom (intended roles, qualifications, and training of staff)
10. Planning the study of the intervention: (a) Outlines plans for assessing how well the intervention was implemented (dose or intensity of exposure)
(b) Describes mechanisms by which intervention components were expected to cause changes and plans for testing whether those mechanisms were effective
(c) Identifies the study design (eg, observational, quasi-experimental, experimental) chosen for measuring impact of the intervention on primary and secondary outcomes, if applicable
(d) Explains plans for implementing essential aspects of the chosen study design, as described in publication guidelines for specific designs, if applicable (see for example www.equator-network.org)
(e) Describes aspects of the study design that specifically concerned internal validity (integrity of the data) and external validity (generalisability)
11. Methods of evaluation: (a) Describes instruments and procedures (qualitative, quantitative, or mixed) used to assess the effectiveness of implementation; the contributions of intervention components and context factors to effectiveness of the intervention; and primary and secondary outcomes
(b) Reports efforts to validate and test reliability of assessment instruments
(c) Explains methods used to assure data quality and adequacy—eg, blinding, repeating measurements and data extraction, training in data collection, collection of sufficient baseline measurements
12. Analysis: (a) Provides details of qualitative and quantitative (statistical) methods used to draw inferences from the data
(b) Aligns unit of analysis with level at which the intervention was implemented, if applicable
(c) Specifies degree of variability expected in implementation, change expected in primary outcome (effect size), and ability of study design (including size) to detect such effects
(d) Describes analytical methods used to show effects of time as a variable (eg, statistical process control)
Results: What did you find?
13. Outcomes: (a) Nature of setting and improvement intervention:
 i) Characterises relevant elements of setting or settings (eg, geography, physical resources, organisational culture, history of change efforts) and structures and patterns of care (eg, staffing, leadership) that provided context for the intervention
 ii) Explains the actual course of the intervention (eg, sequence of steps, events, or phases; type and number of participants at key points), preferably using a timeline diagram or flow chart
 iii) Documents degree of success in implementing intervention components
 iv) Describes how and why the initial plan evolved, and the most important lessons learnt from that evolution, particularly the effects of internal feedback from tests of change (reflexiveness)
(b) Changes in processes of care and patient outcomes associated with the intervention:
 i) Presents data on changes observed in the care delivery process
 ii) Presents data on changes observed in measures of patient outcome (eg, morbidity, mortality, function, patient/staff satisfaction, service utilisation, cost, care disparities)
 iii) Considers benefits, harms, unexpected results, problems, failures
 iv) Presents evidence regarding the strength of association between observed changes or improvements and intervention components or context factors
 v) Includes summary of missing data for intervention and outcomes
Discussion: What do the findings mean?
14. Summary: (a) Summarises the most important successes and difficulties in implementing intervention components, and main changes observed in care delivery and clinical outcomes
(b) Highlights the study’s particular strengths
15. Relation to other evidence: Compares and contrasts study results with relevant findings of others, drawing on broad review of the literature; use of a summary table may be helpful in building on existing evidence
16. Limitations: (a) Considers possible sources of confounding, bias, or imprecision in design, measurement, and analysis that might have affected study outcomes (internal validity)
(b) Explores factors that could affect generalisability (external validity)—eg, representativeness of participants, effectiveness of implementation, dose-response effects, features of local care setting
(c) Considers likelihood that observed gains may weaken over time and describes plans, if any, for monitoring and maintaining improvement; explicitly states if such planning was not done
(d) Reviews efforts made to minimise and adjust for study limitations
(e) Assesses the effect of study limitations on interpretation and application of results
17. Interpretation: (a) Explores possible reasons for differences between observed and expected outcomes
(b) Draws inferences consistent with the strength of the data about causal mechanisms and size of observed changes, paying particular attention to components of the intervention and context factors that helped determine the intervention’s effectiveness (or lack thereof), and types of settings in which this intervention is most likely to be effective
(c) Suggests steps that might be modified to improve future performance
(d) Reviews issues of opportunity cost and actual financial cost of the intervention
18. Conclusions: (a) Considers overall practical usefulness of the intervention
(b) Suggests implications of this report for further studies of improvement interventions
Other information: Were there other factors relevant to the conduct and interpretation of the study?
19. Funding: Describes funding sources, if any, and role of funding organisation in design, implementation, interpretation, and publication of study

These guidelines provide a framework for reporting formal, planned studies designed to assess the nature and effectiveness of interventions to improve the quality and safety of care. It may not always be appropriate, or even possible, to include information about every numbered guideline item in reports of original studies, but authors should at least consider every item in writing their reports.

Although each major section (introduction, methods, results, and discussion) of a published original study generally contains some information about the numbered items within that section, information about items from one section (for example, the introduction) is also often needed in other sections (for example, the discussion).


Special features of quality improvement

Unlike conceptually neat and procedurally unambiguous interventions such as drugs, tests, and procedures that directly affect the biology of disease, and are the objects of study in most clinical research, improvement is essentially a social process. Improvement is an applied science rather than an academic discipline2; its immediate purpose is to change human performance, rather than generate new, generalisable knowledge,3 and it is driven primarily by experiential learning.4 5 Like other social processes, improvement is inherently context dependent; it is reflexive, meaning that improvement interventions are repeatedly modified in response to outcome feedback, with the result that both its interventions and outcomes are relatively unstable; and it generally involves complex, multicomponent interventions. Although traditional experimental and quasi-experimental methods are important for learning whether improvement interventions change behaviour, they do not provide appropriate and effective methods for addressing the crucial pragmatic (or “realist”) questions about improvement that are derived from its complex social nature: what is it about the mechanism of a particular intervention that works, for whom, and under what circumstances?2 3 6 Using combinations of methods that answer both the experimental and pragmatic questions is not an easy task, because those two contrasting methodologies can sometimes work at cross purposes. For example, true experimental studies are designed to minimise the confounding effects of context, such as the impact of the heterogeneity of local settings, staff and other study participants, resources, and culture, on measured outcomes. 
But trying to control context out of improvement interventions is both inappropriate and counterproductive because improvement interventions are inherently and strongly context dependent.2 3 Similarly, true experimental studies require strict adherence to study protocols because it reduces the impact of many potential confounders. But rigid adherence to initial improvement plans is incompatible with an essential element of improvement, which is continued modification of those plans in response to outcome feedback (reflexiveness). We have attempted to maintain a balance between experimental and pragmatic (or realist) methodologies in the SQUIRE guidelines; both are important and necessary, and they are mutually complementary.

Differences between SQUIRE and draft guidelines

The SQUIRE guidelines differ in several important ways from the initial draft guidelines. Firstly, as noted, SQUIRE highlights more explicitly the essential and unique properties of improvement interventions, particularly their social nature, focus on changing performance, context dependence, complexity, nonlinearity, adaptation, and iterative modification based on outcome feedback (reflexiveness). Secondly, SQUIRE distinguishes more clearly between improvement practice (planning and implementing improvement interventions) and the evaluation of improvement projects (designing and carrying out studies to assess whether those interventions work and why they do or do not work). Thirdly, SQUIRE now explicitly specifies elements of study design that make it possible to assess both whether improvement interventions work (by minimising bias and confounding) and why interventions are or are not effective (by identifying the effects of context and identifying mechanisms of change). And finally, SQUIRE explicitly addresses the often confusing ethical dimensions of improvement projects and improvement studies.7 8 Other differences between SQUIRE and the draft guidelines are available on the SQUIRE website (www.squire-statement.org).

The development process

The SQUIRE development process was designed to produce consensus among a broad constituency of experts and users on both the content and format of guideline items. It proceeded along the following six lines. We first obtained informal feedback on the utility, strengths, and limitations of the draft guidelines from potential authors in a series of seminars at national and international meetings, as well as from experienced publication guideline developers at the organisational meeting of the EQUATOR network.9 Authors, peer reviewers, and journal editors then “road tested” the draft guidelines as a working tool for editing and revising submitted manuscripts.10 11 Next, we solicited and published written commentaries on the initial version of the guidelines.12 13 14 15 16 We also did a literature review on epistemology, methodology, and the evaluation of complex interventions, particularly in social sciences. In April 2007, we subjected the draft guidelines to intensive analysis, comment, and recommendations for change at a two day meeting of 30 stakeholders. After that meeting, we obtained further critical appraisal of the guidelines through three cycles of a Delphi process with an international group of more than 50 consultants.

Informal feedback

Informal input about the draft guidelines from authors and peer reviewers raised four particularly relevant issues: uncertainty as to which studies the guidelines apply; the possibility that their use might force quality improvement reports into a rigid, narrow format; the concern that their slavish application might result in lengthy and unreadable reports that are indiscriminately laden with detail; and difficulty knowing if, when, and how other publication guidelines should be used in conjunction with guidelines for reporting quality improvement studies.

Deciding when to use the guidelines

Publications on improvement in health care are emerging in four general categories: empirical studies on the effectiveness of quality improvement interventions; stories, theories, and frameworks; literature reviews and syntheses; and the development and testing of improvement related tools and methods (L Rubenstein et al, unpublished data). Our guideline development process has made it clear that the SQUIRE guidelines can and should apply to reports in the first category: original, planned studies of interventions that are designed to improve clinical outcomes by delivering clinically proved care measures more appropriately, effectively, and efficiently.

Forcing articles into a rigid format

Publication guidelines are often referred to as checklists because, like other such documents, they serve as aide-mémoires, which have proved increasingly valuable in managing information in complex systems.17 Rigid or mechanical application of checklists can prevent users from making sense of complex information.18 19 At the same time, however (and paradoxically), checklists, like all constraints and reminders, can serve as important drivers for creativity. The SQUIRE guidelines must therefore always be understood and used as signposts, not shackles.20

Creating longer articles

Improvement is a complex undertaking, and its evaluation can produce substantial amounts of qualitative and quantitative information. Adding irrelevant information simply to “cover” guideline items would be counterproductive; on the other hand, added length that makes reports of improvement studies more complete, coherent, usable, and systematic helps the guidelines meet a principal aim of SQUIRE. Publishing portions of improvement studies only in electronic form can make the content of long articles publicly available while conserving space in print publication.

Conjoint use with other publication guidelines

Most other biomedical publication guidelines are designed to improve the reporting of studies that use specific experimental designs. The SQUIRE guidelines, in contrast, are concerned with the reporting of studies in a defined content area—improvement and safety. These two guideline types are therefore complementary, rather than redundant or conflicting. When appropriate, other specific design related guidelines can and should be used in conjunction with SQUIRE.

Formal commentaries

The written commentaries provided both supportive and critical input on the draft guidelines.12 13 14 15 16 One suggested that the guidelines’ “pragmatic” focus was an important complement to guidelines for reporting traditional experimental clinical science.12 The guidelines were also seen as a potentially valuable instrument for strengthening the design and conduct of improvement research, resulting in greater synergy with improvement practice15 and increasing the feasibility of combining improvement studies in systematic reviews. However, other commentaries on the draft guidelines raised concerns: that they were inattentive to racial and ethnic disparities in care14; that their proposed introduction, methods, results, and discussion (IMRaD) structure might be incompatible with the reality that improvement interventions are designed to change over time13; and that their use could result in a “dumbing down” of improvement science.16 Our responses to these concerns are as follows.

Health disparities

We do not believe it would be useful, even if it were possible, to address every relevant content issue in a concise set of quality improvement reporting guidelines. We do agree, however, that disparities in care are not considered often enough in improvement work, and that improvement initiatives should address this important issue whenever possible. We have therefore highlighted this issue in the SQUIRE guidelines (table, item 13.b.1).

IMRaD structure

The study protocols traditionally described in the methods section of clinical trials are rigidly fixed, as required by the dictates of experimental design.21 In contrast, improvement is a reflexive learning process—that is, improvement interventions are most effective when they are modified in response to outcome feedback. On these grounds, it has been suggested that reporting improvement interventions in the IMRaD format logically requires multiple, sequential pairs of methods and results sections, one pair for each iteration of the evolving intervention.13 We maintain, however, that the changing, reflexive nature of improvement does not exempt improvement studies from answering the four fundamental questions required in all scholarly inquiry: Why did you start? What did you do? What did you find? What does it mean? These same questions define the four elements of the IMRaD framework.22 23 Although some authors and editors might understandably choose to use a modified IMRaD format that involves a series of small sequential methods and results sections, we believe that approach is often both unnecessary and confusing. We therefore continue to support describing the initial improvement plan, and the theory (mechanism) on which it is based, in a single methods section. Because the changes in interventions over time and the learning that comes from making those changes are themselves important outcomes in improvement projects, in our view they belong collectively in a single results section.1

Dumbing down improvement reports

The declared purpose of all publication guidelines is to improve the completeness and transparency of reporting. Because it is precisely these characteristics of reporting that make it possible to detect weak, sloppy, or poorly designed studies, it is difficult to understand how use of the draft guidelines might lead to a dumbing down of improvement science. The underlying concern here therefore seems to have less to do with transparency than with the inference that the draft guidelines failed to require sufficiently rigorous standards of evidence.16 21 We recognise that those traditional experimental standards are powerful instruments for protecting the integrity of outcome measurements, largely by minimising selection bias.21 24 Although those standards are necessary in improvement studies, they are not sufficient because they fail to take into account the particular epistemology of improvement that derives from its applied purpose and social nature. As noted, the SQUIRE guidelines specify methodologies that are appropriate for both experimental and pragmatic (or realist) evaluation of improvement programmes.

Consensus meeting of editors and research scholars

With support from the Robert Wood Johnson Foundation, we undertook an intensive critical appraisal of the draft guidelines at a two day meeting in April 2007. Thirty participants attended, including clinicians, improvement professionals, epidemiologists, clinical researchers, and journal editors, several from outside the United States. Before the meeting, we sent participants a reading list and a concept paper on the epistemology of improvement. In plenary and small group sessions, participants critically discussed and debated the content and wording of every item in the draft guidelines and recommended changes. They also provided input on plans for dissemination, adoption, and future uses of the guidelines. Working from transcribed audiorecordings of all meeting sessions and flip charts listing the key discussion points, a coordinating group (the authors of this paper) then revised, refined, and expanded the draft guidelines.

Delphi process

Following the consensus meeting, we circulated sequential revisions of the guidelines for further comment and suggestions in three cycles of a Delphi process. The group involved in that process included the meeting participants and roughly 20 additional expert consultants. We then surveyed all participants as to their willingness to endorse the final consensus version (SQUIRE).

Limitations and questions

The SQUIRE guidelines have been characterised as providing both too little and too much information: too little, because they fail to represent adequately the many unique and nuanced issues in the practice and evaluation of improvement2 3 4 12 13 14 15 16 21 24 25; too much, because the detail and density of the item descriptions might seem intimidating to authors. We recognise that the SQUIRE item descriptions are much more detailed than those of some other publication guidelines. In our view, however, the complexity of the improvement process, plus the relative unfamiliarity of improvement interventions and of the methods for evaluating them, justify that level of detail, particularly in light of the diverse backgrounds of people working to improve health care. Moreover, the level of detail in the SQUIRE guidelines is quite similar to that of recently published guidelines for reporting observational studies, which also involve considerable complexities of study design.26 To increase the usability of SQUIRE, we are making available a shortened electronic version on the SQUIRE website, accompanied by a glossary of terms used in the item descriptions that may be unfamiliar to users.

Applying SQUIRE

Authors’ interest in using publication guidelines increases when journals make them part of the peer review and editorial process. We therefore encourage the widest possible use of the SQUIRE guidelines by editors. Unfortunately, little is known about the most effective ways to apply publication guidelines in practice. Therefore, editors have been forced to learn from experience how to use other publication guidelines, and the specifics of their use vary widely from journal to journal. We also lack systematic knowledge of how authors can use publication guidelines most productively. Our experience suggests, however, that SQUIRE is most helpful if authors simply keep the general content of the guideline items in mind as they write their initial drafts, then refer to the details of individual items as they critically appraise what they have written during the revision process. The most effective way to use publication guidelines in practice seems to us to be an empirical question; we therefore strongly encourage editors and authors to collect, analyse, and report their experiences in using SQUIRE and other publication guidelines.

Current and future directions

A SQUIRE explanation and elaboration document has been published elsewhere.27 Like other such documents,28 29 30 31 this document provides much of the necessary depth and detail that cannot be included in a set of concise guideline items. It presents the rationale for including each guideline item in SQUIRE, along with published examples of reporting for each item, and commentary on the strengths and weaknesses of those examples. The SQUIRE website (www.squire-statement.org) will provide an authentic electronic home for the guidelines and a medium for their progressive refinement. We also intend the site to serve as an interactive electronic community for authors, students, teachers, reviewers, and editors who are interested in the emerging body of scholarly and practical knowledge on improvement. Although the primary purpose of SQUIRE is to enhance the reporting of improvement studies, we believe the guidelines can also be useful for educational purposes, particularly for understanding and exploring further the epistemology of improvement and the methods for evaluating improvement work. We believe, similarly, that SQUIRE can help in planning and executing improvement interventions, carrying out studies of those interventions, and developing skill in writing about improvement. We encourage these uses, as well as efforts to assess SQUIRE’s impact on the completeness and transparency of published improvement studies32 33 and to obtain empirical evidence that individual guideline items contribute materially to the value of published information in improvement science.
References

1.  Value of flow diagrams in reports of randomized controlled trials.

Authors:  M Egger; P Jüni; C Bartlett
Journal:  JAMA       Date:  2001-04-18       Impact factor: 56.272

2.  Consensus publication guidelines: the next step in the science of quality improvement?

Authors:  R G Thomson
Journal:  Qual Saf Health Care       Date:  2005-10

3.  Broadening the view of evidence-based medicine.

Authors:  D M Berwick
Journal:  Qual Saf Health Care       Date:  2005-10

4.  The checklist: if something so simple can transform intensive care, what else can it do?

Authors:  Atul Gawande
Journal:  New Yorker       Date:  2007-12-10

5.  The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies.

Authors:  Erik von Elm; Douglas G Altman; Matthias Egger; Stuart J Pocock; Peter C Gøtzsche; Jan P Vandenbroucke
Journal:  Ann Intern Med       Date:  2007-10-16       Impact factor: 25.391

6.  Reporting randomized controlled trials. An experiment and a call for responses from readers.

Authors:  D Rennie
Journal:  JAMA       Date:  1995-04-05       Impact factor: 56.272

7.  The ethics of using quality improvement methods in health care.

Authors:  Joanne Lynn; Mary Ann Baily; Melissa Bottrell; Bruce Jennings; Robert J Levine; Frank Davidoff; David Casarett; Janet Corrigan; Ellen Fox; Matthew K Wynia; George J Agich; Margaret O'Kane; Theodore Speroff; Paul Schyve; Paul Batalden; Sean Tunis; Nancy Berlinger; Linda Cronenwett; J Michael Fitzmaurice; Nancy Neveloff Dubler; Brent James
Journal:  Ann Intern Med       Date:  2007-04-16       Impact factor: 25.391

8.  A next step: reviewer feedback on quality improvement publication guidelines.

Authors:  Tom Janisse
Journal:  Perm J       Date:  2007

9.  The international EQUATOR network: enhancing the quality and transparency of health care research.

Authors:  Nikolaos Pandis; Zbys Fedorowicz
Journal:  J Appl Oral Sci       Date:  2011-10       Impact factor: 2.698

10.  The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: explanation and elaboration.

Authors:  G Ogrinc; S E Mooney; C Estrada; T Foster; D Goldmann; L W Hall; M M Huizinga; S K Liu; P Mills; J Neily; W Nelson; P J Pronovost; L Provost; L V Rubenstein; T Speroff; M Splaine; R Thomson; A M Tomolo; B Watts
Journal:  Qual Saf Health Care       Date:  2008-10

