Ivan D Florez1, Yasser Sami Amer2, Michael McCaul3, John N Lavis4, Melissa Brouwers5. 1. Department of Pediatrics, Universidad de Antioquia, Medellin, Colombia; School of Rehabilitation Science, McMaster University, Hamilton, Canada. Electronic address: ivan.florez@udea.edu.co. 2. Pediatrics Department, King Khalid University Hospital, King Saud University Medical City, Riyadh, Saudi Arabia; Clinical Practice Guidelines Unit, Quality Management Department, King Saud University Medical City, Riyadh, Saudi Arabia; Research Chair for Evidence-Based Health Care and Knowledge Translation, Deanship of Scientific Research, King Saud University, Riyadh, Saudi Arabia; Alexandria Center for Evidence-Based Clinical Practice Guidelines, Alexandria University Medical Council, Alexandria University, Alexandria, Egypt. 3. Centre for Evidence-based Health Care, Division of Epidemiology and Biostatistics, Department of Global Health, Stellenbosch University, South Africa. 4. McMaster Health Forum and Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada; Africa Centre for Evidence, University of Johannesburg, Johannesburg, South Africa. 5. School of Epidemiology and Public Health, University of Ottawa, Ottawa, Ontario, Canada.
Abstract
Keywords:
AGREE II instrument; COVID-19; Clinical practice guidelines; Quality; Rapid guidelines
The COVID-19 global pandemic led to substantial investment in the funding, execution, and publication/dissemination of research at unprecedented speed and volume. This, in turn, created an urgent need to manage and curate this evidence to inform public health, clinical, and health-system decision-making. Eager to provide some type of guidance in a scenario of uncertainty, many organizations began developing guidelines with recommendations for managing patients with COVID-19. Clinical practice guidelines (CPGs) have been published since the early stages of the pandemic, often in very short time frames, with a scarcity of evidence, with evidence that in other contexts would be considered of questionable quality, and using methods that do not meet traditional development and reporting norms.
Some of these early CPGs were labeled as “rapid advice”, “rapid guidelines”, or “interim recommendations” [1], [2], [3], implying that they were developed under pressure, that their methods were expedited, or that developers applied methodological shortcuts. However, most of the CPGs developed during 2020, presumably under time constraints, were not labeled as rapid guidelines. Rather, they were labeled or presented as regular guidelines, using the usual terms: “guidelines”, “statements”, or “recommendations.”
Since the development of traditional (de novo) evidence-based guidelines usually requires long periods of time and significant funding, rapid guidelines are necessary, and considered acceptable, in emergency scenarios or where urgent guidance is required. Rapid guidelines (RGs) are defined as those developed in short time frames (ie, 1–3 months) [1,2,4], although during the COVID-19 pandemic they were often developed in even shorter time frames. To produce evidence-based guidance quickly, RGs are often developed with methodological shortcuts.
The challenge in these processes is to develop guidance at high speed without compromising methodological rigor and validity, and therefore trustworthiness.
Early in the pandemic, the COVID-END initiative (https://www.mcmasterforum.org/networks/covid-end) was created to facilitate the dissemination of evidence synthesis results to inform decision-making and to reduce duplication of efforts in evidence synthesis and evidence-based guidance. Additionally, a group of international researchers interested in guideline quality partnered with members of the COVID-END recommending working group to monitor and regularly assess the quality of COVID-19 guidelines. A living review of guidelines was designed to assess the CPGs focused on the management of critically ill patients with COVID-19, and its first report is published in this journal's volume [5]. This commentary discusses the insights from that assessment, examines key concepts related to the quality of guidelines developed in emergency situations and the problem of duplication of rapid guidelines during the pandemic, and offers some solutions and further steps for this type of guideline.
Quality of guidelines
Quality of COVID-19 guidelines
Previous publications have raised concerns about the quality of guidelines related to COVID-19. Two assessments of guidelines developed very early in the pandemic (published before April 2020) [6,7] found that their methodological quality was poor in almost all cases. Stamm et al. found that only 8 of 188 guidelines could be considered high quality [7]. These authors found that most of the guidelines lacked appropriate systematic reviews, failed to assess the quality of the evidence, and were unclear about their editorial independence, among other methodological flaws [6]. Although we might have expected quality to improve over time, this was not the case. Several authors have shown that the quality of guidelines published later in the pandemic, and even more recently, is still suboptimal [8], [9], [10], [11], [12].
Assessment of the quality of rapid guidelines
The Appraisal of Guidelines for REsearch & Evaluation II (AGREE II) is the most widely used tool to appraise the quality of CPGs [13], [14], [15]. The AGREE II tool was developed to assess “conventional” or de novo guidelines, and it has been suggested that in a pandemic scenario its standards may be too strict and impossible to meet [16]. We disagree. The AGREE II is useful for assessing rapid guidelines: it provides a blueprint for highlighting the strengths and limitations of a guideline, an estimate of trustworthiness, and a mechanism for public health decision-makers, clinicians, and health-system leaders to gain a better sense of the likely impact on patients and populations and of successful implementation [14,15]. Using the AGREE II to assess rapid guidelines does not presume that those with lower scores (than perhaps what would be possible in non-pandemic times) are not useful, but it can serve to manage expectations and inform guideline updates or revisions. Moreover, an assessment of the quality of rapid guidelines before the pandemic showed that a good AGREE II score is possible even for guidelines developed in a very short time frame [1,17]. Therefore, even in emergency situations, the development of trustworthy recommendations with the highest possible methodological standards is achievable and should be a goal of any guideline development or adaptation process, especially when the stakes are high.
Methodological problems
The methodological problems described above do not differ from those found in quality assessments performed for other diseases before the pandemic. Using different thresholds on the AGREE II rigour of development or overall domain scores as criteria for defining a high-quality guideline, previous reports have highlighted that most guidelines are of limited to poor quality. Thus, in the context of the pandemic, a unique scenario with extreme pressure to develop guidance, it is not surprising that the quality of COVID-19 guidance ended up being suboptimal.
However, additional factors worsened the situation in the pandemic. First, at the beginning of the pandemic there was a very large degree of uncertainty about the disease; in this situation, recommendations provided by well-known experts would be easily adopted and implemented by many stakeholders regardless of the quality of the process or the evidence that underpinned them. Second, the disease has high lethality among some population groups. This increased the anxiety of developers and clinicians (among other decision-makers), which may have led them to recommend interventions regardless of the lack of evidence of safety and efficacy. In some cases, so-called "compassionate use" became a justification for the use of interventions of unknown safety and efficacy [18]. Thus, the pandemic became an excuse for applying non-systematic shortcuts to develop guidelines in a short time, with the risk of recommending ineffective or harmful interventions.
Duplication of efforts
Although quality is a significant methodological problem, duplication of efforts should not be overlooked. Duplication has been a significant problem for research and evidence synthesis processes during the pandemic [19,20]. While guidelines, like health technology assessments (HTAs), are unique in that recommendations need to be tailored to specific contexts and may not be applicable to all jurisdictions, unnecessary duplication is not desirable, especially if high-quality guidelines on the same topic exist. Duplication of efforts results in guideline organizations, professional societies, and governments struggling simultaneously to evaluate and analyze the same evidence [21]. This, in turn, wastes money and time and delays the release of, in most cases, similar recommendations. This can be taxing in contexts where the impact of COVID-19 is particularly acute and exaggerated by complex challenges, such as in low- and middle-income countries (LMICs). Even in the often nuanced and unique settings of LMICs, where contextualization of the evidence is required and context-specific guidelines are needed, some degree of duplication could be avoided. The outputs of the evidence synthesis process required in any guideline development process (eg, GRADE evidence profiles and Evidence-to-Decision [EtD] judgments) should be shared among guideline organizations, which could then benefit from shorter and less expensive development processes.
Potential solutions
We have witnessed how the international guidelines community was not prepared to deal with a situation like the COVID-19 pandemic. While finding low quality in the COVID-19 CPGs is not surprising, it was avoidable, and we need to learn from our mistakes and experiences. Potential solutions and lessons for future emergency situations and scenarios where rapid guidelines are needed can be summarized as: i) developing clear methodological guidance and templates for rapid guidance and systematic reviews; ii) reducing duplication of efforts by encouraging adoption/adaptation when possible; iii) enhancing CPG registration and collaboration; iv) enhancing coordination with evidence synthesis teams; v) developing and maintaining appropriate evidence synthesis repositories; and vi) strengthening CPG editorial processes. Some of these issues have been discussed at length by the CPG community for years with insufficient progress toward a solution; the COVID-19 pandemic amplified the consequences of this lack of progress, and we should take the opportunity to improve and prepare for future emergency scenarios.
Clear methodological guidance for rapid guidelines development
Methods for CPG development have advanced substantially in recent decades. Methods and tools such as Guidelines 2.0 [22], GRADE [23], and AGREE [14] have boosted this development. However, before the pandemic there was not enough methodological guidance on how to develop evidence-based guidance rapidly while maintaining rigor and trustworthiness. Before 2020, few resources described the methodological elements of developing rapid and trustworthy guidelines or provided guidance for their development [1], [2], [3]. With the pandemic, some guidance has been published. For instance, recent papers have provided insights and guidance on how to develop trustworthy recommendations using GRADE as part of an urgent response [17,24] or on methods for conducting rapid systematic reviews [25]. We agree that applying high methodological standards in the development of evidence-based guidance is even more important in times of crisis [26], with the overall aim of balancing rigor and speed. The availability of this guidance should discourage the development of low-quality guidelines under the rationale that high methodological standards cannot be achieved in emergency situations. Explicit and transparent descriptions of the methods used, and of steps skipped or shortcuts taken in response to the urgency, are critical to ensuring that users understand the strengths and limitations of the recommendations they plan to implement.
Encouraging adoption and adaptation
Methodological approaches for adopting or adapting CPGs, with the aim of developing recommendations more efficiently, have been developed in the last decade. Approaches such as ADAPTE [27], CAN-IMPLEMENT [28], and GRADE-ADOLOPMENT [29], among others [30], can be applied to reduce duplication and efficiently develop recommendations. During the pandemic, for instance, identifying high-quality guidance for specific questions could facilitate the work of organizations planning to develop a new CPG. Adapting those CPGs can provide high-quality, trustworthy, and contextualized recommendations in a short time frame. Moreover, identifying high-quality CPG recommendations from trustworthy organizations that are also considered implementable and feasible in a specific context can lead decision-makers to adopt or endorse some or all of the recommendations without major changes. The benefit of this approach is evident, as users can have trustworthy recommendations almost immediately.
Toward reducing duplication and research waste, and supporting the demand for evidence, COVID-END has produced useful resources for evidence synthesis and CPGs/HTAs, including highlighting various tools, technologies, and CPG adaptation examples (https://www.mcmasterforum.org/networks/covid-end/resources-for-researchers/supports-for-guidance-developers). This resource facilitates access to useful information and tools for guideline developers, adapters, and users that will enhance methodological decisions and may facilitate collaboration.
Moreover, the COVID-19 Recommendations and Gateway to Contextualization, or eCOVID-19 RecMap (https://covid19.recmap.org/), initiative was recently launched [31]. It provides recommendations from CPGs that have been assessed with the AGREE II tool, and its content is routinely updated.
It facilitates the identification of the best recommendations for specific questions to be adopted and used, and it also provides a platform that facilitates adaptation and contextualization through the GRADE-ADOLOPMENT framework [29]. This initiative has the potential to facilitate the adoption, adaptation, and development of contextualized recommendations and, therefore, to reduce duplication and enhance the quality of future CPGs.
Encouraging registration and collaboration
The high number of guidelines produced in a short time frame during 2020 and 2021 may reflect a low degree of collaboration among CPG organizations. Collaboration among CPG developers from different organizations, with varied scopes and structures and from different countries, may be a challenge, even more so in the context of an emergency. However, collaboration may allow organizations to identify areas of common interest, distribute work (eg, systematic reviews, questions), exchange evidence and information, and, in some cases, develop joint guidance [32]. This can reduce duplication of efforts, decrease development time, and make the process faster and more efficient. Registering guidelines, for instance, is an initiative that may facilitate the identification of groups working on specific guideline topics or questions, and it can be the start of a collaboration between organizations to reduce duplication [33,34] and promote guideline adoption or adaptation. Existing guideline registries include the International Practice Guideline Registry Platform (www.guidelines-registry.org/), the Australian Clinical Practice Guidelines portal (www.clinicalguidelines.gov.au/register), and the Guidelines-International-Network portal (https://guidelines.ebmportal.com/). Contacting groups working on similar guidelines identified in these registries may lead to sharing outputs or summaries among developers, such as summary-of-findings tables, risk-of-bias assessments, or GRADE Evidence-to-Decision (EtD) frameworks, which would make development processes more efficient.
Strengthening guidelines editorial processes
Guidelines are published in scientific journals or on the official websites of developing and endorsing organizations. Regardless of the publication venue, strengthening the editorial and peer review process may help increase the quality of future rapid guidelines. For instance, the Enhancing the QUAlity and Transparency Of health Research (EQUATOR) network [35] recommends the use of the AGREE reporting checklist and the RIGHT statement to support reporting [36,37]. These checklists may improve guideline reporting and enhance their methodology and trustworthiness. Additionally, as described above, journal editors' encouraging prior registration, as an expectation of guidelines seeking publication, would likely improve quality and transparency.
Moreover, another way to improve guidelines is through a rigorous external review process including both a content and a methodological review. This process may provide an opportunity to identify evidence that has not been previously included and increases accountability among CPG panels [38]. Encouraging external reviewers to use established methodologies to review the draft CPG (eg, using the AGREE II tool) might benefit the final CPG manuscript. We have no data to evaluate the peer review process of rapid guidelines developed during the pandemic, but considering the low quality that has been described, it is very likely that many guidelines had a review of their content but scarce or no review of their methodology.
Coordination with evidence synthesis teams
Evidence synthesis processes have faced challenges similar to those described for CPGs: low quality and duplication of efforts have also been reported during the pandemic. Nonetheless, the pandemic has also brought some benefits, such as advances in living systematic review methods. We witnessed international collaboration in the production of living reviews, for example, those focused on vaccine effectiveness [39] or those linked to living guidelines [40], as is the case for the living review and network meta-analysis on pharmacological treatments [41]. Facilitating coordination between evidence synthesis teams and CPG developers streamlines the process, makes it more efficient, and may reduce duplication of efforts.
Developing and maintaining appropriate evidence synthesis repositories
The pandemic also saw the rise of international efforts and cooperation to create and maintain repositories of relevant evidence resources. For example, the COVID-END initiative (https://www.mcmasterforum.org/networks/covid-end) has developed a repository of ‘best evidence syntheses’, many of which are regularly updated. Guideline developers in different contexts may benefit from these repositories to inform their guidelines. As a result, the CPG process can be expedited, and recommendations can benefit. Further research on how CPG developers used these repositories is warranted.
Conclusion
We recommend promoting more inclusiveness and collaboration in CPG projects among decision-makers and methodologists, using formal evidence-based methodologies for de novo development or adaptation of rapid guidelines, especially for high-priority, urgent, and emergent health topics like COVID-19. Agreeing on the best methodology for rapid guideline development, encouraging adaptation when possible, registering CPG projects, strengthening the editorial process, enhancing coordination between evidence synthesis and CPG development or adaptation teams, and maintaining and using evidence synthesis repositories are some actions that may facilitate collaboration and reduce duplication so that CPGs can be developed more efficiently.
This work did not have any external financial support.
Disclosures
IDF, MB, and JNL are part of the AGREE collaboration. IDF, MB, and JNL are co-investigators and principal investigator, respectively, of the COVID-END Canada initiative, which is funded by the Canadian Institutes of Health Research.