Lucie Perillat1, Brian S Baigrie2,3. 1. Faculty of Arts and Science, University of Toronto, Toronto, Ontario, Canada. 2. Institute for the History and Philosophy of Science and Technology, University of Toronto, Toronto, Ontario, Canada. 3. Institute of Health Policy, Management and Evaluation, Dalla Lana School of Public Health, University of Toronto, Toronto, Ontario, Canada.
Abstract
RATIONALE, AIMS, AND OBJECTIVES: One of the sectors challenged by the COVID-19 pandemic is medical research. COVID-19 originates from a novel coronavirus (SARS-CoV-2) and the scientific community is faced with the daunting task of creating a novel model for this pandemic or, in other words, creating novel science. This paper is the first part of a series of two papers that explore the intricate relationship between the different challenges that have hindered biomedical research and the generation of scientific knowledge during the COVID-19 pandemic. METHODS: During the early stages of the pandemic, research conducted on hydroxychloroquine (HCQ) was chaotic and sparked several heated debates with respect to the scientific methods used and the quality of knowledge generated. Research on HCQ is used as a case study in both papers. The authors explored biomedical databases, peer-reviewed journals, pre-print servers, and media articles to identify relevant literature on HCQ and COVID-19, and examined philosophical perspectives on medical research in the context of this pandemic and previous global health challenges. RESULTS: This paper demonstrates that a lack of prioritization among research questions and therapeutics was responsible for the duplication of clinical trials and the dispersion of precious resources. Study designs, aimed at minimising biases and increasing objectivity, were, instead, the subject of fruitless disputes. The duplication of research efforts, combined with poor-quality research, has greatly contributed to slowing down the creation of novel scientific knowledge. CONCLUSIONS: The COVID-19 pandemic presented challenges in terms of (1) finding and prioritising relevant research questions and (2) choosing study designs that are appropriate for a time of emergency.
On March 11, 2020, what was first described as cases of pneumonia of unknown cause originating from Wuhan was labelled as a pandemic by the WHO. As the COVID‐19 pandemic progresses, nations and supranational organizations face a multitude of challenges that impact every facet of society. One of the sectors challenged by the COVID‐19 pandemic—and the focus of this paper—is medical research. COVID‐19 originates from a novel coronavirus (SARS‐CoV‐2) and the scientific community has been faced with the daunting task of creating a novel model for the COVID‐19 pandemic or, in other words, creating novel science (ie, knowledge that is unexpected in light of received scientific opinion).
Since January 2020, researchers have attempted to uncover the origin of the virus and its mechanism of replication, and have rapidly developed diagnostic tools and a number of vaccine candidates that are still under review. Building on the experiences of past epidemics—specifically, the 2014‐2016 Ebola outbreak—the WHO identified research as an ethical imperative and an essential part of the response to health emergencies.
Research is equally essential for generating a knowledge base about the present pandemic, as well as future global health challenges. This newfound awareness of what is now characterized as “epidemic preparedness” can be traced back to the 2003 SARS outbreak. The last Ebola epidemic represented a turning point in epidemic preparedness efforts: following this outbreak, the pace at which policymakers and academics developed tools to address the next health emergency increased markedly.
The creation of the term ‘Disease X’, representing the threat of a pandemic caused by a currently unknown pathogen, as well as the development of initiatives, such as the WHO R&D Blueprint and the Coalition for Epidemic Preparedness Innovations (CEPI), are evidence that there has, indeed, been a shift in consciousness. While the COVID‐19 pandemic response has evidently been informed and shaped by these inter‐epidemic efforts, generating valuable scientific knowledge during an emergency remains a challenge. In this context, one candidate drug, hydroxychloroquine (HCQ), sparked particular interest among members of the scientific community and will serve as a case study for this paper. HCQ is an antimalarial drug, whose toxicity profile is well‐known for approved conditions, such as rheumatoid arthritis and systemic lupus erythematosus. It was thought that HCQ could inhibit the pH‐dependent steps of SARS‐CoV‐2 replication.
So far, and especially during the early stages of the pandemic, research conducted on this drug has been chaotic and has sparked several heated debates with respect to the scientific methods used and the quality of knowledge generated. In the context of the uncertainty and urgency associated with the COVID‐19 pandemic, two strategies, which Angus calls “exploitation” and “exploration”, seem to be in tension with one another.
According to Angus, “exploitation refers to acting on current knowledge, habits, or beliefs despite uncertainty. This is the ‘just do it’ option: give various therapies (eg, chloroquine) to affected patients based on current knowledge or a hunch. Exploration refers to actions taken to generate new knowledge and reduce uncertainty, for example, testing therapies in an RCT. This is the ‘must learn’ option. Currently, these approaches are framed as a choice: do something (treat the patient) or learn something (test the drug).”
(p.1895) In other words, some prefer to take action quickly despite uncertainty, while others choose to wait for robust evidence before taking any action. The effort to find the right balance between the two underlies most of the challenges that have hindered medical research. Different authors have started to identify some of these challenges, such as patient inclusion in clinical trials, data sharing, publication ethics, and research waste.
These articles, which tend to be short columns or editorials, typically focus on very specific issues. However, since these challenges tend to compound one another, it is also enlightening to look at them from a broader perspective and examine their intricate relationships.

In the context of COVID‐19 and medical research, the question at hand is the generation of a valuable and actionable body of novel scientific knowledge in a relatively short timeframe. The literature suggests that a set of specific issues often complicates the generation of knowledge, regardless of whether research is conducted in a time of emergency. These issues are:

Inappropriate research questions and study designs. Chalmers and Glasziou argue that “choosing the wrong questions for research” (p.86) and “doing studies that are unnecessary, or poorly designed” (p.87) result in research waste (ie, scientific knowledge that does not have practical value or is not translated into practice).

Data collection and sharing. Data collection and sharing presented a challenge during the H5N1 outbreak in Indonesia, the 2015 Zika outbreak, and the 2014‐2016 Ebola outbreak, among others.

While undoubtedly there are several ways to examine this issue, this paper will take the position that at least four elements are needed to generate valuable scientific knowledge: (1) relevant research questions, (2) adequate and rigorous study designs, and appropriate ways to (3) evaluate and (4) report newly acquired knowledge. This first part of a larger study will examine challenges presented by the COVID‐19 pandemic in terms of the first two elements. A follow‐up paper will turn to the third and fourth elements.
PRIORITISING RELEVANT RESEARCH QUESTIONS
Ensuring that research questions are relevant to the COVID‐19 response is the first step in the generation of valuable knowledge. Without relevant research questions (including the identification of appropriate populations, interventions and outcomes), actionable findings cannot be generated. The difficulty is that it is often hard to identify a clear, testable and relevant hypothesis at the beginning of an outbreak, when information about the pathogen is scarce and fleeting.
Prioritising relevant research questions during a pandemic is crucial and can be justified on both ethical and practical grounds.
Ethical and practical justifications
Following the 2014‐2016 Ebola outbreak, seven principles guiding research during health emergencies were identified. These principles are outlined in the National Academy of Medicine (NAM) report: (1) scientific and social value, (2) respect for persons, (3) community engagement, (4) concern for participant welfare and interests, (5) favourable risk‐benefit balance, (6) justice in the distribution of benefits and burdens, and (7) post‐trial access.
The first principle, which is the one most pertinent to this paper, is formulated as follows: “A clinical study's value depends on the quality of the scientific information produced and the relevance of the information to addressing public health or clinical issues”**. (p.477) However, the criteria to determine whether one study has more value than another remain unclear. According to the NAM report, the information produced by a trial must justify the risks and the allocation of resources and be of sufficient quality to inform decisions. A trial must also address “an important clinical question that cannot be rapidly answered by other means”. (p.390) Yet relying on these criteria to prioritize research questions during a pandemic might be unsatisfactory, since the way these criteria are to be operationalized was never explicitly outlined. A research question can be considered irrelevant if there is insufficient evidence warranting the investigation of the hypothesis or if it is already under investigation. In step with Chalmers and Glasziou,
this paper takes the position that patients and clinicians should be involved in the prioritization process so that their needs and the questions under investigation are better aligned. While this is crucial to facilitate research, how to accomplish this goal in a time of emergency has yet to be theorized, much less put into practice.

From a practical perspective, and even under normal circumstances, choosing an inappropriate research question results in a waste of financial and physical resources.
The 2014‐2016 Ebola epidemic demonstrated that prioritising research questions is crucial to avoid overwhelming clinical networks.
The tension between research and care is often associated with a high cost, and is especially salient during health emergencies.
Healthcare workers have continuously been under pressure because of the growing number of COVID‐19 patients, the risks of infection, the lack of equipment, and the pre‐existing frailties of healthcare systems. Until research and care become integrated, every trial runs the risk of being a burden for the healthcare system, even more so if the research question is not directly relevant to the pandemic response. Ideally, during a pandemic, funding should be available for external research teams so as to alleviate the clinical staff's workload.

During the 2014‐2016 Ebola outbreak, researchers investigated a large number of therapeutics for which the available evidence was very limited. The WHO staff, research funding agencies, and ethics boards were overwhelmed by the volume of proposals. As such, the authors of the NAM Report recommended that “in the event of a rapidly progressing outbreak it is critical to create a mechanism to prioritize investigational agents for study and limit the conduct of the clinical trials to a small number of products, focusing on those with the most promising preclinical or human clinical data, in order to maximize the likelihood that meaningful results will be generated.”
(p.46) The lesson from these considerations, then, is that the pursuit of irrelevant research questions can be explained by an absence of (1) research prioritization and (2) mechanisms to avoid the duplication of research efforts. In the next section, this paper will examine whether these two issues have been adequately addressed since the last Ebola outbreak.
Absence of prioritization
Building on experiences from past outbreaks, the 2016 WHO R&D Blueprint recommended developing a research roadmap for each new epidemic, as well as Target Product Profiles for the corresponding pathogens. On March 12, 2020, the WHO published a Research Roadmap for COVID‐19,
which purports to identify knowledge gaps and prioritize urgent questions. The working group concluded that the following nine areas require particular attention
: (1) virus natural history, transmission and diagnostics; (2) virus origin and management measures at the human‐animal interface; (3) epidemiological studies; (4) clinical management; (5) infection prevention and control; (6) candidate therapeutics R&D; (7) candidate vaccines R&D; (8) ethics considerations for research; and (9) social sciences in the outbreak response.

This broad list covers most, if not all, research directions and does not provide any sort of ranking. Additionally, the WHO has no international jurisdiction and provided this list only as a recommendation. Regardless of whether this prioritization is considered authoritative, there seems to be no proportionality between the WHO's recommendations and research efforts since the onset of the pandemic. Indeed, the diversity of areas prioritized by the WHO has not been reflected in practice: what we have witnessed is a striking emphasis on the development of therapeutics and vaccines, with surprisingly little attention given to non‐drug interventions, which, interestingly, represent the primary response to COVID‐19.
It is arguable that efforts should not be exclusively focused on pharmaceutical interventions, especially given the experiences of past epidemics (with the possible exception of smallpox), which have shown that vaccines and therapeutics represent the least promising (and the most time‐consuming) options. Research on transmission and mitigation strategies, while not as lucrative, is equally crucial for protecting populations.
Until legal bases and incentives are created to encourage a plurality of research objectives, this issue will most likely remain.

To address the NAM's recommendation to limit the number of therapeutics investigated, WHO working groups started, as early as January 24, 2020, to work on therapeutics prioritization. These working groups established a dozen criteria (with preclinical efficacy in non‐human primates, safety profiles from non‐clinical studies, and quality of manufacturing as mandatory criteria) and generated a shortlist of around 25 candidate drugs. A few months later, by April 2020, a new list included over 150 therapeutics (or combinations of therapeutics),
which appears counter‐productive with respect to their first prioritization efforts. Moreover, there is a discrepancy between what has been prioritized—and the evidence behind it—and what is being studied. As of January 16, 2021, 272 of the 2409 trials tested HCQ, whereas only 40 tested Remdesivir.
However, the WHO stated in January 2020 that “Remdesivir was considered the most promising candidate based on the broad antiviral spectrum, the in‐vitro and in‐vivo data available […] and the extensive clinical safety database.”
(p.9) One of the reasons why such a high number of studies were performed on HCQ compared to Remdesivir is the relatively low cost of (and ease of access to) HCQ. Carrying out a study on Remdesivir, by contrast, involved making arrangements with Gilead. The opportunity cost is another essential determinant of which research questions are pursued. At first sight, HCQ was a promising candidate therapeutic given its low cost and wide availability, which also made it an attractive investigational treatment for researchers.
Duplication of research efforts
Redundancy in research efforts results in the dispersed allocation of scarce resources (studies compete for hospital infrastructure, staff, funding, and patient base), which slows down the creation of novel scientific knowledge. This challenge is specific to the COVID‐19 pandemic, since relatively few trials were conducted during past epidemics (none during the 2003 SARS outbreak, 18 during the 2014‐2016 Ebola epidemic). Patient enrolment is challenging during a health emergency and, by limiting the number of trials, as suggested in the NAM report, the chances of enrolling enough patients and reaching definite conclusions are maximized.
By the time a trial starts, the number of patients admitted to the ICU has often decreased substantially (due to the implementation of non‐pharmaceutical interventions), making it difficult to reach a pre‐specified sample size.
Part of the difficulty is that obtaining ethics approval takes time. During the 2003 SARS outbreak in Toronto, the 18‐day delay between the official beginning of the outbreak and the ethics approval of the first clinical trial (which, ultimately, was not conducted) resulted in a loss of 60% of the patients who could have been enrolled.
To address this issue, international organizations, as well as scholars, developed a system of expedited ethics reviews.*** In the context of COVID‐19, the issue is compounded by the difficulties of approving and implementing multi‐site protocols, given the differences in national resources and healthcare systems. This has posed numerous challenges to the rapid launching of multi‐site trials, such as SOLIDARITY and DISCOVERY (the French‐led arm of SOLIDARITY).
By overwhelming ethics committees and regulatory bodies, the duplication of trials has exacerbated the difficulties associated with often‐excessive red tape.

Mechanisms to limit the number of trials allowed to proceed have not been established since the last Ebola outbreak. While it was reasonable, at the beginning of the pandemic, to expect that HCQ would be tested as a cure, as prophylaxis, and in combination with other therapeutics, such a high number of studies (272) was not justified. Ideally, trial registration should provide information on which trials are in progress and disincentivize duplication. However, this rarely happens in practice, especially considering the strong academic and financial incentives that have been in place since the beginning of the outbreak. There is no legal basis for an international body to examine all trial proposals and determine which trials are allowed to proceed. The R&D Blueprint acknowledges this issue, stating that to avoid “unnecessary duplication […] appropriate incentives and other measures” can be implemented
(p.11). However, there is no additional information on what those incentives might be. A few platforms, such as the Trial Innovation Network, SMART IRB or the COVID19 CP, aim to create incentives and facilitators for collaboration at the clinical level. SMART IRB describes itself as: “a platform designed to ease common challenges associated with initiating multi‐site research.”
The other two platforms expedite approval for proposals that create multi‐site collaborations. Unfortunately, their lack of exposure, partnering institutions, and resources explains the persistence of this issue.

While the objective should be to minimize the duplication of research as much as possible, it is clear that duplication is sometimes desirable, especially when the scientific community wishes to have higher confidence in research findings (through the replication of studies). In the context of this pandemic, the duplication of research on vaccines is, for example, both appropriate and desirable, given the need for equitable access to vaccines that are suitable for a range of populations, settings, and storage requirements.
However, having different research groups chasing the same irrelevant research question should be avoided.

While lessons learned from the last Ebola outbreak helped researchers prioritize research questions and identify candidate therapeutics, the duplication of research remains a problem. This issue is compounded by the large number of researchers who want to work exclusively on vaccine and drug development. To address these challenges, the priority is to clearly define what a “relevant” research question is and to strengthen coordination efforts. While these two steps are crucial in facilitating the generation of valuable scientific knowledge during health emergencies, they will not be sufficient unless behaviours and mindsets also change.
IDENTIFYING APPROPRIATE DESIGNS
Since the beginning of the pandemic, researchers seem to have embarked on a quest to find a “miracle” study design—but they disagree on what that design should be. As such, researchers advocate for what they consider to be the best methodological approach while condemning all others. Past health emergencies have also witnessed several disputes regarding how clinical trials ought to be designed, thereby further delaying their launch.
The 2009 H1N1 and the 2014‐2016 Ebola outbreaks revealed the need for a portfolio of designs best suited to a health emergency. This heightened awareness prompted scholars to develop new designs meant to address the various challenges engendered by a pandemic. These initiatives resulted in the development of the SOLIDARITY and REMAP‐CAP trials, launched on March 18 and April 9, 2020, respectively.
With respect to research conducted on HCQ, the question of study designs sparked lengthy debates among the scientific community, politicians and the public alike. Most parties to this debate acknowledge that the quality of the findings generated has been poor. Glasziou, Sanders and Hoffman bemoan “a deluge of poor quality research [that] is sabotaging an effective evidence based response.”
(p.1) While it is outside the scope of this paper to conduct a systematic review of all the studies on HCQ and determine its efficacy, it might prove useful to outline some of the characteristics of these studies, such as the number of participants, the type of study design, the publication format, and the study's conclusions and limitations. For the purpose of this paper, studies on the efficacy of HCQ as a treatment (not prophylaxis) published between January and July 17, 2020 were selected (35 studies). By July 17, the general consensus was that HCQ was not an effective treatment for COVID‐19 (most trials, including the WHO SOLIDARITY trial, had removed their HCQ arm, and emergency authorizations had been revoked). These characteristics are summarized in Appendix A and shed some light on the research conducted since the beginning of the COVID‐19 pandemic:

Lack of methods transparency: the most striking example is Gao and colleagues' letter of declaration of results.

Limitations and biases: all 35 studies have been widely criticized and considered methodologically very biased by several commentators.

Inconsistent results: 13 studies show benefits of HCQ, 18 report no significant benefit, and four report increased risks.

Significant number of retracted studies: three studies on HCQ were retracted, and another was the subject of a statement of concern from Elsevier and is currently under investigation.

Large amount of non‐peer‐reviewed articles: only 15 of these studies were peer‐reviewed.

Lack of adverse event reporting: 17 studies did not formally report adverse events, even though adverse events are one of the major concerns clinicians have when prescribing HCQ. Indeed, HCQ is known to cause renal, hepatic and cardiovascular adverse effects. COVID‐19 patients in the ICU are more likely to have co‐morbidities (including renal, hepatic and cardiovascular dysfunctions) and to be given high doses of HCQ, thereby increasing the risks of adverse events. Starting in July 2020, studies tended to report adverse events more systematically.

This paper will now analyse the three major strategies that have been suggested since the beginning of the pandemic (traditional designs, Big Data and REMAP‐CAP) to determine if a “miracle” design can, indeed, be found.
Traditional designs
A vocal constituency of the scientific community advocates for maintaining the standards and methods generally used outside of health emergencies.
Simple designs that have been frequently used in the past are thought to help preserve scientific rigour without the prospect of overwhelming the clinical network.
While the first few completed clinical trials on HCQ came from China, the antimalarial drug was placed under the international spotlight by two French studies published in March 2020. Despite their methodological limitations (see Appendix A), the promotion of the drug by Raoult (one of the authors of these French studies) and by politicians sparked hope and controversy among the population, with the result that another team of researchers decided to replicate the study by performing a prospective case series of 11 patients. However, this study did not yield conclusive findings.
This is a typical example where researchers were obligated to replicate knowledge generated instead of building on it, thereby slowing down research efforts. At the time, Raoult claimed that a situation of emergency is a licence to abandon the scientific method and a call to action (ie, population‐wide distribution of HCQ).
As such, he agitated against RCTs as being unduly time‐consuming and contended that including placebos and control groups is unethical.
This debate can be traced back to the last Ebola outbreak but seems to have been settled, for the most part, since then.**** Regardless, clinicians might rightfully be torn between attempting to cure patients with what is available and continuing unabated with their research. However, while Raoult referenced the Hippocratic Oath to justify giving HCQ to every patient, regardless of the risks, it seems reasonable to respond that this very same Hippocratic Oath (“first do no harm”) mandates against imprudent, population‐wide prescriptions of investigational drugs. This brings us to reflect on off‐label prescribing. This practice is widespread
but 73% of off‐label drugs are supported by very poor or no scientific evidence.
Prescribing off‐label drugs can also undermine research efforts since data cannot be collected on patients who are prescribed the investigational treatment.
The 2014‐2016 Ebola outbreak prompted the WHO to create MEURI (Monitored Emergency Use of Unregistered Interventions), which stipulates that the distribution of investigational treatments outside of clinical trials is only allowed if the following criteria are met: (1) lack of effective treatment, (2) absence of clinical trials, (3) availability of efficacy and safety data, (4) ethical approval, (5) implementation of risk mitigation strategies, (6) acquisition of patients' informed consent, and (7) consistent monitoring of patients and sharing of results with the scientific community.

Since criteria 2 and 7 are not met in the context of this pandemic, this paper takes the position, in agreement with Caplan and colleagues, that while “there may be a role for MEURI in COVID‐19, [the] unconstrained, unevaluated use of therapeutics under the guise of compassionate use or panicked rhetoric about right‐to‐try must be aggressively discouraged in order for scientists to learn what regimens or vaccines actually work”
(p.2753). Following these two politicized studies, which polarized public and scientific opinion, it became evident to many that only an RCT would provide definite conclusions regarding the efficacy of HCQ. However, the preliminary results of the UK RECOVERY trial (an adaptive factorial trial), released on June 5, 2020, suggested that an RCT is far from bringing all the answers. The Data Monitoring Committee (DMC) stated that the interim results, based on 4674 patients, revealed “no beneficial effects of hydroxychloroquine” and that they had decided to “stop enrolling participants to the hydroxychloroquine arm […] with immediate effect”.
Nevertheless, researchers were quick to criticize these findings, even though they came from an RCT that, supposedly, ranks high in the evidence hierarchy.
The first concern regarding the trial was related to the unusually high dosage of HCQ. Indeed, patients received 2400 mg of HCQ in the first 24 hours, which is well above the dosage recommended by the FDA in the Emergency Use Authorization (800 mg).
While this dosage decision is explained in the protocol on the basis of available data on the IC50 for SARS‐CoV‐2 (the concentration of a substance needed in plasma to inhibit the virus by 50%),***** it remains unclear whether this dosage was warranted and did not pose an unreasonable risk to patients. The population of patients selected was also questioned. Some argued that the patients who received HCQ in this trial (mainly severely ill patients) would not benefit from the treatment. This is because COVID‐19 is a three‐stage disease with an initial viral replication phase, followed by a pulmonary phase and then a “cytokine storm” causing tissue damage (when patients are in the ICU).
Giving HCQ, an antiviral, would, thus, only be beneficial for patients who were still in the early stages of the disease. This example shows that results from a seemingly well designed, large RCT can be criticized because the trial's hypothesis is not relevant given the available evidence. This consideration reiterates the importance of a relevant question: regardless of the type of study design, if the research question (population and intervention, in the case of RECOVERY) is not appropriate, then research findings will not be generalizable to the intended target population.
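For readers unfamiliar with the IC50, its role in dosing arguments of the kind made in the RECOVERY protocol can be sketched with the standard Hill inhibition relation from basic pharmacology (a generic textbook formula, not one taken from the protocol itself; n is the Hill coefficient, often taken as 1 in the simplest case):

```latex
% Fractional inhibition f of viral replication as a function of
% plasma drug concentration [C], where IC50 is the half-maximal
% inhibitory concentration and n is the Hill coefficient.
f\bigl([C]\bigr) = \frac{1}{1 + \left(\mathrm{IC}_{50}/[C]\right)^{n}}
% When [C] = IC50, f = 1/2: replication is inhibited by 50%.
```

On this picture, a dosing regimen is defended by arguing that the plasma concentration it achieves sits comfortably above the in‐vitro IC50; the dispute over RECOVERY's 2400 mg loading dose can be read as a disagreement about how far above that threshold it is safe to aim.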
Big data and electronic health records
Angus describes the advantages of using Big Data and Electronic Health Records (EHRs) in clinical research as follows: “The information is relatively inexpensive, generated as a by‐product of patient care (overcoming the cost problem), and both specific to individuals (ie, adequately narrow) and, en masse, descriptive of the entire delivery system (ie, adequately broad). No individuals are randomized, so the ethical issues appear less complex. The richness and immediacy of these new data could allow tailored treatment decisions in real time, overcoming delays in knowledge translation.”
(p.767) Such an approach was used in an observational study published by Mehra and colleagues on May 22, 2020. This study of 96 031 patients concluded that HCQ was associated with a higher risk of mortality and cardiac arrhythmia.
Immediately following this publication, guidelines on the use of HCQ changed dramatically: on May 25, 2020, the WHO suspended all HCQ arms and national trials followed suit.
The French Minister of Health suspended the exceptional authorization he had issued for the use of HCQ in the clinical setting.
The large sample size, which is often—incorrectly—associated with high‐quality findings, was used as justification to make these decisions. However, concerns about the study data were quickly raised, first on Twitter,
and then in an open letter addressed to The Lancet, which outlined 10 concerns, including discrepancies with government data and inadequate statistical adjustments.
Three out of the four authors retracted the study on June 5, 2020,
which led the WHO and national policymakers to resume clinical trials.
In addition to the negative consequences that these contradictory decisions might have had on clinical trials, this study also made COVID‐19 patients on HCQ treatment even more anxious about their prognosis. Thus, generating data quickly is not helpful if the data collection methods are inappropriate or if the data support only limited conclusions, which was clearly the case for the data collected by the COVID‐19 4CE Consortium.
Patient‐level data, if collected adequately and internationally, would yield more generalizable findings than those currently available. Nevertheless, advocates of this approach seem to overlook the numerous challenges that remain to be addressed. First, the question of patient privacy and re‐identification is often seen as a significant barrier to the sharing of EHRs.
Moreover, there are currently few incentives to share clinical data, since there are no mechanisms for academic recognition or data ownership.
REMAP‐CAP
The third approach, advocated by Angus
and others,
is to choose an adaptive design that is “pre‐planned, pre‐approved and practiced”
(p.12) during the inter‐epidemic period. Such a design, REMAP‐CAP, which stands for Randomized, Embedded (into clinical care), Multifactorial, Adaptive, Platform trial for Community‐Acquired Pneumonia (CAP), was developed following the H1N1 outbreak.
Following the approval of the core protocol and the pandemic appendix, the trial was launched in 2016 and, as of January 16, 2021, includes 290 sites in 19 countries.
Enrolment of COVID‐19 patients started soon after the beginning of the pandemic. This design combines elements from a platform trial, upon which multiple research questions can be investigated, and an adaptive trial, which allows for design modifications based on a Bayesian analysis of interim results.
We are told that this design addresses a “disease or condition, rather than a particular intervention”
(p.797), which can be helpful when investigating emerging pathogens, such as SARS‐CoV‐2. The different adaptations used in REMAP‐CAP are:

- Enrichment: population modifications are made if the treatment proves more effective in a subset of the population. According to Angus, this allows for a "precision medicine" approach and a better estimation of the intervention's effects on individual patients.
- Treatment arms: addition or termination of arms based on interim results and simulations.
- Patient allocation: Response‐Adaptive Randomization (RAR) makes participants more likely to be enroled in the more promising arm as evidence accumulates.

According to Angus, REMAP trials are best suited to accommodate the complex web of constraints that a pandemic generates.
RAR tends to shorten the time required to generate conclusive findings and to decrease the number of participants needed,
which minimizes challenges around patient inclusion. By allowing more participants into the more promising arm, RAR might also be a partial response to clinicians' concerns about randomization.
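A rough illustration of how response‐adaptive randomization shifts allocation toward a better‐performing arm is sketched below, using Thompson sampling with Beta posteriors. This is a simplified stand‐in for the idea, not REMAP‐CAP's actual allocation algorithm, and the response rates and patient numbers are invented for the example.

```python
import random

def thompson_allocate(successes, failures, rng):
    """Sample a success probability for each arm from its Beta posterior
    (uniform Beta(1,1) prior) and pick the arm with the highest draw."""
    draws = [rng.betavariate(s + 1, f + 1) for s, f in zip(successes, failures)]
    return draws.index(max(draws))

def simulate_trial(true_rates, n_patients=500, seed=42):
    """Enrol patients one at a time, updating each arm's posterior
    with the observed (simulated) binary outcome."""
    rng = random.Random(seed)
    k = len(true_rates)
    successes, failures, allocations = [0] * k, [0] * k, [0] * k
    for _ in range(n_patients):
        arm = thompson_allocate(successes, failures, rng)
        allocations[arm] += 1
        if rng.random() < true_rates[arm]:  # simulated patient response
            successes[arm] += 1
        else:
            failures[arm] += 1
    return allocations, successes

# Two hypothetical arms with response rates 30% and 60%:
allocations, successes = simulate_trial([0.3, 0.6])
print(allocations)  # the better arm receives the large majority of patients
```

Because allocation depends on accumulating outcomes, most participants end up in the arm that is performing better, which illustrates both the ethical appeal of RAR and why the allocation probabilities change over the course of the trial.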
The organization of the protocol (core protocol and appendices for each new arm) also facilitates ethics approval. However, these designs also have practical and statistical limitations. An adaptive trial requires a tremendous amount of pre‐trial planning and simulations in order to pre‐specify the statistical methods and algorithms used to evaluate interim results.
Given the uncertainty associated with any pandemic, one can question how much of the trial can actually be planned ahead of time. RAR, while admittedly more intuitive, can also result in a population drift.
Participants know that the later they enter the trial, the more chances they have of being allocated to the more promising arm. As such, patients who enrol later are more likely to be healthier since they can afford to wait.
Finally, REMAP‐CAP, like any multi‐site trial, risks having different "standard of care" practices across sites, due to socio‐economic differences. A lack of mechanisms for data harmonization makes it difficult to compare data and generalize results.
"Standard of care" guidelines for COVID‐19 patients might also change over time, as new evidence arises.

While a REMAP design should not be considered flawless, it seems to adequately address some of the challenges imposed by a pandemic, provided, of course, that it is conducted properly. Given the amount of pre‐trial planning required before launching an adaptive trial, it must be designed before the onset of an outbreak, which is why REMAP‐CAP has an advantage over other adaptive trials. If the results live up to expectations, REMAP‐CAP will show that aiming for a personalized approach to medicine and a learning healthcare system is possible, even during a pandemic.
However, results and data from the HCQ arm of this trial have yet to be released,
making it impossible to assert with certainty that it is the most appropriate design for an emergency.

As demonstrated above, all three approaches have very distinct justifications. Advocates of traditional designs value studies conducted at the bedside that do not overwhelm the clinical staff. Relying on EHRs is often considered less time- and resource-consuming. Finally, advocates of the REMAP trial highlight the ethical and practical benefits of removing less promising treatment arms. RECOVERY and REMAP‐CAP (two large, randomized trials) also endorse different values. The rationale behind RECOVERY is to conduct "the simplest [trial] possible": healthcare systems should not be further overwhelmed by a complex protocol.
REMAP‐CAP, on the other hand, while claiming to be embedded into clinical care, offers a very complex protocol and has yet to show how this integration of research and care works.

During the inter‐epidemic period, the question of which design is preferable has been addressed, but discussions have resulted in very few definitive answers. In the context of the COVID‐19 pandemic, members of the scientific community hold widely divergent views on what they consider to be the most appropriate design during health emergencies.
Given the above considerations, the inescapable conclusion is that the quest for a single, perfect design is futile. Instead, to better align the information that clinicians and policymakers need with the information that research generates, two objectives should be pursued: maintaining scientific rigour while embracing methodological pluralism, on the view that the value of a plurality of designs lies in its prospect of accelerating the generation of scientific knowledge.
DISCUSSION
This first paper has demonstrated that the COVID‐19 pandemic has presented challenges to the generation of novel scientific knowledge in terms of (1) finding and prioritising relevant research questions and (2) choosing study designs that are appropriate for a time of emergency. First, a lack of prioritization among research questions and candidate therapeutics has been responsible, at least in part, for the duplication of research efforts and the dispersion of scarce resources. Because research questions have not always matched the needs of clinicians and policymakers, it is critical that the end‐users of research become more actively engaged in the identification of relevant research questions.
The duplication of research efforts, combined with poor‐quality research, has greatly contributed to slowing down the creation of novel scientific knowledge. Efforts remain to be made in at least two areas: (1) finding mechanisms to limit the number of candidate therapeutics being investigated and the number of trials allowed to proceed and (2) facilitating collaboration by creating platforms with more exposure and resources. With respect to study designs, this paper has demonstrated that, during a time of emergency fraught with danger to the public, the scientific community embarked on a quest to find the single most appropriate design. Issues raised during previous health emergencies (around patient inclusion, randomization, and trial adaptability in light of new findings) have led to the creation of promising designs, such as the REMAP‐CAP trial. However, in the context of the COVID‐19 pandemic (and, specifically, research on HCQ), the choice of study designs has been the subject of fruitless oppositions. These oppositions, as well as the overall low methodological quality of studies on HCQ, suggest that methodological rigour and the notion of design complementarity have sometimes been abandoned.
POSTSCRIPT
In a follow‐up paper, we will continue to explore the relationship between the different challenges that have hindered biomedical research and the generation of novel scientific knowledge during the COVID‐19 pandemic. In this second paper, we will turn to the challenges presented by the COVID‐19 pandemic in terms of (3) evaluating evidence for the purpose of making evidence‐based decisions and (4) sharing scientific findings with the rest of the scientific community and the general public. In a time where confusion, uncertainty and fear rule, and where mitigation strategies rely on people's adherence to science‐based guidelines, this second paper will demonstrate the importance of communicating scientific findings, and their limitations, in a clear and transparent manner.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
AUTHOR CONTRIBUTIONS
Lucie Perillat: Conceptualization, Investigation, Writing – original draft preparation, Writing – review and editing. Brian Baigrie: Conceptualization, Writing – review and editing.

Appendix S1 A: Characteristics of the major studies published on the efficacy of HCQ in COVID‐19 patients before June 17, 2020. The studies that were retracted or are under investigation are shown in yellow. The criterion used to determine whether a study systematically investigated adverse events was whether adverse events were listed as a primary or secondary outcome. Studies that investigated the role of HCQ among a population of patients with a disease other than COVID‐19 (eg, patients with rheumatic diseases) were excluded. Studies that exclusively investigated adverse events of HCQ are not included in this table. The limitations included in the table are those commonly found in the literature
and those that were identified by the authors of this paper.