Abstract
The problem of wasteful clinical trials has been debated relentlessly in the medical community. To a significant extent, it is attributed to redundant trials - studies carried out to address questions that can be answered satisfactorily on the basis of existing knowledge and accessible evidence from prior research. This article presents the first evaluation of the potential of the EU Clinical Trials Regulation 536/2014, which entered into force in 2014 but is expected to become applicable at the end of 2021, to prevent such trials. Having reviewed provisions related to trial authorisation, we propose how certain regulatory requirements for the assessment of trial applications can and should be interpreted and applied by national research ethics committees and other relevant authorities in order to avoid redundant trials and, most importantly, preclude the unnecessary recruitment of trial participants and their unjustified exposure to health risks.
Keywords: Clinical trials; EU clinical trials regulation; Research ethics; Research ethics committees; Research redundancy; Systematic review; Trial authorisation; Trial methodology
Year: 2020 PMID: 33115456 PMCID: PMC7592564 DOI: 10.1186/s12910-020-00536-9
Source DB: PubMed Journal: BMC Med Ethics ISSN: 1472-6939 Impact factor: 2.652
An overview of studies measuring the scale of redundancy in RCTs
| Study | Objective | Method | Sample | Results | Conclusions |
|---|---|---|---|---|---|
| Lau et al. (1992) | To demonstrate that ‘searching and monitoring the clinical literature and performing cumulative meta-analyses can […] supply practitioners and policy makers with up-to-date information on emerging and established [medical] advances’. | Cumulative meta-analyses of clinical trials that evaluated 15 treatments and preventive measures for acute myocardial infarction. | Trials conducted between 1959 and 1988 that investigated the use of intravenous streptokinase as thrombolytic therapy for acute infarction. | A consistent, statistically significant reduction in total mortality was achieved in 1973 upon the completion of eight trials involving 2432 patients; 25 subsequent trials, in which 34,542 patients were enrolled, had little or no effect on the odds ratio establishing efficacy. | Clinical trials are ‘part of a continuum, and those that have gone before must be considered when new ones are planned’. |
| Fergusson et al. (2005) | To evaluate the impact of systematic reviews of RCTs on the design of subsequent trials. | Cumulative meta-analyses of all RCTs of aprotinin using placebo controls or no active control treatment. Parameters of collected data included the study primary outcomes, objectives, the presence of a systematic review as part of the background and/or rationale for the study, and the number of previously published RCTs cited. | All RCTs of aprotinin conducted between 1987 and 2002 reporting an endpoint of perioperative transfusion. | 64 RCTs meeting the selection criteria were identified, with trial sizes ranging between 20 and 1784 participants. A cumulative meta-analysis showed that aprotinin significantly decreased the need for perioperative transfusion, stabilizing at an odds ratio of 0.25 by the 12th study, published in 1992. Thereafter, the upper limit of the confidence interval did not exceed 0.65 and results were similar in all subgroups. Citation of previous RCTs was low – on average, only 20% of relevant prior trials were cited. Only 7 of 44 subsequent reports referenced the largest trial, which was 28 times larger than the median trial size. | Investigators evaluating aprotinin ‘were not adequately citing previous research, resulting in a large number of RCTs being conducted to address efficacy questions that prior trials had already definitively answered’. |
| Cooper, Jones, Sutton (2005) | To assess the extent to which Cochrane systematic reviews are taken into account in the design of new trials. | A survey among authors of published studies included in the updated Cochrane reviews. Authors were asked if they had used the 1996 Cochrane or other reviews in designing their trials. | All studies included in the 2002 and 2003 updates of the 1996 Cochrane reviews (overall, 33 Cochrane reviews). | Of 32 authors of eligible studies newly included in the updated Cochrane reviews, 24 responded. Eleven respondents were aware of the relevant Cochrane review at the time of designing the study. In eight cases the design of the new study had been influenced by a review; in two this was the relevant Cochrane review. | Cochrane and other systematic reviews are used in the design of new studies to a rather limited extent. |
| Goudie et al. (2010) | To define the extent to which previous trials were considered in the design of new trials (e.g. in the calculation of the sample size). | The assessment of a sample of RCTs to establish whether authors considered previous trials when designing their own trials. | 27 RCTs published in leading medical journals in 2007. | Only a small fraction of the trials in the analysed sample referenced the relevant meta-analyses and related the results of the trial to previous research. | Previous evidence from trials ‘is not used (or not reported to be used) as extensively as it could be in justifying, designing, and reporting RCTs’. |
| Robinson and Goodman (2011) | To evaluate the extent to which reports of RCTs cite prior trials addressing the same interventions. | Meta-analyses published in 2004 that combined four or more trials were identified; within each meta-analysis, the extent to which each trial report cited the trials that preceded it by more than one year was assessed. | 227 meta-analyses comprising 1523 trials across various health care disciplines published from 1963 to 2004. | Less than 25% of the eligible prior RCTs were cited. The percentage of ‘ignored RCTs [was] increasing as the number of those RCTs increased, [while] the proportion of trials citing no prior evidence stayed constant as the evidence accumulated’. | Further research is needed to explore the explanations for and consequences of the under-citation of earlier research. ‘Potential implications [of under-citation] include ethically unjustifiable trials, wasted resources, incorrect conclusions, and unnecessary risks for trial participants’. |
| Ker et al. (2012) | To assess the effect of tranexamic acid on blood transfusion, thromboembolic events, and mortality in surgical patients. | Systematic review and meta-analysis. | RCTs comparing tranexamic acid with no tranexamic acid or placebo in surgical patients. 129 trials, totalling 10,488 patients, carried out between 1972 and 2011 were included. | ‘A statistically significant effect of tranexamic acid on blood transfusion was first observed after publication of the third trial in 1993. Although subsequent trials have increased the precision of the point estimate, no substantive change has occurred in the direction or magnitude of the treatment effect.’ | ‘Reliable evidence that tranexamic acid reduces blood transfusion in surgical patients has been available for many years. […] those planning further placebo controlled trials should … focus their efforts on resolving the uncertainties about the effect of tranexamic acid on thromboembolic events and mortality.’ |
| Jones et al. (2013) | To examine how systematic reviews of earlier trials had been used to inform the design of new RCTs. | Review of RCTs with regard to the following parameters: the justification of treatment comparison, choice of frequency or dose, selection (or definition) of an outcome, recruitment and consent rates, sample size (margin of equivalence or non-inferiority, size of difference, control group event rate, measure of variability and loss-to-follow-up adjustment), length of follow-up, withdrawals, missing data and adverse events. | The documentation related to RCTs funded under the National Institute for Health Research Health Technology Assessment programme in the UK in 2006, 2007 and 2008, including applications for funding and project descriptions of 48 RCTs. | About half of the examined applications for funding in fact used the cited review to inform the trial design, in particular the selection and definition of the outcomes, the calculation of the sample size and the duration of follow-up. | Guidelines for applicants and funders were proposed as to how systematic reviews can be used to optimise the design and planning of new RCTs. |
| Clarke, Brice, Chalmers (2014) | To provide ‘the most comprehensive collection of cumulative meta-analyses of studies of healthcare interventions’, and to explore that cumulative evidence in the context of unnecessary duplication of research efforts. | A systematic review of the findings of cumulative meta-analyses of all studies examining effects of clinical interventions published between 1992 and 2012 and accessible through PubMed, MEDLINE, EMBASE, the Cochrane Methodology Register and Science Citation Index. | 50 eligible reports including over 1500 cumulative meta-analyses. | Four cumulative meta-analyses have shown ‘how replications have challenged initially favourable results where the early trials were favourable but not statistically significant’. Two cumulative meta-analyses have shown ‘how replications have sometimes challenged initially unfavourable results’ (ibid). 22 cumulative meta-analyses demonstrated that ‘a systematic review of existing research would have reduced uncertainty about an intervention’ (ibid). Some trials were ‘much too small’ to resolve uncertainties exposed by the cumulative meta-analyses (ibid). | ‘… had researchers assessed systematically what was already known, some beneficial and harmful effects of treatments could have been identified earlier and might have prevented the conduct of the new trials. This would have led to the earlier uptake of effective health and social care interventions in practice, less exposure of trial participants to less effective treatments, and reduced waste resulting from unjustified research.’ |
| Habre et al. (2014) | To examine the effect of a 2000 systematic review of interventions preventing pain from propofol injection (the Picard review), which provided a clear research agenda, on the design of subsequent trials; to examine whether the designs of trials that cited the 2000 review differed from those that did not cite it; to establish whether the number of new trials published each year had decreased. | A comparison of the characteristics and design of trials published before and after the 2000 Picard review, which questioned the necessity of conducting further trials to identify another analgesic intervention to prevent pain from propofol injection. Parameters under comparison included blinding methods, the inclusion of a paediatric population, and the use of the known most efficacious intervention as a comparator. | All RCTs investigating interventions to prevent pain from propofol injection in humans conducted and published after the Picard review. | 136 new trials were conducted after the systematic review had questioned the necessity of conducting new studies. Only 36.0% of new trials could be considered clinically relevant, as they used the most efficacious intervention as comparator or included a paediatric population as recommended by the review. | The impact of the Picard systematic review on the design of subsequent research was low. The number of trials published per year had not decreased; the most efficacious intervention was used only marginally. |
| Clayton et al. (2015) | To summarise the current use of evidence synthesis in trial design and analysis; to capture opinions of trialists and methodologists on such use; and to understand potential barriers. | A survey collecting views and experiences on the use of evidence synthesis in trial design and analysis. | 638 participants of the International Clinical Trials Methodology Conference. | The response rate was only 17%. Respondents acknowledged that they had not been ‘using evidence syntheses as often as they felt they should’. | Further research and training on how to synthesise and incorporate results from earlier trials can help ‘ensure the best use of relevant external evidence in the design, conduct and analysis of clinical trials’. |
| Tierney et al. (2015) | To identify the impact of individual patient data (IPD) meta-analyses on subsequent research in terms of the selection of comparators and participants, sample size calculations, analysis and interpretation of subsequent trials, as well as the conduct and analysis of ongoing trials. | Potential examples of the impact of IPD meta-analyses on trials were identified at an international workshop attended by individuals with experience in the conduct of IPD meta-analyses and knowledge of trials in their respective clinical areas. Relevant trial protocols, publications, and Web sites were examined to verify the impacts of the IPD meta-analyses. | 52 examples of IPD meta-analyses thought to have had a direct impact on the design or conduct of subsequent trials. | After screening relevant trial protocols and publications, 28 instances where IPD meta-analyses had clearly impacted on trials were identified. They influenced the selection of comparators and participants, sample size calculations, analysis and interpretation of subsequent trials, and the conduct and analysis of ongoing trials, sometimes in ways that would not be possible with systematic reviews of aggregate data. Additional ways in which IPD meta-analyses could be used to influence trials were identified in the course of the analysis. | IPD meta-analysis ‘could be better used to inform the design, conduct, analysis, and interpretation of trials’. |
| Storz-Pfennig (2016) | To identify and estimate the extent to which potentially unnecessary clinical trials in major clinical areas might have been conducted. | A cumulative meta-analysis and trial sequential analysis of a sample of Cochrane collaboration systematic reviews were conducted to determine at what point evidence was found sufficient to reach a reliable conclusion. Trials published thereafter were considered potentially unnecessary and, therefore, wasteful. Sensitivity analysis was conducted in order to identify whether the findings could be explained by a delayed perception of published findings when planning new trials. | 13 comparisons in major medical fields including cardiovascular disease, depression, dementia, leukemia and lung cancer. | In eight out of 13 comparisons, meta-analysis detected potentially unnecessary research, with between 12 and 89% of all participants enrolled in trials that might not have been needed. In three of these cases with high proportions (69–89%) of potentially unnecessary research, this finding remained unchanged upon the sensitivity analysis. | ‘The reasonableness of claims to relevance of additional trials needs to be much more carefully evaluated in the future. Cumulative, information size based analysis might be included in systematic reviews. Research policies to prevent unnecessary research from being done need to be developed.’ |
| De Meulemeester et al. (2018) | To test the hypothesis that the majority of a sample of recently published RCTs would not explicitly incorporate the scientific criterion of addressing a persisting uncertainty established through a systematic review. | Cross-sectional analysis of all RCTs published in the | 208 RCT articles and 199 protocols met the inclusion criteria. | The majority of RCTs (56%) did not meet the criteria of having a clear hypothesis and demonstrating, through a systematic review, that an uncertainty around that hypothesis exists. | RCTs that do not meet the criteria of having a clear hypothesis and demonstrating, through a systematic review, that an uncertainty around that hypothesis exists can be scientifically, and therefore ethically, unjustified. The authors recommend replacing the criteria of “equipoise,” “clinical equipoise,” and “lack of consensus” with the requirement that RCTs have a clearly stated, meaningful hypothesis around which uncertainty has been established through a systematic review of the literature. |
| Blanco-Silvente et al. (2019) | To examine the strength of the available evidence on efficacy, safety and acceptability of cholinesterase inhibitors (ChEI) and memantine for Alzheimer’s disease (AD); to determine the number of redundant trials following the authorisation of ChEI and memantine as current pharmacological treatments for AD. | A cumulative meta-analysis with a trial sequential analysis, whereby the primary outcomes were cognitive function assessed with the ADAS-cog or SIB scales, discontinuation due to adverse events and discontinuation for any reason. The redundancy of post-authorisation clinical trials was studied by determining the novel aspects of each study on patient, intervention, comparator and trial outcome characteristics. Two criteria of trial futility – lenient and strict – were used. | A total of 63 randomised clinical trials (RCTs) (16,576 patients), including placebo-controlled, double-blind, parallel-design RCTs with a minimum duration of 12 weeks that had investigated the effects of donepezil, galantamine, rivastigmine or memantine in monotherapy or in combination with a ChEI at the doses approved by the Food and Drug Administration or the European Medicines Agency in patients with AD. | It was conclusive that neither ChEI nor memantine achieved clinically significant improvement in cognitive function. In relation to safety, there was sufficient evidence to conclude that donepezil caused a clinically relevant increase in dropouts due to adverse events, whereas the evidence was inconclusive for the remaining interventions. Regarding acceptability, it was conclusive that no ChEI improved treatment discontinuation, while it was uncertain for memantine. The proportion of redundant trials was 5.6% with the lenient criteria and 42.6% with the strict criteria. | The evidence showed conclusively that neither ChEI nor memantine achieve clinically significant symptomatic improvement in Alzheimer’s disease, and that the acceptability of ChEI is unsatisfactory. Although evidence on the safety of pharmacological interventions for AD and acceptability of memantine is inconclusive, no further RCTs are needed as their efficacy is not clinically relevant. Redundant trials were identified, but their number depends on the criteria of futility used. |
| Walters et al. (2020) | To determine to what extent systematic reviews were cited as justification for conducting phase III trials published in high impact journals. | The analysis of all phase III RCTs published between 1 January 2016 and 31 August 2018 in three high impact general medicine journals. | 665 RCTs were retrieved, of which 637 were included, citing in total 728 systematic reviews. | Less than 7% of the analysed RCTs published in the three high impact general medicine journals explicitly cited a systematic review as the basis for undertaking the trial. | Trialists should be required to present to ethics committees relevant systematic reviews demonstrating that the existing evidence for the research question is insufficient. Elimination of research waste is both a scientific and an ethical responsibility. |
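Several of the studies above (Lau et al., Fergusson et al., Ker et al., Storz-Pfennig) rest on cumulative meta-analysis: after each successive trial is added in chronological order, the pooled effect estimate and its confidence interval are recomputed, which reveals the point at which the evidence became conclusive and later trials became redundant. The sketch below is a minimal illustration of that technique using fixed-effect inverse-variance pooling of log odds ratios; the trial counts are entirely hypothetical and are not taken from any study in the table.

```python
import math

def cumulative_meta_analysis(trials):
    """Fixed-effect cumulative meta-analysis of 2x2 trial tables.

    Each trial is (events_treated, n_treated, events_control, n_control).
    Returns a list of (pooled_OR, ci_low, ci_high) after each successive trial.
    """
    results = []
    weighted_sum = 0.0   # running sum of w_i * log(OR_i)
    weight_total = 0.0   # running sum of inverse-variance weights w_i
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c
        log_or = math.log((a * d) / (b * c))   # log odds ratio of this trial
        var = 1/a + 1/b + 1/c + 1/d            # Woolf variance of the log OR
        w = 1 / var
        weighted_sum += w * log_or
        weight_total += w
        pooled = weighted_sum / weight_total
        se = math.sqrt(1 / weight_total)
        results.append((math.exp(pooled),
                        math.exp(pooled - 1.96 * se),
                        math.exp(pooled + 1.96 * se)))
    return results

# Hypothetical trials in chronological order:
# (events in treated arm, n treated, events in control arm, n control)
trials = [
    (10, 100, 15, 100),
    (12, 120, 20, 120),
    (8, 90, 16, 95),
    (20, 200, 35, 210),
]
for i, (or_, lo, hi) in enumerate(cumulative_meta_analysis(trials), 1):
    print(f"after trial {i}: OR={or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The published analyses apply the same logic to real trial sequences: once the cumulative confidence interval stably excludes an odds ratio of 1 (as it did for streptokinase by 1973 and for aprotinin by the 12th trial in 1992), any further placebo-controlled trial addresses a question that the accumulated evidence has already answered.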
An overview of the provisions under the EU Clinical Trials Regulation related to the justification of a clinical trial in light of the prior research
| Provisions under the EU Clinical Trials Regulation | Text of the regulatory provisions (emphasis added) | Aspects that give a leeway for interpretation and the potential to reduce redundancy |
|---|---|---|
| Article 6 (1)(b)(i) second indent | An | •The notions of ‘trial relevance’ and ‘the current state of scientific knowledge’ are broad and can be subject to diverging interpretations. •‘Relevance’ of a trial might or might not be interpreted as the |
| Article 25 (1)(a) | The | •The notion of ‘scientific context’ is subject to interpretation, especially as far as the scope is concerned. •Systematic reviews of prior studies and critical analysis of the existing evidence are not explicitly required. |
| Article 2 (23) | The | •The requirement concerns data on the experience not only with the investigational medicinal product but also |
| Annex I(E)(25) | The investigator’s brochure has to be prepared in accordance with the state of scientific knowledge and international guidance. | •The criterion ‘in accordance with the state of scientific knowledge’ is of a general nature. For instance, a trial can be designed in accordance with the principles and rules of medical statistics – yet the research question that it intends to address may lack clinical relevance. |
| Annex I (E)(27) | The information in the investigator’s brochure shall be presented in a concise, simple, objective, balanced and non-promotional form that enables a clinician or investigator to understand it and make an unbiased risk-benefit assessment of the | •The notion of ‘appropriateness’ in conjunction with the ‘trial rationale’ can be interpreted as the requirement to show that the study intends to resolve a persisting clinical uncertainty that can justify the risks and costs involved. •The requirement to base the rationale on ‘all available information and evidence’ presupposes extensive search on the part of investigators. |
| Annex I (G)(46),(47) | The | •The requirement concerns data on the experience only |
| Annex I (D)(17)(c) | The | •Notably, the scope of earlier evidence, which has to be taken into consideration, extends to other trials that can be relevant for the proposed study. •Only references and summaries of findings from previous studies are required to be submitted. Neither systematic reviews, nor critical assessment of earlier studies, nor the explanation of how they informed the design of a proposed trial are explicitly required. •The notion of ‘relevant’ literature and data that form the scientific background can be interpreted expansively. |
| Annex I (D)(17)(d) | The trial protocol shall include a summary of the known and potential risks and benefits including an evaluation of the anticipated benefits and risks to allow assessment in accordance with Article 6 | |
| Annex I (D)(17)(i) | The trial protocol shall include references to literature and data that are | |