
A framework: make it useful to guide and improve practice of clinical trial design in smaller populations.

Kit C B Roes1.   

Abstract

The increased attention to design and analysis of randomised clinical trials in small populations has triggered thinking regarding the most appropriate design methods for a particular clinical research question. Decision schemes and algorithms have been proposed, with varying starting points and foci. Parmar et al. (BMC Medicine 14:183, 2016) proposed a framework designed to assist the clinical trial team in design choices during protocol preparation. Herein, further stimulus is given regarding the extent to which a framework may help change practice for the better, the careful considerations for changing the usual error levels applied and the room for innovation in clinical trial design. Please see related article: http://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-016-0722-3 .

Keywords:  Clinical trial design; Protocol development; Randomised clinical trials; Rare diseases

Year:  2016        PMID: 27884148      PMCID: PMC5123315          DOI: 10.1186/s12916-016-0752-x

Source DB:  PubMed          Journal:  BMC Med        ISSN: 1741-7015            Impact factor:   8.775


Background

The increased attention to design and analysis of randomised clinical trials in small populations has triggered thinking regarding the processes leading to the most appropriate design for a particular clinical research question. In common diseases, this might not seem a pressing problem, given the extensive practice- and theory-based experience in trial designs. In the context of drug development for rare diseases, guidance from the European Medicines Agency [1] states that “in conditions with small and very small populations, less conventional and/or less commonly seen methodological approaches may be acceptable if they help to improve the interpretability of the study results”. However, this advice does not provide practical guidance on how such choices can be made at the clinical trial design stage. Moreover, it also states that “[n]o methods exist that are relevant to small studies that are not also applicable to large studies” [1]. Hence, such practical guidance is actually relevant for all trials. Thus, there is arguably a need for a design framework, with that proposed by Parmar, Sydes and Morris [2] having particularly strong points. First, their framework follows the logical order of the steps one would take in designing a trial, allowing for practical implementation. Second, it is driven by what could be termed a ‘step-down approach’: at each step, if an option proves non-feasible, the next potential change or compromise to be considered is the one with minimal impact on the objective of obtaining high-quality randomised evidence to improve care for the target patient population. It also appropriately addresses the fact that designing a clinical trial is a complex multidisciplinary and multifaceted exercise, not easily captured in a simple decision scheme. Frameworks (or even algorithms) for applying particular designs for randomised clinical trials in small populations have been previously proposed, notably by Gupta et al. [3] and Cornu et al. [4], both of which are based on a literature search up to 2010, a particular choice of decision ‘nodes’ and considerations of the pros and cons of (less familiar) designs. The decision nodes are driven by the type of intervention [3], type of outcome versus recruitment time [3, 4], feasibility of sample size [3], prior knowledge and treatment alternatives [3], and certain desirable design properties [4]. Further, they address minimising time on placebo and/or ensuring that all participants are on active treatment at the end of the trial [4]. In these frameworks, it is actually difficult to ascertain whether a particular choice is (in some sense) the best possible given the circumstances. Indeed, the focus of Parmar et al.’s [2] framework on the ‘best’ randomised evidence to improve care for patients makes the search for this ‘best possible’ far more explicit. We concur that this requires the time and adequate attention of the entire clinical research team. If application of a framework helps to focus the team in order to design the best possible trial, this is, in itself, a positive effect not to be underestimated, particularly for investigator-initiated trials. Herein, the proposed framework and its application are further considered, focusing on the level of ambition for its practical application, a discussion on relaxing type I or II errors, and the methods through which a deeper understanding of novel trial designs may be obtained.

Frameworks as a starting point or to change practice for the better

In several instances, Parmar et al. [2] refer to standard practice as ‘traditional’ with an undoubtedly positive connotation. Rightfully so: many current practices in trial design are thoroughly founded on theory as well as extensive practical application. However, that does not hold for all aspects of the framework, and not all aspects improve the statistical efficiency of clinical trials. It is important to consider how the framework is best positioned for application. If it aims to stay as close as possible to current practice (‘traditional’) to stimulate its use, the proposed ordering is acceptable. However, in clinical trial methodology there are a number of current practices that we (as statisticians) know are not optimal, but which are difficult to change in real life. One could argue that application of a framework is an opportunity to influence less optimal practices, which could lead to some changes within the framework. Two approaches labelled as ‘less common’ are (1) including covariate information and (2) moving from two- to one-sided significance tests; labelling them as such really seems a missed opportunity to influence practice. Regarding the inclusion of covariate information, it is (by now) well accepted that including relevant prognostic covariates in the primary analysis will most likely increase power, and this should be considered for any trial at the initial design stage. However, it remains at the ‘recommendation for improvement’ stage in small population trials [5]. On the same note, stratification of the randomisation is usually considered for every trial. For small populations, there are clear limits to the amount of ‘traditional’ stratification that can be performed. Both stratification and inclusion of covariate information should therefore be considered in concert for any trial at an early stage [6], which can be carried out fairly independently of the other trial features.
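To illustrate the power argument for covariate adjustment, the sketch below uses the standard normal-approximation sample size formula for a two-arm comparison of means. The effect size, outcome standard deviation and covariate correlation are hypothetical values chosen purely for illustration; they are not figures from this commentary or the cited frameworks.

```python
# Illustrative sketch only: hypothetical numbers, standard normal-approximation
# sample size formula for a two-arm trial comparing means.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha_one_sided=0.025, power=0.90):
    """Per-arm sample size to detect a mean difference delta with outcome SD sigma."""
    z = NormalDist().inv_cdf
    return 2 * ((z(1 - alpha_one_sided) + z(power)) * sigma / delta) ** 2

delta = 0.5   # hypothetical clinically relevant difference
sigma = 1.0   # hypothetical outcome standard deviation
r = 0.5       # hypothetical correlation between a prognostic covariate and the outcome

# Adjusting the analysis for the covariate leaves residual variance sigma^2 * (1 - r^2)
unadjusted = ceil(n_per_arm(delta, sigma))
adjusted = ceil(n_per_arm(delta, sigma * (1 - r**2) ** 0.5))

print(unadjusted, adjusted)  # 85 64
```

With these (hypothetical) inputs, adjustment shrinks the required variance, and hence the sample size, by the factor 1 − r² = 0.75, which is exactly the efficiency argument for considering covariate information at the design stage.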
It is beyond the scope of this commentary (and maybe also beyond the author’s competence) to fully cover the discussion on one- versus two-sided testing. What can be noted, however, is that group sequential and adaptive clinical trials can only be appropriately designed and understood with one-sided testing [7]. Given the widespread use of these flexible designs, it could be concluded that this debate has effectively ended, as long as the ‘standard’ one-sided α-level is 2.5%. Other choices (i.e., moving to 5% one-sided) would then fall under a relaxation of the α-level.

Relaxing power, relaxing the α-level

Parmar et al. [2] provide a thoughtful discussion on carefully relaxing the power or the α-level, which is a particularly strong point of their framework. Relaxing the power and α-level strongly relates to the (theoretical) reproducibility of the trial. Society expects research results to be reliable, with even greater pressure when vulnerable patients have contributed to the research. Expressed concerns about the reliability of research in general, and medical research in particular, have raised awareness of the rigour of design that is required [8]. One of the areas indicated for improvement is striking the right balance between clinically relevant outcomes, power and α-level. A careful approach, particularly in small populations, seems a step in the right direction.
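To make the trade-off concrete, the sketch below tabulates how the required per-arm sample size shrinks when the one-sided α-level or the power is relaxed, using the standard normal-approximation formula for a two-arm comparison of means. The effect size and standard deviation are hypothetical inputs for illustration only, not numbers from the commentary.

```python
# Illustrative sketch: hypothetical effect size and SD; per-arm sample size
# for a two-arm comparison of means under the normal approximation.
from math import ceil
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha_one_sided, power):
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha_one_sided) + z(power)) * sigma / delta) ** 2)

delta, sigma = 0.5, 1.0  # hypothetical difference and outcome SD
for alpha, power in [(0.025, 0.90), (0.05, 0.90), (0.025, 0.80), (0.05, 0.80)]:
    print(f"alpha={alpha:<6} power={power}  n per arm = {n_per_arm(delta, sigma, alpha, power)}")
```

Under these assumptions the per-arm size falls from 85 (α = 2.5% one-sided, 90% power) to 50 (α = 5% one-sided, 80% power), which is precisely why such relaxations buy feasibility in small populations but must be weighed against the reproducibility concerns discussed above.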

Deeper understanding of novel trial designs

Novel design and analysis approaches for small populations are expected to be beneficial in clinical research aimed at improving care for patients [9, 10]. A deeper understanding of the properties of such designs needs research as well as lessons learned from actual application. Our current deep level of understanding of the ins and outs of clinical trial design has only been reached because of a continuous cycle of improvement between theory development and real-life application. Hence, there is long-term benefit – beyond the level of an individual trial – of ‘road testing’ novel design features that hold strong promise of improvement based on theory. It will be worthwhile to consider whether application of a framework, as proposed, retains sufficient opportunity to experiment with trial design.
  6 in total

1.  Controversies concerning randomization and additivity in clinical trials.

Authors:  Stephen Senn
Journal:  Stat Med       Date:  2004-12-30       Impact factor: 2.373

Review 2.  A framework for applying unfamiliar trial designs in studies of rare diseases.

Authors:  Samir Gupta; Marie E Faughnan; George A Tomlinson; Ahmed M Bayoumi
Journal:  J Clin Epidemiol       Date:  2011-05-06       Impact factor: 6.437

3.  Evaluation of experiments with adaptive interim analyses.

Authors:  P Bauer; K Köhne
Journal:  Biometrics       Date:  1994-12       Impact factor: 2.571

4.  Increasing value and reducing waste in research design, conduct, and analysis.

Authors:  John P A Ioannidis; Sander Greenland; Mark A Hlatky; Muin J Khoury; Malcolm R Macleod; David Moher; Kenneth F Schulz; Robert Tibshirani
Journal:  Lancet       Date:  2014-01-08       Impact factor: 79.321

5.  Directions for new developments on statistical design and analysis of small population group trials.

Authors:  Ralf-Dieter Hilgers; Kit Roes; Nigel Stallard
Journal:  Orphanet J Rare Dis       Date:  2016-06-14       Impact factor: 4.123

Review 6.  Experimental designs for small randomised clinical trials: an algorithm for choice.

Authors:  Catherine Cornu; Behrouz Kassai; Roland Fisch; Catherine Chiron; Corinne Alberti; Renzo Guerrini; Anna Rosati; Gerard Pons; Harm Tiddens; Sylvie Chabaud; Daan Caudri; Clément Ballot; Polina Kurbatova; Anne-Charlotte Castellan; Agathe Bajard; Patrice Nony; Leon Aarons; Yves Bertrand; Frank Bretz; Frank Dufour; Cornelia Dunger-Baldauf; Jean-Marc Dupont; Vincent Jullien; Kayode Ogungbenro; David Pérol; Rima Nabbout
Journal:  Orphanet J Rare Dis       Date:  2013-03-25       Impact factor: 4.123

