BACKGROUND: The ultimate goal of a phase III randomized clinical trial designed to demonstrate the superiority of a new versus standard therapy is to provide sufficiently compelling evidence to affect clinical practice. To balance patient interests against the need for acquiring evidence, it is desirable to stop a study for inefficacy as soon as convincing evidence becomes available that the new therapy is not beneficial.
PURPOSE: To discuss potential deficiencies in some commonly used inefficacy monitoring rules and to propose a comprehensive inefficacy monitoring procedure.
METHODS: The proposed approach is developed from clinical, logistical, and statistical considerations, and is compared with commonly used inefficacy rules in a simulation study.
RESULTS: Some commonly used inefficacy rules are suboptimal with respect to the strength of evidence required for stopping over the course of the trial: too conservative in the middle and/or too aggressive at the end. Our approach allows timely stopping (a) if the new therapy is harmful, and (b) if the interim data provide convincing evidence that the new therapy has no tangible benefit. Relative to common inefficacy rules, our procedure is shown to result in potentially fewer treated patients and shorter study duration under the null hypothesis, with only a minor loss of power under the alternative hypothesis.
LIMITATIONS: The proposed procedure is applicable to superiority designs with well-defined clinical objectives.
CONCLUSIONS: The proposed inefficacy approach is attractive from statistical, clinical, and logistical standpoints. By decreasing average stopping times relative to commonly used boundaries, our rule lessens patient exposure to inactive treatments, improves resource utilization, and accelerates dissemination of important clinical information. At the same time, it provides a clear benchmark for compelling evidence that the new therapy is not beneficial.
Clinical Trials 2010; 7: 197-208. http://ctj.sagepub.com.
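As an illustration of the kind of simulation comparison described in METHODS and RESULTS, the sketch below contrasts two generic z-scale futility (inefficacy) boundaries in a two-arm trial: a conservative rule that is hard to cross until late, and an aggressive rule that stops once benefit looks implausible. The boundaries, effect size, sample size, and normal-outcome model are hypothetical assumptions for illustration only; this is not the rule proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(delta, futility_z, n_per_arm=200, looks=(0.25, 0.50, 0.75, 1.00)):
    """One two-arm trial with interim looks at the given information fractions.
    Stops for inefficacy if the interim z-statistic falls below the boundary
    for that look; otherwise tests at the end.
    Returns (rejected_null, total_patients_used)."""
    x = rng.normal(delta, 1.0, n_per_arm)  # new therapy (unit-variance outcomes)
    y = rng.normal(0.0, 1.0, n_per_arm)    # standard therapy
    for frac, bound in zip(looks, futility_z):
        n = int(frac * n_per_arm)
        z = (x[:n].mean() - y[:n].mean()) / np.sqrt(2.0 / n)
        if frac < 1.0 and z < bound:
            return False, 2 * n            # stopped early for inefficacy
    return z > 1.96, 2 * n_per_arm         # final one-sided test, alpha ~ 0.025

def operating_characteristics(delta, futility_z, reps=20000):
    """Monte Carlo estimate of rejection probability and expected sample size."""
    out = [simulate_trial(delta, futility_z) for _ in range(reps)]
    return np.mean([r for r, _ in out]), np.mean([n for _, n in out])

# Hypothetical z-scale futility boundaries (the final look never stops for futility).
conservative = (-2.5, -2.0, -1.5, -np.inf)
aggressive = (-0.5, 0.0, 0.3, -np.inf)

for name, bnd in (("conservative", conservative), ("aggressive", aggressive)):
    power, _ = operating_characteristics(0.25, bnd)  # under the alternative
    _, exp_n = operating_characteristics(0.0, bnd)   # under the null
    print(f"{name:12s} power={power:.3f}  E[N | H0]={exp_n:.0f}")
```

Under these assumptions, the aggressive boundary typically shows a markedly smaller expected sample size under the null hypothesis at only a small cost in power under the alternative, which is the trade-off the abstract describes.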