
GLASS(Y) Half-Full: Moving Towards Greater Pragmatism in Outcome Ascertainment for Clinical Trials.

Sanket S Dhruva.

Abstract


Keywords:  Editorials; clinical trials; myocardial infarction; pragmatic clinical trial


Year:  2021        PMID: 33535775      PMCID: PMC7887105          DOI: 10.1161/CIRCOUTCOMES.120.007690

Source DB:  PubMed          Journal:  Circ Cardiovasc Qual Outcomes        ISSN: 1941-7713


See Article by Leonardi et al.

Randomized clinical trials (RCTs) are the widely accepted gold standard for the rigorous evaluation of a clinical intervention. To date, most RCTs of drugs and medical devices have been traditional, explanatory trials: intended to give the intervention its best chance to demonstrate efficacy. However, there are multiple well-understood limitations of these traditional RCTs: limited generalizability because of highly selected populations and trial environments, challenges in patient enrollment, high costs, and administrative complexity. To address these concerns, there has been increasing interest in conducting clinical trials with more pragmatic elements. Pragmatic trials may be defined as those conducted in real-world, usual-care settings, with the goal of providing evidence to inform whether an intervention should be delivered in clinical practice.[1] Of course, no trial is either completely traditional or completely pragmatic; a continuum exists between these 2 archetypes.[2] The emphasis on augmenting the number of pragmatic elements in RCTs of drugs has been propelled by the 21st Century Cures Act of 2016, which tasked the US Food and Drug Administration with creating a framework for the use of real-world evidence to support regulatory decision-making. This Framework, published in December 2018, specifically describes the potential to use pragmatic or hybrid (ie, a combination of pragmatic and traditional) trial designs to generate real-world evidence. The implications of these shifting paradigms in clinical trial design and implementation need to be better understood. Although greater use of pragmatic elements can address many limitations of traditional RCTs, could this shift compromise the integrity of some trial results? Cardiovascular RCTs have become more pragmatic in both primary outcomes and follow-up in recent years,[3] but can clinical investigators be relied upon for accurate outcome reporting?
Like other elements of pragmatic trials, the certainty and consistency of outcome ascertainment by clinical investigators lies on a continuum: mortality is unmistakable, and objective healthcare utilization (such as an emergency room visit or hospitalization) is almost always consistently ascertained. However, other outcomes, such as myocardial infarction, may be consistently ascertained less often, and those with more subjective elements (such as unstable angina) even less commonly. Questions about the accuracy of investigator reporting are compounded by the lack of blinding in routine care, which is considered an attribute of pragmatism in RCTs. These outcomes may be more likely to benefit from formal clinical event committees (CECs), an explanatory component of RCTs, for harmonized event ascertainment. However, CECs are resource-intensive and costly. Therefore, the extent to which clinical investigator-reported events can be relied upon for ascertainment of clinical outcomes requires study. The study by Leonardi et al[4] in this issue of Circulation: Cardiovascular Quality and Outcomes provides helpful information about the accuracy of end point ascertainment through reliance on investigator-reported outcomes. Within the Limus Eluted From A Durable Versus Erodable Stent Coating (GLOBAL LEADERS) trial, a multicenter, international RCT comparing ticagrelor plus aspirin for 1 month followed by ticagrelor alone for 23 months with standard dual antiplatelet therapy for 12 months followed by 12 months of aspirin alone, the researchers conducted the GLASSY (GLOBAL LEADERS Adjudication Sub-Study) at the top 20 enrolling sites. In GLASSY, a formal CEC adjudicated investigator-reported end points as well as various end point triggers from electronic case report forms. Leonardi et al[4] determined that CEC adjudication was feasible for examination of 4 clinical events: myocardial infarction, bleeding, stroke, and stent thrombosis.
More than 98% of the triggers (either investigator-reported or through the electronic case report form) for these events could be adjudicated. However, when using CEC adjudication as the gold standard, investigator-reported events had limited global diagnostic accuracy, ranging from 59% (95% CI, 52%–66%) for stent thrombosis to 77% (95% CI, 75%–79%) for bleeding. The inaccuracy of investigator-reported events when compared with the CEC stemmed primarily from low negative predictive values. A substantial minority, ≈18%, of the total outcome events in GLASSY were unreported by investigators and identified only through electronic case report form triggers and CEC adjudication. Positive predictive values were higher, ranging from 75% for stent thrombosis to 91% for bleeding. Overall, these results show both (1) large numbers of outcome events unreported by investigators and (2) smaller, but important, numbers of outcome events reported by investigators that did not meet CEC criteria. The generally weak concordance with CEC findings suggests that CEC adjudication has important benefits for improving the accuracy of RCT results over investigator-reported events for clinical end points. Specifically, the imprecision of effect estimates introduced through investigator reporting could obscure the true effect of the intervention. Trials may inaccurately meet noninferiority criteria or inaccurately fail to meet superiority criteria.
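The diagnostic-accuracy metrics above follow directly from a 2×2 cross-tabulation of investigator reports against CEC adjudication. The sketch below shows the arithmetic with invented counts (not GLASSY's actual data), chosen so that, as in GLASSY, the positive predictive value is high while a low negative predictive value pulls overall accuracy down:

```python
# Hypothetical 2x2 table of adjudicated triggers, with CEC adjudication
# treated as the gold standard. These counts are illustrative only.
tp = 90  # investigator-reported event, confirmed by CEC
fp = 10  # investigator-reported event, rejected by CEC
fn = 20  # event missed by investigators, found via case report form triggers + CEC
tn = 30  # no event per both investigators and CEC

ppv = tp / (tp + fp)                        # share of reported events CEC confirms
npv = tn / (tn + fn)                        # share of "no event" calls CEC confirms
accuracy = (tp + tn) / (tp + fp + fn + tn)  # global diagnostic accuracy

print(f"PPV: {ppv:.2f}, NPV: {npv:.2f}, accuracy: {accuracy:.2f}")
# → PPV: 0.90, NPV: 0.60, accuracy: 0.80
```

With these counts, 9 of 10 investigator-reported events are confirmed (high PPV), yet 20 of 50 "no event" calls turn out to be unreported events (low NPV), which is the pattern GLASSY observed.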
A Cochrane meta-analysis of RCTs studying the benefit of adjudication committees found that treatment effect estimates assessed onsite versus by adjudication committees were nearly identical, but CECs had the greatest utility when onsite investigators were unblinded and there was a high risk of misclassification.[5] Although CECs play a crucial role in event adjudication, the implementation of several strategies could bring greater confidence in investigator-reported event ascertainment and, therefore, support more successful pragmatic RCT designs. First, local investigators should receive additional, ongoing training and support in outcome reporting. In GLASSY, the event definitions were available in the protocol and discussed at investigator meetings and site initiation visits, and remote assistance was available through a dedicated hotline. Although these are excellent informational mechanisms, investigators across heterogeneous settings (130 sites across 18 countries in GLOBAL LEADERS) will understandably have different standards for ascertaining clinical events. A handful of meetings is insufficient to make these standards as uniform as CEC definitions. One solution would be the implementation of midstream evaluation by CECs, such as after adjudication of a prespecified number of investigator-reported outcomes. This real-time, iterative feedback could help improve the accuracy of investigator-reported outcomes over time, which would be particularly helpful over the course of a trial with 2-year follow-up like GLOBAL LEADERS and for future RCTs that use the common outcomes of myocardial infarction, stroke, and bleeding. Such feedback, however ideal, is unlikely to be successful or well received without the second needed strategy: additional investment in engaging clinicians and health system leadership in pragmatic trials.
Overwhelmingly busy clinical schedules mean that, with the exception of some clinicians within academic centers, few can be successfully engaged in RCTs.[1] Over the past year, the massive disruptions and stresses from the coronavirus disease 2019 (COVID-19) pandemic have compounded these challenges. Therefore, clinicians need incentives for the additional time and effort required for their participation in conducting high-quality RCTs. This engagement includes clinical event ascertainment and reporting based on common definitions (as in GLOBAL LEADERS), as well as other key steps in pragmatic RCTs, such as identification of potential study patients and delivery of the study intervention. Without support to better align research with clinical care, the notoriously nonstop pressures of healthcare delivery will continue to preclude meaningful clinician engagement. Therefore, the third and broadest needed strategy is additional investment in the Learning Healthcare System described by the National Academy of Medicine—one focused on rigorous testing (through RCTs, when possible) of clinical interventions during the course of routine care. These systems have been developed in specific locations, such as Sweden’s national registries and the United Kingdom’s clinical trial units, to provide the backbone for RCTs with pragmatic elements. Such an evidence generation system has been proposed in the United States by leaders of federal health and healthcare agencies.[6] Reusable clinical trial infrastructures mean that learning by local clinical investigators would carry through to future studies, thereby strengthening the ability to pragmatically ascertain clinical outcomes over time. Fortunately, novel clinical trial efforts that seek to advance this promise of pragmatism will provide guidance about whether and how to depend upon investigators in routine clinical practice for event reporting.
The ADAPTABLE trial (Aspirin Dosing: A Patient-centric Trial Assessing Benefits and Long-Term Effectiveness) relies on clinician-reported data (through a common data model, health plan, and claims data) and patient-reported outcomes.[7] For studies in cancer, the minimal Common Oncology Data Elements initiative seeks to use electronic health record data for outcome ascertainment in a manner that is concordant with an electronic case report form.[8] Similar efforts in cardiovascular medicine could significantly enhance confidence in specific investigator-reported outcomes. Linkage of RCTs to administrative claims has also shown promise in establishing parameters under which routinely collected data can be used for accurate ascertainment of some clinical outcomes.[9] Finally, future efforts must place patients at the center. Patient engagement through digital health technologies holds promise for enabling people to contribute their electronic health record, patient-reported (eg, symptoms), and patient-generated (eg, from wearable networked devices) data.[10] Geofencing data from smartphones can also provide information about hospitalizations and length of stay.[11] Triangulation of these data sources with investigator-reported events could strengthen confidence in event ascertainment.

In summary, the GLASSY investigators demonstrated that CEC adjudication significantly enhanced the quality and accuracy of outcome ascertainment compared with investigator-reported outcomes in a large, international RCT. The glass remains half-full for supporting greater pragmatism in outcome ascertainment over time through several ongoing efforts and strategies, which could herald important advances for the clinical trial enterprise—allowing more efficient generation of robust RCT evidence to guide clinical practice.

Disclosures

Dr Dhruva receives research funding from the National Heart, Lung, and Blood Institute (NHLBI, K12HL138046) of the National Institutes of Health (NIH), Food and Drug Administration, National Evaluation System for health Technology Coordinating Center (NESTcc), Greenwall Foundation, and Arnold Ventures.
  10 in total

1.  The PRECIS-2 tool: designing trials that are fit for purpose.

Authors:  Kirsty Loudon; Shaun Treweek; Frank Sullivan; Peter Donnan; Kevin E Thorpe; Merrick Zwarenstein
Journal:  BMJ       Date:  2015-05-08

2.  Transforming Evidence Generation to Support Health and Health Care Decisions.

Authors:  Robert M Califf; Melissa A Robb; Andrew B Bindman; Josephine P Briggs; Francis S Collins; Patrick H Conway; Trinka S Coster; Francesca E Cunningham; Nancy De Lew; Karen B DeSalvo; Christine Dymek; Victor J Dzau; Rachael L Fleurence; Richard G Frank; J Michael Gaziano; Petra Kaufmann; Michael Lauer; Peter W Marks; J Michael McGinnis; Chesley Richards; Joe V Selby; David J Shulkin; Jeffrey Shuren; Andrew M Slavitt; Scott R Smith; B Vindell Washington; P Jon White; Janet Woodcock; Jonathan Woodson; Rachel E Sherman
Journal:  N Engl J Med       Date:  2016-12-15       Impact factor: 91.245

3.  Comparison of Investigator-Reported and Clinical Event Committee-Adjudicated Outcome Events in GLASSY.

Authors:  Sergio Leonardi; Mattia Branca; Anna Franzone; Eugene McFadden; Raffaele Piccolo; Peter Jüni; Pascal Vranckx; Philippe Gabriel Steg; Patrick W Serruys; Edouard Benit; Christoph Liebetrau; Luc Janssens; Maurizio Ferrario; Aleksander Zurakowski; Roberto Diletti; Marcello Dominici; Kurt Huber; Ton Slagboom; Pawel Buszman; Leonardo Bolognese; Carlo Tumscitz; Krzysztof Bryniarski; Adel Aminian; Mathias Vrolix; Ivo Petrov; Scot Garg; Cristoph Naber; Janusz Prokopczuk; Christian Hamm; Dik Heg; Stephan Windecker; Marco Valgimigli
Journal:  Circ Cardiovasc Qual Outcomes       Date:  2021-02-04

4.  Comparison of central adjudication of outcomes and onsite outcome assessment on treatment effect estimates.

Authors:  Lee Aymar Ndounga Diakou; Ludovic Trinquart; Asbjørn Hróbjartsson; Caroline Barnes; Amelie Yavchitz; Philippe Ravaud; Isabelle Boutron
Journal:  Cochrane Database Syst Rev       Date:  2016-03-10

5.  The electronic health record as a clinical trials tool: Opportunities and challenges.

Authors:  Monica M Bertagnolli; Brian Anderson; Andre Quina; Steven Piantadosi
Journal:  Clin Trials       Date:  2020-04-08       Impact factor: 2.486

6.  Use of Mobile Health Applications in Low-Income Populations: A Prospective Study of Facilitators and Barriers.

Authors:  Patrick Liu; Katia Astudillo; Damaris Velez; Lauren Kelley; Darcey Cobbs-Lomax; Erica S Spatz
Journal:  Circ Cardiovasc Qual Outcomes       Date:  2020-09-04

7.  Trends in the Explanatory or Pragmatic Nature of Cardiovascular Clinical Trials Over 2 Decades.

Authors:  Nariman Sepehrvand; Wendimagegn Alemayehu; Debraj Das; Arjun K Gupta; Pishoy Gouda; Anukul Ghimire; Amy X Du; Sanaz Hatami; Hazal E Babadagli; Sanam Verma; Zakariya Kashour; Justin A Ezekowitz
Journal:  JAMA Cardiol       Date:  2019-11-01       Impact factor: 14.676

8.  Use of Administrative Claims to Assess Outcomes and Treatment Effect in Randomized Clinical Trials for Transcatheter Aortic Valve Replacement: Findings From the EXTEND Study.

Authors:  Jordan B Strom; Kamil F Faridi; Neel M Butala; Yuansong Zhao; Hector Tamez; Linda R Valsdottir; J Matthew Brennan; Changyu Shen; Jeffrey J Popma; Dhruv S Kazi; Robert W Yeh
Journal:  Circulation       Date:  2020-05-21       Impact factor: 29.690

9.  Rationale and Design of the Aspirin Dosing-A Patient-Centric Trial Assessing Benefits and Long-term Effectiveness (ADAPTABLE) Trial.

Authors:  Guillaume Marquis-Gravel; Matthew T Roe; Holly R Robertson; Robert A Harrington; Michael J Pencina; Lisa G Berdan; Bradley G Hammill; Madelaine Faulkner; Daniel Muñoz; Gregg C Fonarow; Brahmajee K Nallamothu; Dan J Fintel; Daniel E Ford; Li Zhou; Sarah E Daugherty; Elizabeth Nauman; Jennifer Kraschnewski; Faraz S Ahmad; Catherine P Benziger; Kevin Haynes; J Greg Merritt; Thomas Metkus; Sunil Kripalani; Kamal Gupta; Raj C Shah; James C McClay; Richard N Re; Carol Geary; Brent C Lampert; Steven M Bradley; Sandeep K Jain; Hani Seifein; Jeff Whittle; Véronique L Roger; Mark B Effron; Giselle Alvarado; Ythan H Goldberg; Jeffrey L VanWormer; Saket Girotra; Peter Farrehi; Kathleen M McTigue; Russell Rothman; Adrian F Hernandez; W Schuyler Jones
Journal:  JAMA Cardiol       Date:  2020-05-01       Impact factor: 14.676

10.  Pragmatic Trials.

Authors:  Ian Ford; John Norrie
Journal:  N Engl J Med       Date:  2016-08-04       Impact factor: 91.245

