
Design and analysis choices for safety surveillance evaluations need to be tuned to the specifics of the hypothesized drug-outcome association.

Susan Gruber, Aloka Chakravarty, Susan R Heckbert, Mark Levenson, David Martin, Jennifer C Nelson, Bruce M Psaty, Simone Pinheiro, Christian G Reich, Sengwee Toh, Alexander M Walker.

Abstract

BACKGROUND: We reviewed the results of the Observational Medical Outcomes Partnership (OMOP) 2010 Experiment in hopes of finding examples where apparently well-designed drug studies repeatedly produce anomalous findings. OMOP had applied thousands of designs and design parameters to 53 drug-outcome pairs across 10 electronic data resources. Our intent was to use this repository to elucidate some sources of error in observational studies.
METHOD: From the 2010 OMOP Experiment, we sought drug-outcome-method combinations (DOMCs) that met consensus design criteria, yet repeatedly produced results contrary to expectation. We set aside DOMCs for which we could not agree on the suitability of the designs, then selected for in-depth scrutiny one drug-outcome pair analyzed by a seemingly plausible methodological approach, whose results consistently disagreed with the a priori expectation.
RESULTS: The OMOP "all-by-all" assessment of possible DOMCs yielded many combinations that would not be chosen by researchers as actual study options. Among those that passed a first level of scrutiny, two of seven drug-outcome pairs for which there were plausible research designs had anomalous results. The use of benzodiazepines was unexpectedly associated with acute renal failure and upper gastrointestinal bleeding. We chose the latter as an example for in-depth study. The factitious appearance of a bleeding risk may have been partly driven by an excess of procedures on the first day of treatment. A risk window definition that excluded the first day largely removed the spurious association (illustrated in the sketch following the abstract).
CONCLUSION: One cause of reproducible "error" may be repeated failure to tie design choices closely enough to the research question at hand.
Copyright © 2016 John Wiley & Sons, Ltd.
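
The first-day exclusion described in RESULTS lends itself to a small illustration. The Python sketch below is not the authors' code: the table layout, column names, and dates are hypothetical, and it only shows how starting the risk window on day 2 of treatment, rather than day 1, removes events coded on the start date (such as same-day procedures) from the exposed count.

import pandas as pd

# Hypothetical treatment episodes and outcome events (not study data).
exposures = pd.DataFrame({
    "person_id": [1, 2, 3],
    "rx_start": pd.to_datetime(["2010-01-01", "2010-02-10", "2010-03-05"]),
    "rx_end": pd.to_datetime(["2010-01-30", "2010-03-11", "2010-04-04"]),
})
outcomes = pd.DataFrame({
    "person_id": [1, 2, 3],
    "outcome_date": pd.to_datetime(["2010-01-01", "2010-02-25", "2010-04-20"]),
})

def exposed_events(exposures, outcomes, skip_first_day=False):
    # Join outcomes to treatment episodes, then keep events inside the risk window.
    merged = outcomes.merge(exposures, on="person_id")
    # skip_first_day=True starts the window on day 2 of treatment,
    # mirroring the exclusion described in the abstract.
    start = merged["rx_start"] + pd.Timedelta(days=1 if skip_first_day else 0)
    in_window = (merged["outcome_date"] >= start) & (merged["outcome_date"] <= merged["rx_end"])
    return merged.loc[in_window, ["person_id", "outcome_date"]]

print(exposed_events(exposures, outcomes))                       # person 1's day-1 event counts
print(exposed_events(exposures, outcomes, skip_first_day=True))  # person 1's day-1 event is excluded

In a real surveillance system the same idea would be expressed through the design's risk window parameters rather than ad hoc filtering; the point is only that a one-day shift in the window start can make or break an apparent association when events cluster on the first day of treatment.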

Keywords:  electronic health records; insurance claim data; medical product safety; monitoring; pharmacoepidemiology

Year:  2016        PMID: 27418432     DOI: 10.1002/pds.4065

Source DB:  PubMed          Journal:  Pharmacoepidemiol Drug Saf        ISSN: 1053-8569            Impact factor:   2.890


  5 in total

1.  Benchmarking Observational Analyses Against Randomized Trials: a Review of Studies Assessing Propensity Score Methods.

Authors:  Shaun P Forbes; Issa J Dahabreh
Journal:  J Gen Intern Med       Date:  2020-03-19       Impact factor: 5.128

2.  Hypothesis-free screening of large administrative databases for unsuspected drug-outcome associations.

Authors:  Jesper Hallas; Shirley V Wang; Joshua J Gagne; Sebastian Schneeweiss; Nicole Pratt; Anton Pottegård
Journal:  Eur J Epidemiol       Date:  2018-03-31       Impact factor: 8.082

3.  Use of Health Care Databases to Support Supplemental Indications of Approved Medications.

Authors:  Michael Fralick; Aaron S Kesselheim; Jerry Avorn; Sebastian Schneeweiss
Journal:  JAMA Intern Med       Date:  2018-01-01       Impact factor: 21.873

4.  Signal Detection for Recently Approved Products: Adapting and Evaluating Self-Controlled Case Series Method Using a US Claims and UK Electronic Medical Records Database.

Authors:  Xiaofeng Zhou; Ian J Douglas; Rongjun Shen; Andrew Bate
Journal:  Drug Saf       Date:  2018-05       Impact factor: 5.606

5.  Prevalence of Avoidable and Bias-Inflicting Methodological Pitfalls in Real-World Studies of Medication Safety and Effectiveness.

Authors:  Katsiaryna Bykov; Elisabetta Patorno; Elvira D'Andrea; Mengdong He; Hemin Lee; Jennifer S Graff; Jessica M Franklin
Journal:  Clin Pharmacol Ther       Date:  2021-08-04       Impact factor: 6.875

