Literature DB >> 34865623

Comparative effectiveness and safety of pharmaceuticals assessed in observational studies compared with randomized controlled trials.

Yoon Duk Hong1, Jeroen P Jansen2,3, John Guerino4, Marc L Berger5, William Crown6, Wim G Goettsch7,8, C Daniel Mullins9, Richard J Willke10, Lucinda S Orsini11.   

Abstract

BACKGROUND: There have been ongoing efforts to understand when and how data from observational studies can be applied to clinical and regulatory decision making. The objective of this review was to assess the comparability of relative treatment effects of pharmaceuticals from observational studies and randomized controlled trials (RCTs).
METHODS: We searched PubMed and Embase for systematic literature reviews published between January 1, 1990, and January 31, 2020, that reported relative treatment effects of pharmaceuticals from both observational studies and RCTs. We extracted pooled relative effect estimates from observational studies and RCTs for each outcome, intervention-comparator, or indication assessed in the reviews. We calculated the ratio of the relative effect estimate from observational studies over that from RCTs, along with the corresponding 95% confidence interval (CI) for each pair of pooled RCT and observational study estimates, and we evaluated the consistency in relative treatment effects.
RESULTS: Thirty systematic reviews across 7 therapeutic areas were identified from the literature. We analyzed 74 pairs of pooled relative effect estimates from RCTs and observational studies from 29 reviews. There was no statistically significant difference (based on the 95% CI) in relative effect estimates between RCTs and observational studies in 79.7% of pairs. There was an extreme difference (ratio < 0.7 or > 1.43) in 43.2% of pairs, and, in 17.6% of pairs, there was a significant difference and the estimates pointed in opposite directions.
CONCLUSIONS: Overall, our review shows that while there is no significant difference in the relative risk ratios between the majority of RCTs and observational studies compared, there is significant variation in about 20% of comparisons. The source of this variation should be the subject of further inquiry to elucidate how much of the variation is due to differences in patient populations versus biased estimates arising from issues with study design or analytical/statistical methods.
© 2021. The Author(s).

Entities:  

Keywords:  Observational data; Pharmaceuticals; Real-world evidence

Mesh:

Substances:

Year:  2021        PMID: 34865623      PMCID: PMC8647453          DOI: 10.1186/s12916-021-02176-1

Source DB:  PubMed          Journal:  BMC Med        ISSN: 1741-7015            Impact factor:   8.775


Background

Health care decision makers, particularly regulators but also health technology assessment agencies, have depended upon evidence from randomized clinical trials (RCTs) to assess drug effectiveness and to make comparisons among treatment options. Widespread adoption of the RCT was the hallmark of progress in clinical research in the twentieth century and accelerated the development and approval of new therapeutics; confidence in RCTs derived from their experimental nature, designs to minimize bias, rigorous data quality, and analytic approaches that supported causal inference. In the last 30 years, we have witnessed an explosion of observational real-world data (RWD) and evidence (RWE) derived from RWD that has supplemented our understanding of the benefits and risks of treatments in broader populations of patients. RWE has been largely leveraged by regulators to assess the safety of marketed products and for new drug approvals when RCTs are infeasible, such as in rare diseases, oncology, or for long-term adverse effects. RCTs often do not have sufficient sample size to detect rare adverse events or long enough follow-up to detect long-term adverse effects. In such cases, regulatory decisions are often supplemented by RWE. However, leveraging of RWE has been much more slowly embraced in comparison to the adoption of RCTs for a variety of reasons. Imputation of causality is less certain in the absence of randomization and RWD can be much sparser and often requires extensive curation before it can be analyzed. Thus, skepticism about the robustness of observational RWD studies has made decision makers cautious in relying solely upon it to render judgments about the availability and appropriate use of new therapeutics, particularly by regulatory bodies. Moreover, observational studies examining the effectiveness of treatments in similar populations have not always provided results consistent with RCTs. 
Despite many studies finding similar treatment effect estimates from RCTs and RWD analyses [1-3], other analyses have documented wide variation in results from RWD analyses within the same therapeutic areas [4], including analyses using propensity score-based methods [5]. Nonetheless, public interest has grown in the routine leveraging of RWD to promote the creation of a learning healthcare system, and regulatory bodies and other decision makers are exploring ways to expand their use of RWE. This is partly due to increasing acknowledgement of the value of RWE, such as its ability to better reflect the actual environments in which interventions are used. One promising approach to understanding the sources of variability between RCT and observational study results is to compare estimates obtained from RWD analyses that attempt to emulate the eligibility criteria, endpoints, and other features of trials as closely as possible. A small number of RWD analyses have generated findings similar to previous RCTs [6, 7], and the findings of other RWD analyses have been consistent with subsequent RCTs [8]. In a small number of cases, RCTs and RWD studies have been published simultaneously [9], which has the advantage that the RCT estimate is unknown while the RWD study is conducted. There have also been disagreements between observational RWD analyses and RCTs that stemmed from avoidable errors in the RWD analysis design [7, 10]. This has led to a focus on the importance of research design in observational RWD analyses attempting to draw causal inferences regarding treatment effects [11-13]. Emulation studies can improve understanding of when observational studies may reliably generate results consistent with RCTs; however, not all RCTs can be feasibly emulated using RWD due to limitations in observational datasets.
Existing sources of observational data, such as health insurance claims and electronic health records (EHRs), may not routinely capture the intervention, indication, inclusion and exclusion criteria, and/or endpoints used in RCTs [14]. The objective of this paper is to provide further evidence on the comparability of RCTs and observational studies when the latter used a range of study designs and were not designed to emulate RCTs. We aim to quantify the extent of the difference in treatment effect estimates between RCTs and observational studies. We go beyond previous comparisons of RCTs and observational studies by focusing purely on pharmaceuticals and providing a systematic landscape review of the (in)consistency between RCT and observational study treatment effect estimates. The reasons for the variation in relative treatment effects are not assessed in this review but should be the subject of further study.

Methods

Eligibility criteria

Inclusion criteria

Study design: Published systematic literature reviews designed to compare relative treatment effects from observational studies with the corresponding effects from RCTs; or published systematic literature reviews that reported subgroup analyses stratified by RCT and observational study design. Observational studies included in these reviews had to be retrospective or prospective cohort studies, or case-control studies.
Population: Human subjects
Intervention(s) and comparator(s): Any active or placebo-controlled pharmaceutical or biopharmaceutical intervention
Outcome(s): Efficacy/effectiveness or safety outcomes; pooled relative treatment effect estimates for both observational studies and RCTs

Exclusion criteria

Systematic reviews that compared absolute outcomes, such as event rates, between non-comparative observational studies and RCTs
Non-pharmaceutical-based studies, e.g., surgical procedures, traditional medicine, vitamin/herbal supplements, etc.
Non-English language publications
Abstracts or conference proceedings

Search strategy

We searched PubMed and Embase to identify relevant systematic literature reviews published between January 1, 1990, and January 31, 2020. Anglemyer et al.'s search strategy [1] was used as a template to develop our search strategy, which included a wide range of MeSH terms and relevant keywords. We updated Anglemyer et al.'s systematic review hedge and used the more recent CADTH systematic review/meta-analysis hedge, created in 2016, in both PubMed and Embase [15]. We restricted our search to pharmaceuticals only. PubMed and Embase were searched for the following concepts: pharmaceuticals, study methodology, and comparisons (filters: Humans and English language). The PubMed search strategy, which was adapted for use in Embase, can be found in Additional File 1.

Study selection

After removing duplicate references, three authors (JG, YH and LO) screened the titles and abstracts to identify relevant reviews. Once complete, LO verified the screening for accuracy. Following the title and abstract screen, full text articles were obtained for all potentially relevant reviews. Full text articles were then assessed to determine whether they met the selection criteria for final inclusion in the review.

Data extraction

A pilot extraction was first done by two authors (JG and YH) on a sample of three articles using a standardized extraction table. This was done to test the standardized extraction table and to ensure consistency between the authors performing the data extraction. JG and YH then independently extracted information from each review using the standardized extraction table. A third author (LO) verified the extraction for accuracy and identified any discrepancies. These discrepancies were discussed until resolved. We focused on primary outcomes reported in the reviews and extracted information summarizing the scope of each of the identified systematic reviews. Extracted information included the following: review objective, population, disease/therapeutic area, interventions, outcome(s), number of included RCTs and observational studies, pooled relative treatment effect estimates for RCTs and observational studies along with the 95% confidence intervals (95% CI), and measures of heterogeneity.

Analysis

Based on the extracted information, we calculated the ratio of the relative treatment effect estimate from observational studies over the relative treatment effect estimate from RCTs (e.g., RRobs/RRrct), along with the corresponding 95% CI obtained via a Monte Carlo simulation, for each pair of pooled RCT and observational study estimates. Outcomes for which the relative treatment effect was not expressed as a relative risk (RR), odds ratio (OR), or hazard ratio (HR) were excluded from our analysis. We expressed differences in pooled effect estimates with the following measures: ratios that were < 1, > 1, or = 1; ratios indicating an “extreme difference” (< 0.70 or > 1.43) [16]; and absence of an extreme difference. We evaluated (in)consistency between pooled RCT and observational study estimates with the following measures: opposite direction of effect; RCT effect estimate outside the 95% CI of the observational study estimate, and vice versa; statistically significant difference between RCT and observational study estimates; and statistically significant difference along with opposite direction of effect. Statistical significance was determined by examining the 95% CI of the ratio of the relative treatment effect estimates from observational studies and RCTs derived from the Monte Carlo simulation. We examined differences in relative effect measures from observational studies and RCTs by outcome type and therapeutic area. To test the robustness of our findings, we conducted two sensitivity analyses. As some reviews assessed more than one endpoint and thus contributed more than one pair of pooled relative treatment effects from RCTs and observational studies to our analysis, we repeated the analysis with one endpoint per review, i.e., a single pair of pooled relative treatment effects from each review, selecting the most frequently used endpoints whenever possible.
Additionally, as some studies were included in more than one review, we repeated the analysis ensuring that there was no overlap of data between the included reviews, i.e., that each study contributed to only one review in our analysis. Details on the sensitivity analyses are included in Additional File 2. All analyses were conducted using RStudio, version 1.3.1073 (©2009-2020 RStudio, PBC).
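The Monte Carlo step described above can be sketched in a few lines. The following is an illustrative Python version, not the authors' code (their analyses were run in R/RStudio), and the function names and the choice of 100,000 draws are assumptions: standard errors on the log scale are recovered from the reported 95% CIs, log effect estimates are drawn from normal distributions, and the 95% CI of the ratio is taken from percentiles of the simulated log ratios.

```python
import numpy as np

def se_from_ci(lo, hi):
    """Recover the standard error on the log scale from a reported 95% CI."""
    return (np.log(hi) - np.log(lo)) / (2 * 1.96)

def ratio_ci(obs_est, obs_lo, obs_hi, rct_est, rct_lo, rct_hi,
             n_sim=100_000, seed=0):
    """Monte Carlo 95% CI for the ratio RR_obs / RR_rct.

    Draw log effect estimates from normal distributions, difference them
    on the log scale, and take the 2.5th/97.5th percentiles.
    """
    rng = np.random.default_rng(seed)
    log_obs = rng.normal(np.log(obs_est), se_from_ci(obs_lo, obs_hi), n_sim)
    log_rct = rng.normal(np.log(rct_est), se_from_ci(rct_lo, rct_hi), n_sim)
    log_ratio = log_obs - log_rct
    lo, hi = np.exp(np.percentile(log_ratio, [2.5, 97.5]))
    return obs_est / rct_est, lo, hi

# Example: the Abuzaid (2018) 30-day all-cause mortality pair from Table 1
ratio, lo, hi = ratio_ci(1.19, 0.74, 1.91, 1.20, 0.50, 2.89)
# A statistically significant difference would be declared only if the
# 95% CI for the ratio excludes 1.
```

A ratio near 1 with a CI spanning 1, as in this example pair, would be counted as consistent under the significance criterion.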

Results

Literature search

Our search on PubMed and Embase yielded 3798 unique citations after removing duplicates. After screening titles and abstracts, we identified 93 full text articles for further review. Of these, 30 reviews met our inclusion criteria (Fig. 1).
Fig. 1

Diagram depicting literature screening process 


Included systematic reviews

The characteristics of the included reviews and the pairs of pooled relative treatment effects from RCTs and observational studies reported in the reviews are summarized in Table 1. Thirty systematic reviews across 7 therapeutic areas (cardiovascular disease [15/30], infectious disease [6/30], oncology [3/30], mental health [2/30], immune-inflammatory [1/30], metabolic disease [1/30], and other [2/30]) were identified from the literature. These reviews included 519 RCTs and observational studies and provided 79 pairs of pooled relative treatment effects from RCTs and observational studies across multiple interventions, comparators, and outcomes. Five pairs were excluded from our assessment because they concerned continuous outcomes (n = 1) or no pooled effect estimate was reported for observational studies (n = 4). As a result, 74 pairs of pooled relative treatment effects from RCTs and observational studies from 29 reviews were available for assessment of consistency.
Table 1

Characteristics of included reviews

Review | Disease | Treatment | Comparator | Endpoint | Number of RCTs | Number of Observational Studies | RCT pooled effect estimate (95% CI) | Observational pooled effect estimate (95% CI)
Abuzaid (2018) [17]Severe aortic stenosisDual anti-platelet therapy (DAPT)Single anti-platelet therapy (SAPT)30-day all-cause mortality37RR 1.2 (0.5–2.89)RR 1.19 (0.74–1.91)
Abuzaid (2018) [17]Severe aortic stenosisDAPTSAPTLongest reported all-cause mortality37RR 1.14 (0.54–2.42)RR 1.06 (0.52–2.18)
Abuzaid (2018) [17]Severe aortic stenosisDAPTSAPTMajor bleeding37RR 1.74 (0.52–5.82)RR 2.23 (1.36–3.65)
Agarwal (2019) [18]Acute coronary syndrome (ACS), coronary artery disease (CAD)Dual therapyTriple therapyMajor bleeding33RR 0.53 (0.38–0.76)RR 0.88 (0.46–1.67)
Agarwal (2018) [19]CADDAPTAspirinPrimary outcome: mid- to long-term (> 30 days) composite of myocardial infarction (MI), stroke, or death84RR 0.43 (0.17–1.11)RR 0.85 (0.72–1.01)
An (2019) [20]Severe aortic stenosisAntiplateletAnticoagulationMortality25RR 0.82 (0.33–2.03)RR 0.47 (0.18–1.22)
An (2019) [20]Severe aortic stenosisAntiplateletAnticoagulationStroke/transient ischemic attack (TIA)25RR 0.9 (0.35–2.33)RR 0.57 (0.31–1.03)
An (2019) [20]Severe aortic stenosisAntiplateletAnticoagulationThromboembolic events25RR 1.13 (0.51–2.49)RR 0.71 (0.38–1.32)
An (2019) [20]Severe aortic stenosisAntiplateletAnticoagulationBleeding25RR 0.34 (0.11–1.04)RR 0.34 (0.2–0.58)
Chien (2020)* [21]Multi-drug resistant gram-negative bacteria (MDR-GNB) infectionsColistinOther antibioticsColistin-associated acute kidney injury (CA-AKI)119OR 2.75 (0.43–17.49)Not reported
Chien (2020) [21]MDR-GNB infectionsColistin monotherapyColistin combination therapyAcute kidney injury (AKI)36OR 1.77 (1.17–2.66)OR 1.15 (0.76–1.76)
Chopra (2012)* [22]PneumoniaStatin therapyNo statin therapyUnadjusted all-cause mortality following an episode of pneumonia19OR 0.84 (0.32–2.18)Not reported
Chopra (2012) [22]PneumoniaStatin therapyNo statin therapyAdjusted all-cause mortality following an episode of pneumonia111OR 0.84 (0.32–2.18)OR 0.66 (0.55–0.79)
Desai (2016) [23]Ankylosing spondylitis, inflammatory bowel diseases, juvenile idiopathic arthritis, plaque psoriasis, psoriatic arthritis, and rheumatoid arthritisAdalimumabEtanerceptDiscontinuation due to adverse events13RR 0.83 (0.39–1.78)Adjusted HR 1.67 (1.26–2.22)
Desai (2016) [23]Ankylosing spondylitis, inflammatory bowel diseases, juvenile idiopathic arthritis, plaque psoriasis, psoriatic arthritis, rheumatoid arthritisAdalimumabInfliximabDiscontinuation due to adverse events15RR 6.17 (0.78–48.71)Adjusted HR 0.57 (0.46–0.7)
Gandhi (2015) [24]Aortic stenosisDAPTMono-antiplatelet therapy (MAPT).Combined end point of 30-day major stroke, spontaneous MI, all-cause mortality, and combined lethal and major bleeding22OR 0.98 (0.46–2.11)OR 3.02 (1.91–4.76)
Ge (2018) [25]Atrial fibrillationNovel oral anticoagulants (NOACs)Vitamin K antagonists (VKAs)Major bleeding events (Fixed effects model)425OR 0.3 (0.14–0.62)OR 0.68 (0.48–0.95)
Ge (2018) [25]Atrial fibrillationNOACsVKAsThromboembolic events (Fixed effects model)425OR 0.14 (0.01–1.3)OR 0.91 (0.49–1.67)
Heffernan (2020) [26]Serious infectionsβ-lactam/aminoglycoside combination therapyβ-lactam monotherapyAll-cause mortality24OR 3.18 (0.79–12.73)OR 0.79 (0.64–0.99)
Ho (2013) [27]Kidney disease (kidney transplant)Once daily tacrolimusTwice daily tacrolimusBiopsy-proven acute rejection (RCT: at 6 months; Observational: mean follow-up ranges from 3 months to 672 months)45RR 1.18 (0.82–1.68)RR 0.83 (0.39–1.78)
Ho (2013) [27]Kidney disease (kidney transplant)Once daily tacrolimusTwice daily tacrolimusBiopsy-proven acute rejection (RCT: at 12 months; Observational: mean follow-up ranges from 3 months to 672 months)25RR 1.24 (0.93–1.65)RR 0.83 (0.39–1.78)
Ho (2013) [27]Kidney disease (kidney transplant)RCT: twice daily tacrolimus; Observational: once daily tacrolimusRCT: once daily tacrolimus; Observational: twice daily tacrolimusPatient survival (RCT: at 6 months; Observational: mean follow-up ranges from 3.5 months to 12 months)22RR 1.03 (1–1.06)RR 1.02 (0.94–1.1)
Ho (2013) [27]Kidney disease (kidney transplant)RCT: twice daily tacrolimus; Observational: once daily tacrolimusRCT: once daily tacrolimus; Observational: twice daily tacrolimusPatient survival (RCT: at 12 months; Observational: mean follow-up ranges from 3.5 months to 12 months)32RR 0.99 (0.97–1.02)RR 1.02 (0.94–1.1)
Khan (2019) [28]CADProton pump inhibitor (PPI)No PPIAll-cause mortality324RR 1.35 (0.56–3.23)RR 1.25 (1.11–1.41)
Kirson (2013)* [29]SchizophreniaDepot antipsychoticsOral antipsychoticsVaries across studies: hospitalization, relapse, discontinuation58RR 0.89 (0.64–1.22)Not reported
Land (2017) [30]Psychiatric illnessesClozapineControl drugs (other antipsychotics)Hospitalization319RR 0.62 (0.41–0.94)RR 0.75 (0.69–0.81)
Li (2016) [31]DiabetesDipeptidyl peptidase-4 (DPP-4) inhibitorsRCT: control; Observational: sulfonylurea (SU)Heart failure381OR 0.97 (0.61–1.56)Unadjusted OR 0.88 (0.22–3.48)
Li (2016) [31]DiabetesDPP-4 inhibitorsRCT: control; Observational: SUHeart failure381OR 0.97 (0.61–1.56)Adjusted HR 1.10 (1.04–1.17)
Li (2016) [31]DiabetesRCT: DPP-4 inhibitors; Observational: sitagliptinRCT: control; Observational: SUHeart failure381OR 0.97 (0.61–1.56)Unadjusted OR 0.39 (0.02–6.26)
Li (2016) [31]DiabetesRCT: DPP-4 inhibitors; Observational: sitagliptinRCT: control; Observational: no sitagliptin useHeart failure381OR 0.97 (0.61–1.56)Adjusted OR 0.75 (0.38–1.46)
Li (2016) [31]DiabetesDPP-4 inhibitorsRCT: control; Observational: active controlHospital admission for heart failure56OR 1.13 (1.00–1.26)Adjusted OR 0.85 (0.74–0.97)
Li (2016) [31]DiabetesDPP-4 inhibitorsRCT: control; Observational: SUHospital admission for heart failure53OR 1.13 (1.00–1.26)Adjusted HR 0.84 (0.74–0.96)
Li (2016) [31]DiabetesDPP-4 inhibitorsRCT: control; Observational: pioglitazoneHospital admission for heart failure52OR 1.13 (1.00–1.26)Adjusted HR 0.67 (0.57–0.78)
Li (2016) [31]DiabetesDPP-4 inhibitorsRCT: control; Observational: other oral antidiabeticsHospital admission for heart failure51OR 1.13 (1.00–1.26)Adjusted OR 0.88 (0.63–1.22)
Li (2016) [31]DiabetesDPP-4 inhibitorsControlHospital admission for heart failure51OR 1.13 (1.00–1.26)Adjusted HR 0.58 (0.38–0.88)
Li (2016) [31]DiabetesRCT: DPP-4 inhibitors; Observational: sitagliptinRCT: control; Observational: no sitagliptin useHospital admission for heart failure52OR 1.13 (1.00–1.26)Adjusted OR 1.41 (0.95–2.09)
Li (2016) [31]DiabetesRCT: DPP-4 inhibitors; Observational: sitagliptinRCT: control; Observational: no sitagliptin useHospital admission for heart failure51OR 1.13 (1.00–1.26)Adjusted HR 1.21 (1.04–1.42)
Li (2016) [31]DiabetesRCT: DPP-4 inhibitors; Observational: sitagliptinRCT: control; Observational: no sitagliptin useHospital admission for heart failure51OR 1.13 (1.00–1.26)Adjusted OR 1.84 (1.16–2.92)
Melloni (2015) [32]Unstable angina/non–ST-segment–elevation myocardial infarction (UA/NSTEMI)RCT: omeprazole; Observational: any PPIRCT: placebo; Observational: no PPIComposite ischemic endpoint at ≈ 1 year120HR 0.99 (0.68–1.44)Adjusted HR 1.35 (1.18–1.54)
Melloni (2015) [32]UA/NSTEMIRCT: omeprazole; Observational: any PPIRCT: placebo; Observational: no PPINonfatal MI at ≈ 1 year110HR 0.92 (0.44–1.9)HR 1.331 (1.146–1.547)
Miles (2019) [33]Heart failureFurosemideTorsemideAll-cause mortality53OR 1.12 (0.7–1.8)OR 0.97 (0.44–2.13)
Miles (2019) [33]Heart failureFurosemideTorsemideHeart failure readmissions41OR 2.04 (1.16–3.60)OR 2.91 (0.78–10.91)
Miles (2019) [33]Heart failureFurosemideTorsemideNew York Heart Association class improvement72OR 0.91 (0.61–1.35)OR 0.65 (0.50–0.85)
Mongkhon (2019) [34]Atrial fibrillationOACNon-OACRisk of dementia14RR 1.31 (0.79–2.18)RR 0.75 (0.67–0.83)
Mongkhon (2019) [34]Atrial fibrillationVKANon-VKARisk of dementia14RR 1.31 (0.79–2.18)RR 0.71 (0.68–0.74)
Raheja (2018) [35]Aortic stenosisDAPTSAPTAll-cause mortality32RR 1.07 (0.48–2.41)RR 1.34 (0.51–3.48)
Raheja (2018) [35]Aortic stenosisDAPTSAPTStroke or TIA32RR 0.93 (0.28–3.06)RR 1.25 (0.32–4.92)
Raheja (2018) [35]Aortic stenosisDAPTSAPTMI32RR 3.62 (0.60–21.76)RR 1.18 (0.14–9.98)
Raheja (2018) [35]Aortic stenosisDAPTSAPTMajor/life-threatening bleeding33RR 1.75 (0.88–3.50)RR 3.24 (1.82–5.75)
Ramjan (2014) [36]HIVFixed-dose combination (FDC) antiretroviral therapy (ART)Separate tablet regimensVirological suppression42RR 1.04 (0.98–1.10)RR 1.07 (0.97–1.18)
Ramjan (2014) [36]HIVFDC ARTSeparate tablet regimensAdherence to ART52RR 1.1 (0.98–1.22)RR 1.17 (1.07–1.28)
Shi (2014) [37]Liver cancerStatinsPlacebo/non-useLiver cancer111RR 1.06 (0.66–1.71)RR 0.57 (0.50–0.64)
Teo (2014) [38]Acute infectionsProlonged infusion, which was defined as administration of either extended infusion or continuous infusion of beta-lactam antibioticsIdentical beta-lactams that were administered as intermittent boluses (20–60 min infusion) according to the manufacturer’s package insertAll-cause in-hospital mortality109RR 0.83 (0.57–1.21)RR 0.57 (0.43–0.76)
Teo (2014) [38]Acute infectionsProlonged infusion, which was defined as administration of either extended infusion or continuous infusion of beta-lactam antibiotics.Identical beta-lactams that were administered as intermittent boluses (20–60 min infusion) according to the manufacturer’s package insertClinical success (cure or improvement)145RR 1.05 (0.99–1.12)RR 1.34 (1.02–1.76)
Vinceti (2018) [39]CancerHighest selenium exposureLowest selenium exposureTotal (any) cancer incidence37RR 1.01 (0.93–1.10)OR 0.72 (0.55–0.93)
Vinceti (2018) [39]CancerHighest selenium exposureLowest selenium exposureCancer mortality17RR 1.02 (0.80–1.30)OR 0.76 (0.59–0.97)
Vinceti (2018) [39]Colorectal cancerHighest selenium exposureLowest selenium exposureColorectal cancer risk21RR 0.99 (0.69–1.43)OR 0.80 (0.68–0.94)
Vinceti (2018) [39]Lung cancerHighest selenium exposureLowest selenium exposureLung cancer risk25RR 1.16 (0.89–1.50)OR 0.74 (0.43–1.28)
Vinceti (2018) [39]Breast cancerHighest selenium exposureLowest selenium exposureBreast cancer risk18RR 2.04 (0.44–9.55)OR 1.09 (0.87–1.37)
Vinceti (2018) [39]Bladder cancerHighest selenium exposureLowest selenium exposureBladder cancer risk22RR 1.07 (0.76–1.52)OR 0.65 (0.46–0.92)
Vinceti (2018) [39]Prostate cancerHighest selenium exposureLowest selenium exposureProstate cancer risk421RR 1.01 (0.90–1.14)OR 0.84 (0.75–0.95)
Wang (2019) [40]PneumoniaPPINo PPIPneumonia1048OR 1.13 (0.71–1.78)OR 1.45 (1.32–1.59)
Wat (2019) [41]Traumatic brain injury (TBI)Antiepileptic drugsPlacebo/no treatmentEarly seizures after TBI36RR 0.58 (0.20–1.72)RR 0.42 (0.29–0.62)
Wong (2017)* [42]Coronary heart disease/CADMacrolidesPlacebo/no treatmentShort-term primary outcome (defined as cardiac mortality, cardiovascular mortality, sudden death, cardiac arrest, all-cause mortality, or composite outcomes including death and/or other cardiovascular events or procedures)515RR 0.99 (0.74–1.34)Not reported
Wong (2017) [42]Coronary heart disease/CADMacrolidesPlacebo/no treatmentLong term primary outcome (defined as cardiac mortality, cardiovascular mortality, sudden death, cardiac arrest, all-cause mortality, or composite outcomes including death and/or other cardiovascular events or procedures)148RR 1.03 (0.96–1.10)RR 1.05 (0.91–1.22)
Yang (2019)* [43]CancerEpoetin alfa biosimilar drugsEpoetin alfa drugsMean of hemoglobin increase14WMD -0.02 (− 0.38–0.34)WMD 0.07 (− 0.12–0.25)
Yang (2019) [43]CancerEpoetin alfa biosimilar drugsEpoetin alfa drugsHemoglobin response11RR 1.09 (0.86–1.38)RR 1.18 (0.87–1.60)
Yang (2019) [43]Breast cancerGranulocyte colony-stimulating factor (G-CSF) biosimilar drugsG-CSF drugsFebrile neutropenia in cycle 153RR 1.14 (0.80–1.63)RR 1.36 (0.84–2.23)
Yang (2019) [43]NHLG-CSF biosimilar drugsG-CSF drugsFebrile neutropenia in cycle 111RR 0.54 (0.20–1.46)RR 0.87 (0.20–3.85)
Yang (2019) [43]CancerG-CSF biosimilar drugs (filgrastim biosimilars)G-CSF drugsBone pain44RR 0.90 (0.78–1.05)RR 0.86 (0.59–1.24)
Yu (2018) [44]Non-cardiac vascular diseaseStatinsPlacebo/no statin treatmentAll-cause mortality36OR 0.62 (0.41–0.92)OR 0.65 (0.48–0.88)
Yu (2018) [44]Non-cardiac vascular diseaseStatinsPlacebo/no statin treatmentPrimary patency110OR 0.39 (0.09–1.65)OR 0.77 (0.59–0.99)
Yu (2018) [44]Non-cardiac vascular diseaseStatinsPlacebo/no statin treatmentAmputation110OR 0.47 (0.07–2.94)OR 0.64 (0.50–0.83)
Yu (2018) [44]Non-cardiac vascular diseaseStatinsPlacebo/no statin treatmentCardiovascular events32OR 0.55 (0.36–0.83)OR 0.87 (0.16–4.60)
Zhang (2019) [45]Atrial fibrillationNOACNon-NOAC therapyRenal impairment113HR 0.82 (0.71–0.93)HR 0.64 (0.58–0.69)
Zhao (2018) [46]CADDAPTSAPTAny bleeding events58RR 1.25 (0.98–1.59)RR 0.87 (0.76–1.01)
Zhao (2018) [46]CADDAPTSAPTMinor bleeding events43RR 1.15 (0.73–1.81)RR 0.84 (0.37–1.93)
Zhao (2018) [46]CADDAPTSAPTMajor bleeding events56RR 1.28 (0.95–1.71)RR 0.99 (0.66–1.51)
Zhao (2018) [46]CADDAPTSAPTMajor bleeding events during hospitalization (random effects model)33RR 1.27 (0.91–1.78)RR 0.50 (0.12–2.09)

*Not included in the analysis


Ratio of relative effect measures from observational studies and RCTs

Figure 2 presents the scatterplot of relative effect measures from observational studies and RCTs across the 74 pairs of pooled relative treatment effects, with 95% CI bars. The ratio of the relative effect measure from observational studies over the corresponding relative effect measure from RCTs ranged from 0.09 to 6.50 (median = 0.92, interquartile range = 0.69–1.27). The ratio was greater than 1 (i.e., the relative effect estimate was larger in observational studies) in 31 of the 74 pairs (41.9%), less than 1 (i.e., larger in RCTs) in 42 of the 74 pairs (56.8%), and equal to 1 in one of the 74 pairs (1.4%). The ratio was greater than 1.43 in 12 of the 74 pairs (16.2%) and less than 0.7 in 20 of the 74 pairs (27.0%), indicating an extreme difference. There was an absence of an extreme difference (0.7 ≤ ratio ≤ 1.43) in 42 of the 74 pairs (56.8%; Table 2). Sensitivity analyses including only one endpoint from each review and ensuring no overlap of data between the included reviews yielded similar findings (Table 2). Scatterplots of relative effect measures from observational studies and RCTs by outcome type and therapeutic area can be found in Additional File 3: Figures S1 and S2.
Fig. 2

Relative effect measures (RR, OR, HR) from observational studies (y-axis) versus corresponding relative effect measures from randomized controlled trials (x-axis) across 74 pairs of pooled relative treatment effects 

Table 2

Ratio of relative effect measures from observational studies and relative effect measures from RCTs (e.g., RRobs/RRrct): (a) among 74 pairs of pooled estimates, (b) with only one endpoint per review included, and (c) with studies reported in multiple reviews excluded

 | Full sample | One endpoint per review | Studies reported in multiple reviews excluded
 | Proportion | % | Proportion | % | Proportion | %
Ratio > 1*a | 31/74 | 41.9 | 12/29 | 41.4 | 24/65 | 36.9
Ratio < 1*b | 42/74 | 56.8 | 17/29 | 58.6 | 40/65 | 61.5
Ratio = 1* | 1/74 | 1.4 | 0/29 | 0.0 | 1/65 | 1.5
Extreme difference (ratio > 1.43) | 12/74 | 16.2 | 5/29 | 17.2 | 8/65 | 12.3
Extreme difference (ratio < 0.7) | 20/74 | 27.0 | 8/29 | 27.6 | 19/65 | 29.2
Absence of an extreme difference (0.7 ≤ ratio ≤ 1.43) | 42/74 | 56.8 | 16/29 | 55.2 | 38/65 | 58.5

*Does not account for direction of effect
a Relative effect larger in observational studies
b Relative effect larger in RCTs


Consistency of relative effect measures from observational studies and RCTs

In 30 of the 74 pairs (40.5%), effect estimates from observational studies and RCTs pointed in opposite directions. The RCT point estimate was outside the 95% CI of the observational study in 35 of the 74 pairs (47.3%), and the observational study point estimate was outside the 95% CI of the RCT in 27 of the 74 pairs (36.5%). There was a statistically significant difference between relative effect estimates from observational studies and RCTs in 15 of the 74 pairs (20.3%). In 13 of the 74 pairs (17.6%), there was a statistically significant difference and the effect estimates pointed in opposite directions (Table 3). The results remained fairly consistent in the sensitivity analyses (Table 3).
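The consistency criteria above reduce to simple sign and interval checks on each pair of pooled estimates. A minimal sketch (our own naming, not the authors' code), again using the Gandhi et al. (2015) pair discussed below:

```python
def consistency_checks(est_obs, ci_obs, est_rct, ci_rct):
    """Three of the review's consistency criteria for one pair of pooled
    relative effect estimates. CIs are (lower, upper) tuples; a relative
    effect points in the protective direction when it is below 1."""
    # Opposite directions: one estimate above 1, the other below 1
    opposite_directions = (est_obs - 1) * (est_rct - 1) < 0
    # Each point estimate checked against the other study type's 95% CI
    rct_outside_obs_ci = not (ci_obs[0] <= est_rct <= ci_obs[1])
    obs_outside_rct_ci = not (ci_rct[0] <= est_obs <= ci_rct[1])
    return opposite_directions, rct_outside_obs_ci, obs_outside_rct_ci

# Gandhi et al. (2015): observational OR 3.02 (1.91-4.76) vs RCT OR 0.98 (0.46-2.11)
checks = consistency_checks(3.02, (1.91, 4.76), 0.98, (0.46, 2.11))
```

For this pair all three checks are triggered: the estimates point in opposite directions and each point estimate falls outside the other study type's interval.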
Table 3

Consistency of relative effect measures from observational studies and relative effect measures from RCTs: (a) among 74 pairs of pooled estimates, (b) with only one endpoint per review included, and (c) with studies reported in multiple reviews excluded

                                                                            Full sample       One endpoint per review   Studies reported in multiple reviews excluded
                                                                            Proportion  %     Proportion  %             Proportion  %
Effect estimates of observational studies and RCTs in opposite directions   30/74       40.5  11/29       37.9          26/65       40.0
RCT effect estimate outside the observational study 95% CI                  35/74       47.3  17/29       58.6          29/65       44.6
Observational effect estimate outside the RCT 95% CI                        27/74       36.5  11/29       37.9          25/65       38.5
Statistically significant difference                                        15/74       20.3  7/29        24.1          14/65       21.5
Statistically significant difference and estimates in opposite directions   13/74       17.6  6/29        20.7          12/65       18.5

Discussion

Our analysis of 29 reviews comparing results of RCTs and observational studies of pharmaceuticals showed, on average, no significant differences in their relative risk ratios across all studies, but also considerable study-by-study variability. The median ratio of the relative effect measure from observational studies to RCTs was 0.92, indicating slightly lower effectiveness/safety estimates in observational studies than in corresponding RCTs. This is in fact somewhat higher than the 0.80 ratio recently found in meta-research comparing effect estimates of randomized clinical trials that use routinely collected data (i.e., from traditional observational study sources such as registries, electronic health records, or administrative claims) for outcome ascertainment with traditional trials not using such data [47]. However, whether judging by the frequency of “extreme” differences (43.2%) or statistically significant differences in opposite directions (17.6%), one could not claim that observational study results consistently replicated RCT results on a study-by-study basis in our sample. There are a number of reasons why any given observational study may not replicate an RCT comparing the same treatments. First, it may not have been the intent of the observational study researchers to match a specific clinical trial—they may have intentionally studied a different treatment population, setting, or protocol in order to complement or test the RCT findings. In such cases, variation in effect estimates arises from estimating a different causal effect. Even if the researchers do attempt to match a specific RCT, the data needed to closely match it may not be available, since patient histories, test results, and other information used for RCT inclusion criteria may not be observed, or outcomes may not be captured the same way.
Even given similar data, non-randomized studies are subject to selection/channeling bias into treatment driven by factors unobservable in either type of study, and analytic attempts to correct for such confounding may have limited success. In some cases, treatment conditions may differ enough between the RCT and real-world practice that replication of results should not be expected, e.g., due to careful safety monitoring that affects subsequent treatment in RCTs. Finally, it is possible that other pharmacoepidemiologic principles, beyond the study design considerations already mentioned, were violated in the individual RWD studies, which could have caused disagreement between their results and the RCTs. While variation in treatment effect estimates due to estimating a different causal effect in a different study population is expected and valid, biased estimates arising from issues with study design or analytical methods are problematic. Details in these reviews were typically insufficient to distinguish among these possible explanations without a detailed review of the individual studies, which we did not attempt here. However, some reviews did attempt to explain the differences they found. For example, in the review by Gandhi et al. (2015) [24], which compared dual-antiplatelet therapy (DAPT) to mono-antiplatelet therapy (MAPT) following transcatheter aortic valve implantation, there was a statistically significant difference between the pooled relative treatment effect estimates from observational studies and RCTs. The primary outcome was more likely to occur in the DAPT group than in the MAPT group in the observational studies (OR 3.02; 95% CI 1.91–4.76); however, no statistically significant difference was found between DAPT and MAPT in the RCTs (OR 0.98; 95% CI 0.46–2.11).
The authors explained that the RCTs (n = 2) and observational studies (n = 2) included in this review had variable patient inclusion/exclusion criteria and differed in the type of prosthetic aortic valve used, which may have introduced selection bias [24]. To allow for better use of individual observational studies to inform decision-making, their ability to replicate RCT results needs to become more reliable, and the “target trial” approach seems to be a path forward. Several systematic efforts using sophisticated observational data research designs to emulate multiple RCTs are underway [48, 49]. These efforts are intended to provide regulatory bodies and other decision makers with empirical evidence to support the development of a framework for assessing when and under what circumstances observational RWE can be used to support a wider range of regulatory decisions. RCT DUPLICATE, a collaboration between the Food and Drug Administration (FDA) and the Division of Pharmacoepidemiology at Brigham and Women’s Hospital and Harvard Medical School, aims to replicate 30 completed Phase III or IV trials and to predict the results of seven ongoing Phase IV trials using Medicare and commercial claims data [50]. The RCT DUPLICATE team has recently reported results for its first 10 trials [51], with hazard ratio estimates within the 95% CI of the corresponding trial for 8 of the 10 emulations. The Multi-Regional Clinical Trials Center and OptumLabs are leading another effort, Observational Patient Evidence for Regulatory Approval and Understanding Disease (OPERAND), which extends the trial emulation activity and relaxes the inclusion/exclusion criteria of the trials to examine treatment effects in the broader patient population treated in routine care [52]. The FDA has also funded the Yale University-Mayo Clinic Center of Excellence in Regulatory Science and Innovation to predict the results of three to four ongoing safety trials using OptumLabs claims data [53].
It is important to understand that clinical trial emulation efforts are being conducted solely to improve understanding of when observational studies may be expected to produce robust results. Bartlett and colleagues [14] found that, in a review of 220 clinical trials published in high-impact medical journals in 2017, only 15% could potentially be emulated using data available from medical claims or EHRs; for example, the inclusion/exclusion criteria for many oncology trials require data on genetic markers and progression-free survival that are unavailable in EHRs. The estimate by Bartlett and colleagues may prove to be an underestimate as the ability to link different types of observational data continues to improve. Nevertheless, it is reasonable to assume that most trials cannot be emulated with existing observational datasets. These emulation efforts are critical to advancing our understanding of the strengths and limitations of observational RWE, identifying issues with study design, endpoint definition, data quality, and analytical methodology that may affect the consistency of findings between RWE and RCTs. While much attention has focused on differences in study populations between observational studies and RCTs as the reason for inconsistency in effect estimates, emerging evidence suggests that issues with study design (e.g., establishing time zero of exposure) may be equally if not more important [7]. The results of these efforts will therefore not provide definitive guidance to decision makers, but they emphasize how even subtle differences in study design and endpoint definition can affect absolute estimates of treatment effect. Moreover, RWE studies answer a different question than RCTs, i.e., “Does it work?” versus “Can it work?” The former is important to a variety of stakeholders beyond regulators. Hence, RWE studies should not be expected to provide results identical to RCTs.

Conclusions

In conclusion, although our review shows no significant average difference in the relative risk ratios between published RCTs and observational studies, there is substantial study-to-study variation. It was impractical to review all individual observational study designs and examine their potential biases, but future work should elucidate how much of the variation is due to differences in study populations versus biased estimates arising from issues with study design or analytical methods. As more target trial replication attempts are conducted and published, more systematic evidence will emerge on the reliability of this approach and on the potential for observational studies to more routinely inform healthcare decisions.

Additional File 1: Search Strategy
Additional File 2: Sensitivity Analyses
Additional File 3: Figures S1 & S2. Figure S1. Relative effect measures from observational studies versus corresponding relative effect measures from randomized controlled trials by outcome type. Figure S2. Relative effect measures from observational studies versus corresponding relative effect measures from randomized controlled trials by therapeutic area.