Research Article: Meta-analyses of Adverse Effects Data Derived from Randomised Controlled Trials as Compared to Observational Studies: Methodological Overview

Date Published: May 3, 2011

Publisher: Public Library of Science

Author(s): Su Golder, Yoon K. Loke, Martin Bland, Jan P. Vandenbroucke

Abstract: Su Golder and colleagues carry out an overview of meta-analyses to assess whether
estimates of the risk of harm outcomes differ between randomized trials and
observational studies. They find that, on average, there is no difference in the
estimates of risk between overviews of observational studies and overviews of
randomized trials.

Partial Text: There is considerable debate regarding the relative utility of different study
designs in generating reliable quantitative estimates for the risk of adverse
effects. A diverse range of study designs encompassing randomised controlled trials
(RCTs) and non-randomised studies (such as cohort or case-control studies) may
potentially record adverse effects of interventions and provide useful data for
systematic reviews and meta-analyses [1],[2]. However, there are strengths and weaknesses inherent to
each study design, and different estimates and inferences about adverse effects may
arise depending on study type [3].

Our analyses found little evidence of systematic differences in adverse effect
estimates obtained from meta-analysis of RCTs and from meta-analysis of
observational studies. Figure 3
shows that discrepancies may arise not only from differences in study design or
from systematic bias, but also from random variation (noise) and the imprecision
inherent in estimating rare events. Discrepancies between the study designs were
smaller in meta-analyses that generated more precise estimates from larger studies,
whether because of better quality or because the populations were more similar
(perhaps because large, long-term RCTs capture a broad population resembling that
of observational studies). Indeed, the adverse effects with discrepant results
between RCTs and observational studies were distributed symmetrically to the right
and left of the line of no difference, meaning that neither study design
consistently over- or underestimated the risk of harm relative to the other. Other
important factors, such as the population studied and the delivery of the
intervention, are also likely at play; for instance, the major discrepancy
identified by Col et al. [119] for HRT and breast cancer is already well
documented. This discrepancy has also been explained by the timing of the start of
treatment relative to menopause, which differed between trials and observational
studies; after adjustment for this, the results from the different study designs
no longer differed [132],[133].
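The role of imprecision in rare-event estimates can be illustrated with a back-of-the-envelope calculation (a hypothetical sketch with made-up numbers, not data from the overview): the standard error of a log odds ratio from a 2×2 table grows sharply as event counts fall, so two unbiased study designs can produce quite different point estimates for the same rare harm purely by chance.

```python
import math

def se_log_or(a, b, c, d):
    # Standard error of the log odds ratio from a 2x2 table:
    # a/b = events/non-events in one arm, c/d in the other.
    return math.sqrt(1/a + 1/b + 1/c + 1/d)

# Hypothetical study with 1,000 participants per arm and a true OR of 1.

# Common adverse effect: 10% event rate in both arms.
common = se_log_or(100, 900, 100, 900)

# Rare adverse effect: 0.5% event rate in both arms.
rare = se_log_or(5, 995, 5, 995)

# The 95% CI for the OR spans a factor of exp(1.96 * SE) either side
# of the point estimate; for the rare outcome that factor exceeds 3,
# so large apparent discrepancies need not reflect bias.
print("common-event CI factor:", math.exp(1.96 * common))
print("rare-event CI factor:", math.exp(1.96 * rare))
```

Even though both hypothetical studies are unbiased, the rare outcome's confidence interval is several-fold wider, which is consistent with the observation above that discrepancies shrink as estimates become more precise.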

Source:

http://doi.org/10.1371/journal.pmed.1001026

