Research Article: Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis

Date Published: April 12, 2019

Publisher: Public Library of Science

Author(s): Robbie C. M. van Aert, Jelte M. Wicherts, Marcel A. L. M. van Assen, Malcolm R. Macleod.

http://doi.org/10.1371/journal.pone.0215052

Abstract

Publication bias is a substantial problem for the credibility of research in general and of meta-analyses in particular, as it yields overestimated effects and may suggest the existence of non-existent effects. Although there is consensus that publication bias exists, how strongly it affects different scientific literatures is less well known. We examined evidence of publication bias in a large-scale data set of primary studies that were included in 83 meta-analyses published in Psychological Bulletin (representing meta-analyses from psychology) and 499 systematic reviews from the Cochrane Database of Systematic Reviews (CDSR; representing meta-analyses from medicine). Publication bias was assessed on all homogeneous subsets of primary studies included in the meta-analyses (3.8% of all subsets of meta-analyses published in Psychological Bulletin), because publication bias methods do not have good statistical properties if the true effect size is heterogeneous. Publication bias tests did not reveal evidence for bias in the homogeneous subsets. Overestimation was minimal but statistically significant, providing evidence of publication bias that appeared to be similar in both fields. However, a Monte-Carlo simulation study revealed that the creation of homogeneous subsets resulted in challenging conditions for publication bias methods, since the number of effect sizes in a subset was rather small (the median number of effect sizes was 6). Our findings are consistent with anything from no publication bias to, in the most extreme case, only 5% of statistically nonsignificant effect sizes being published. These and other findings, in combination with the small percentages of statistically significant primary effect sizes (28.9% and 18.9% for subsets published in Psychological Bulletin and the CDSR, respectively), led us to conclude that the evidence for publication bias in the studied homogeneous subsets is weak, but suggestive of mild publication bias in both psychology and medicine.

Partial Text

Meta-analysis is the standard technique for synthesizing different studies on the same topic, and is defined as “the statistical analysis of a large collection of analysis results from individual studies for the purpose of integrating the findings” [1]. One of the greatest threats to the validity of meta-analytic results is publication bias, meaning that the publication of studies depends on the direction and statistical significance of the results [2]. Publication bias generally leads to effect sizes being overestimated and the dissemination of false-positive results (e.g., [3, 4]). Hence, publication bias results in false impressions about the magnitude and existence of an effect [5] and is considered one of the key problems in contemporary science [6].
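To make the overestimation mechanism concrete, the following Python sketch (illustrative only, not taken from the paper; the true effect, sample sizes, and selection rule are assumptions) simulates many primary studies and "publishes" only those with statistically significant positive results. The mean of the "published" estimates clearly exceeds the true effect:

    # Minimal simulation of publication bias: only significant, positive
    # results are "published", inflating the average published effect size.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    true_effect = 0.2        # assumed true mean difference
    n_per_group = 30         # assumed sample size per group
    n_studies = 5_000

    all_estimates, published = [], []
    for _ in range(n_studies):
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(true_effect, 1.0, n_per_group)
        t, p = stats.ttest_ind(treatment, control)
        estimate = treatment.mean() - control.mean()
        all_estimates.append(estimate)
        if p < 0.05 and t > 0:   # selection on significance and direction
            published.append(estimate)

    print(f"mean of all estimates:       {np.mean(all_estimates):.3f}")  # close to 0.2
    print(f"mean of published estimates: {np.mean(published):.3f}")      # noticeably larger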

Methods for examining publication bias can be divided into two groups: methods that assess or test the presence of publication bias, and methods that estimate effect sizes corrected for publication bias. Methods that correct effect sizes for publication bias usually also provide a confidence interval and a test of the null hypothesis of no effect after correction. Table 1 summarizes the methods together with their characteristics and recommendations on when to use each method; its last column indicates whether the method is included in our analyses. Readers who are not interested in the details of the publication bias methods can focus on the summary in Table 1.
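As an illustration of the first group, the sketch below implements a basic variant of Egger's regression test: regress the standardized effect size (effect divided by its standard error) on precision (one over the standard error) and test whether the intercept departs from zero, which signals funnel-plot asymmetry. This is a hedged sketch, not the authors' implementation, and the toy data are invented:

    # Egger's regression test (basic variant): a nonzero intercept in the
    # regression of (effect / SE) on (1 / SE) indicates funnel-plot asymmetry.
    import numpy as np
    import statsmodels.api as sm

    def eggers_test(effects, standard_errors):
        """Return the intercept and its two-sided p-value (H0: intercept = 0)."""
        effects = np.asarray(effects, dtype=float)
        se = np.asarray(standard_errors, dtype=float)
        z = effects / se               # standardized effect sizes
        X = sm.add_constant(1.0 / se)  # intercept + precision term
        fit = sm.OLS(z, X).fit()
        return fit.params[0], fit.pvalues[0]

    # Toy data in which smaller (less precise) studies show larger effects:
    effects = [0.80, 0.60, 0.50, 0.35, 0.30, 0.25]
    ses = [0.40, 0.32, 0.25, 0.15, 0.10, 0.08]
    intercept, p_value = eggers_test(effects, ses)
    print(f"Egger intercept = {intercept:.2f}, p = {p_value:.3f}")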

Publication bias is a major threat to the validity of meta-analyses. It results in overestimated effect sizes in primary studies, which in turn bias the meta-analytic results (e.g., [3, 5]). Indications of publication bias have been observed in many research fields (e.g., [7, 8, 11, 14]), and various methods have been developed to examine publication bias in a meta-analysis (for an overview, see [2]). We studied the prevalence of publication bias, and the overestimation caused by it, in a large number of meta-analyses published in Psychological Bulletin and the CDSR by applying publication bias methods to homogeneous subsets of these meta-analyses. Homogeneous subsets were created because publication bias methods have poor statistical properties if the true effect size is heterogeneous [5, 32, 50, 52]. The prevalence of publication bias was studied by means of Egger’s test [43], the rank-correlation test [47], the test of excess significance (TES) [50], and p-uniform’s publication bias test [5]. We used p-uniform and a meta-analysis based on the 10% most precise effect size estimates of a meta-analysis to estimate the effect size corrected for publication bias. The statistical properties of our preregistered analyses were also examined by means of a Monte-Carlo simulation study. Our paper differs from previous work [15–21] that studied the presence of questionable research practices and publication bias based on the distribution of p-values, because we did not analyze the distribution of p-values of studies published in a whole research field.
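For the estimator based on the most precise effect sizes, the following minimal sketch reflects our reading of that approach (not the authors' code; the decile cutoff, function name, and data are illustrative): select the 10% of estimates with the smallest standard errors and compute a fixed-effect, inverse-variance weighted estimate on that subset:

    # Fixed-effect (inverse-variance) estimate computed on the most precise
    # decile of effect sizes, as a rough publication-bias-robust estimate.
    import numpy as np

    def precise_subset_estimate(effects, standard_errors, fraction=0.10):
        effects = np.asarray(effects, dtype=float)
        se = np.asarray(standard_errors, dtype=float)
        k = max(1, int(np.ceil(fraction * len(effects))))
        idx = np.argsort(se)[:k]   # studies with the smallest standard errors
        w = 1.0 / se[idx] ** 2     # inverse-variance weights
        estimate = np.sum(w * effects[idx]) / np.sum(w)
        return estimate, np.sqrt(1.0 / np.sum(w))

    effects = [0.70, 0.50, 0.45, 0.40, 0.30, 0.28, 0.25, 0.22, 0.20, 0.18]
    ses = [0.40, 0.35, 0.30, 0.25, 0.20, 0.18, 0.15, 0.12, 0.10, 0.08]
    est, est_se = precise_subset_estimate(effects, ses)
    print(f"estimate from the most precise 10%: {est:.3f} (SE = {est_se:.3f})")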

