**Date Published:** February 27, 2007

**Publisher:** Public Library of Science

**Author(s):** Ramal Moonesinghe, Muin J. Khoury, A. Cecile J. W. Janssens

**Abstract: While the authors agree with John Ioannidis that “most research findings are false,” here they show that replication of research findings enhances the positive predictive value of research findings being true.**

**Partial Text:** Replication of research findings is often lacking, most notably in the field of genetic associations [1–3]. For example, a survey of 600 positive associations between gene variants and common diseases showed that out of 166 reported associations studied three or more times, only six were replicated consistently [4]. Lack of replication results from a number of factors such as publication bias, selection bias, Type I errors, population stratification (the mixture of individuals from heterogeneous genetic backgrounds), and lack of statistical power [5].

We examine the positive predictive value (PPV) of research findings as a function of the number of statistically significant results. Figure 1 shows the PPV of at least one, two, or three statistically significant research findings out of ten independent studies as a function of the pre-study odds of a true relationship (R) for powers of 20% and 80%. The lower lines correspond to Ioannidis’ finding and indicate the probability of a true association when at least one out of ten studies shows a statistically significant result. As can be seen, the PPV is substantially higher when more research findings are statistically significant. Thus, a few positive replications can considerably enhance our confidence that the research findings reflect a true relationship. When R ranges from 0.0001 to 0.01, a higher number of positive studies is required to attain a reasonable PPV. The difference in PPV between powers of 80% and 20% is greater when at least three studies are positive than when at least one study is positive.

Figure 2 gives the PPV for an increasing number of positive studies out of ten, 25, and 50 studies for pre-study odds of 0.0001, 0.01, 0.1, and 0.5 and for powers of 20% and 80%. When there is at least one positive study (r = 1) and power equals 80%, as indicated in Ioannidis’ paper, the PPV declines by approximately 50% for 50 studies compared with ten studies for R values between 0.0001 and 0.1. However, the PPV increases with an increasing number of positive studies, and the percentage of positive studies required to achieve a given PPV declines with an increasing number of studies. The number of positive studies required to achieve a PPV of at least 70% increased from eight for ten studies to 12 for 50 studies when pre-study odds equaled 0.0001, from five to eight when pre-study odds equaled 0.01, from three to six when pre-study odds equaled 0.1, and from two to five when pre-study odds equaled 0.5. The difference in PPV between powers of 80% and 20% declines with an increasing number of studies.
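The dependence of the PPV on the number of positive studies can be sketched with a short calculation. Under a simple model of the kind the authors describe, each of n independent studies is significant with probability equal to the power when the relationship is true, and with probability equal to the Type I error rate α (assumed here to be 0.05, which is not stated in this excerpt) when it is false; the PPV given at least r significant results then follows from binomial tail probabilities. Function names below are illustrative, not from the paper:

```python
from math import comb

def tail(n: int, r: int, p: float) -> float:
    """P(at least r successes out of n independent trials with success prob p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

def ppv(r: int, n: int, R: float, power: float, alpha: float = 0.05) -> float:
    """PPV when at least r of n studies are significant, given pre-study
    odds R of a true relationship and the assumed per-study power and alpha."""
    p_true = tail(n, r, power)   # P(>= r significant | relationship is true)
    p_false = tail(n, r, alpha)  # P(>= r significant | relationship is false)
    return R * p_true / (R * p_true + p_false)

# PPV climbs steeply as more of the ten studies come out positive
# (here R = 0.1, power = 20%):
for r in (1, 2, 3):
    print(r, round(ppv(r, n=10, R=0.1, power=0.2), 2))  # 0.18, 0.42, 0.74
```

Requiring three positive results out of ten lifts the PPV from roughly 18% to roughly 74% in this scenario, matching the qualitative message of Figure 1.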

Although the PPV increases with an increasing number of statistically significant results, the probability of obtaining at least r significant results declines with increasing r. This probability and the corresponding PPV for pre-study odds of 0.0001, 0.01, 0.1, and 0.5 are given for ten studies in Table 2. When power is 20% and pre-study odds are 0.0001, the probability of obtaining at least three statistically significant results is 1% and the corresponding PPV is 0.3%. Both this probability and the corresponding PPV increase with increasing pre-study odds. For example, when R = 0.1, the probability of obtaining at least three significant results is 4% and the PPV is 74%. As expected, both the probability of obtaining statistically significant results and the corresponding PPV also increase with increasing power. However, for very small R values (around 0.0001), an increase in power has minimal impact on the probability of obtaining at least one, two, or three statistically significant results. When power is 80%, the probability of obtaining at least three statistically significant results is 1.2% and the corresponding PPV is 0.9% for R = 0.0001; when pre-study odds are 0.1, the probability of obtaining at least three statistically significant results increases to 10% and the corresponding PPV to 90%.
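The Table 2 quantities can be reproduced under the same assumptions (α = 0.05; a fraction R/(1 + R) of tested relationships is true, since R is the pre-study odds): the marginal probability of at least r significant results averages the binomial tails over true and false hypotheses. Names here are again illustrative, a sketch rather than the authors' code:

```python
from math import comb

def tail(n: int, r: int, p: float) -> float:
    """P(at least r successes out of n independent trials with success prob p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(r, n + 1))

def prob_and_ppv(r: int, n: int, R: float, power: float, alpha: float = 0.05):
    """Marginal probability of >= r significant results, and the PPV.

    With pre-study odds R, a fraction R/(1+R) of hypotheses is true."""
    t = tail(n, r, power)  # given a true relationship
    f = tail(n, r, alpha)  # given no relationship
    prob = (R * t + f) / (1 + R)
    ppv = R * t / (R * t + f)
    return prob, ppv

# Power 20%, R = 0.0001: three or more positives out of ten is both rare
# and barely predictive, as reported in the text.
p, v = prob_and_ppv(3, 10, R=0.0001, power=0.2)
print(f"{p:.0%} {v:.1%}")  # prints "1% 0.3%"
```

The same function reproduces the other figures quoted above, e.g. roughly 4% and 74% for R = 0.1 at 20% power, and roughly 10% and 90% for R = 0.1 at 80% power.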

The importance of research replication was discussed in a Nature Genetics editorial in 1999 lamenting the nonreplication of association studies [8]. The editor emphasized that when authors submit manuscripts reporting genetic associations, the study should include an effect size and should contain either a replication in an independent sample or physiologically meaningful data supporting a functional role of the polymorphism in question. While we acknowledge that our assumptions of identical design, power, and level of significance reflect a somewhat simplified scenario of replication, we quantified the positive predictive value of true research findings for increasing numbers of significant results. True replication, however, requires a precise process in which the exact same finding is reexamined in the same way. More often than not, genuine replication is not done, and what we end up with in the literature is corroboration or indirect supporting evidence. While this may be acceptable to some extent in any scientific enterprise, the distance from this to data dredging, moving the goalposts, and other selective reporting biases is often very small and can contribute to “pseudo” replication.

Source:

http://doi.org/10.1371/journal.pmed.0040028