Summary: Ioannidis (2005) Why Most Published Research Findings Are False

[written by Mathias C. Cronjäger]

Summary of our discussion

Compared to most of the other papers we have read in this reading group, Ioannidis (2005) is rather recent. In light of its large impact (it has been the most downloaded paper from PLOS Medicine), we are, however, comfortable referring to it as a modern classic.

It is a short and very well-written paper that does not presume technical expertise on the part of the reader: anyone familiar with basic statistics and the manipulation of fractions will be able to follow the technical arguments. It contains important insights into how sceptical one should be that a result reported as statistically significant indeed represents a “true” effect.

Since the arguments made by Ioannidis are only a slight variation on standard statistical reasoning, the fact that the paper made such a large impact in other fields (such as medicine and psychology) reflects rather poorly on how well the statistical community has managed to communicate with people outside our own field.

Summary of the paper itself

The arguments in the paper revolve around computing the positive predictive value (PPV), i.e. the proportion of reported positive results that correspond to “true” effects, for different values of the following parameters (an explicit expression for the PPV in these terms is written out after the list):

  • R – the ratio of “true” effects being tested to non-effects being tested. From a Bayesian perspective, this corresponds to the prior odds of an effect being present (the prior probability is R/(R + 1)).
  • α – the rate of type I error. This corresponds to the probability that an individual experiment will have a statistically significant outcome in spite of no true effect being present.
  • β – the rate of type II error. This corresponds to the probability that an experiment will fail to detect a true effect that is present and instead yield a statistically insignificant outcome (so 1 − β is the power of the study).
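
In this notation, the PPV in the absence of bias works out (restating the paper’s derivation here for reference) to

\[ \text{PPV} = \frac{(1-\beta)\,R}{R - \beta R + \alpha}, \]

so a positive finding is more likely true than false precisely when (1 − β)R > α.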

These three parameters are standard in the theory of the PPV. Ioannidis introduces a fourth parameter to account for bias not captured by the above:

  • u – the probability that an analysis which would, under ideal conditions, have yielded a statistically insignificant result nevertheless gets reported as a positive finding.

This fudge factor can incorporate anything from badly designed experiments and researchers being less sceptical of positive results to post-hoc changes of study design, p-hacking or even outright fraud. Ioannidis does not address how likely any of these factors are to contribute to u, but contents himself with re-deriving an expression for the PPV when some amount of bias is taken into account.
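
For reference, the bias-adjusted expression (in the notation above, following the paper’s derivation) becomes

\[ \text{PPV} = \frac{(1-\beta)R + u\beta R}{R + \alpha - \beta R + u - u\alpha + u\beta R}, \]

which reduces to the earlier expression when u = 0; for any reasonable α and β, increasing u drags the PPV down.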

The author then considers the effect of multiple groups investigating the same effect independently of one another: if just one group obtains a statistically significant result, that result is likely to get published even if the negative results of the other groups are not. This means that for “hot” topics (which are subject to a great number of parallel experiments) we should be even more wary of single studies reporting statistically significant effects.
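
The corresponding expression when n independent teams probe the same relationship (again in the paper’s notation, and ignoring bias) is

\[ \text{PPV} = \frac{R\,(1-\beta^{n})}{R + 1 - (1-\alpha)^{n} - R\,\beta^{n}}, \]

so that, as n grows, a lone significant result conveys less and less information and the PPV falls back towards the prior probability R/(R + 1).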

Based on his mathematical arguments, Ioannidis then proceeds to give a list of six corollaries, all of which are again reasonably well known to statisticians and most practising scientists (such as “smaller studies tend to have a lower PPV, all other factors being equal” or “greater flexibility in study design and in how outcomes are measured leads to a lower PPV”).

In his discussion, Ioannidis supports the polemical title of the paper by arguing that even for conservatively chosen values of R, α, β, and u, we would expect a PPV below 50%. Finally, he gives an overview of how the state of affairs might be improved. His prescriptions here are similar to what other statisticians and researchers have argued for, such as increasing transparency (pre-registration of trials; making raw data and analysis code available) and encouraging the publication of negative results.
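
To make the “below 50%” claim concrete, here is a minimal sketch in Python (the function follows the bias-adjusted expression summarised above; the parameter values are our own illustrative choices, not the rows of the paper’s table):

    def ppv(R, alpha=0.05, beta=0.20, u=0.0):
        """Bias-adjusted positive predictive value (notation as above)."""
        true_positives = (1 - beta) * R + u * beta * R   # true effects ending up reported as significant
        false_positives = alpha + u * (1 - alpha)        # null effects ending up reported as significant
        return true_positives / (true_positives + false_positives)

    # Illustrative choices: an exploratory field with 1:10 pre-study odds,
    # 80% power and modest bias already falls well below 50%.
    print(round(ppv(R=0.1, u=0.1), 2))   # ~0.36

Even without any bias (u = 0), the same 1:10 pre-study odds give a PPV of only about 0.62, so the prior odds R do much of the work here.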

The author concludes by suggesting that it is his personal belief that a number of “classics” in various fields would not hold up if replications were attempted. Given the outcomes of later replication efforts (such as the Open Science Collaboration’s 2015 attempt to replicate 100 published results in psychology, listed in the references), this seems prescient.

References

The paper itself:

Ioannidis, J.P.A., 2005. Why Most Published Research Findings Are False. PLoS Medicine, 2(8), p.e124. Available at: http://dx.plos.org/10.1371/journal.pmed.0020124.

Later papers expanding on the topic

Colquhoun, D., 2014. An investigation of the false discovery rate and the misinterpretation of P values. Royal Society Open Science, 1(3), p.140216. Available at: http://rsos.royalsocietypublishing.org/content/1/3/140216.

Jager, L.R. & Leek, J.T., 2014. An estimate of the science-wise false discovery rate and application to the top medical literature. Biostatistics (Oxford, England), 15(1), pp.1–12. Available at: http://biostatistics.oxfordjournals.org/content/15/1/1.

Leek, J.T. & Jager, L.R., 2016. Is most published research really false? Available at: http://biorxiv.org/lookup/doi/10.1101/050575.

A famous replication study in Psychology

Open Science Collaboration, 2015. Estimating the reproducibility of psychological science. Science, 349(6251), p.aac4716. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26315443.
