
Limits of Complexity

[written by William K. Larsen]

We read two papers. Both were review articles from the journal Nature.

Summary of: Ultimate physical limits to computation (2000), by Seth Lloyd

The article is written for a general, scientifically literate audience and is mostly easy to digest. It opens with rather elementary material, such as a description of the concept of an AND gate, but covers advanced topics as well.

Lloyd defines the ultimate physical laptop as a “computer” with a mass of 1 kg and a volume of 1 liter.

Two main arguments are made concerning the ultimate physical laptop:

  1. Energy limits the speed of computation – which leads to a limit of approximately 10^50 computations per second (10^41 GHz); see the sketch just after this list.
  2. Entropy limits memory space – which leads to a limit of approximately 10^31 bits (10^21 GB).
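The speed limit comes from the Margolus–Levitin theorem, which Lloyd invokes: a system with average energy E can perform at most 2E/(πħ) elementary operations per second. Here is a minimal sketch of the arithmetic for the 1 kg laptop (our own script, not Lloyd's, using standard constants):

```python
from math import pi

# Constants (SI units)
HBAR = 1.0545718e-34   # reduced Planck constant, J*s
C = 2.99792458e8       # speed of light, m/s

# Margolus-Levitin bound: at most 2E / (pi * hbar) operations per second
# for a system with average energy E.
mass_kg = 1.0
energy = mass_kg * C**2                      # rest-mass energy, E = m c^2
ops_per_second = 2 * energy / (pi * HBAR)

print(f"E = {energy:.3e} J")                 # ~9.0e16 J
print(f"max ops/s = {ops_per_second:.3e}")   # ~5.4e50, i.e. Lloyd's ~10^50
```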

Overall, the paper makes you think of a computer as a physical object with energy, mass, and volume, rather than as the idealized model often used in computer science. It gives a taste of a number of physical concepts and ideas along the way.

Our discussion:

The limits presented in the paper are very loose upper bounds. One could theoretically fill an empty jar with colored sand and consider the very large number of possible configurations as representative of the information stored. The problem is that the jar of sand cannot be utilized for useful work in the same way that a computer can.

This paper was written before the death of Moore’s law, and thus the limits may have seemed more like an important perspective than a fun thought experiment.

 

Brief summary of: Limits on fundamental limits to computation (2014), by Igor L. Markov

The aim of this paper is to survey the different possible barriers to the further development of computers.

The other paper (Lloyd’s) considered fundamental limits to computation by looking at energy, space, and information.

This corresponds to only 3 of 25 cells in the following table from the paper:

[Table from the paper categorising limits to computation]

Markov considers different cells in this table and makes some concluding remarks.

Our discussion of the paper:

The paper is quite hard on quantum computing – claiming that quantum computers may only find use in simulating quantum-chemical phenomena, and that even this is uncertain.

Markov generally does not go into much depth on any one topic, but the references seem very useful: eight of the 99 are highlighted and given a short description.

Limits of computation

On Thursday, February 16th, at 10 AM we will discuss:

http://www.nature.com/nature/journal/v512/n7513/full/nature13570.html

and

http://www.nature.com/nature/journal/v406/n6799/full/4061047a0.html

Jotun will join via Skype – at least Søren and William will be in the Department of Statistics, Oxford, room LG.05.

Summary: Ioannidis (2005) Why Most Published Research Findings Are False

[written by Mathias C. Cronjäger]

Summary of our discussion

Compared to most other papers we have read over the course of running this reading group, Ioannidis (2005) is rather recent. In light of its large impact (it has been the most downloaded paper from PLOS Medicine) we are however comfortable referring to it as a modern classic.

It is a short and very well written paper, which does not presume technical expertise on the part of the reader: anyone familiar with basic statistics and manipulation of fractions will be able to follow the technical arguments made. It contains important insights regarding how sceptical one should be that a result reported as statistically significant indeed represents a “true” effect.

Since the basic arguments made by Ioannidis are only a slight variation on basic statistical arguments, the fact that this paper made such a large impact in other fields (such as medicine and psychology) reflects rather poorly on how well the statistical community has managed to communicate with people outside our own field.

Summary of the paper itself

The arguments in the paper revolve around computing the positive predictive value (PPV) – the number of “true” effects among studies reporting positive results, relative to the total number of positive results reported – given different values of the following parameters:

  • R – the rate of “true” effects being tested relative to non-effects being tested. From a Bayesian perspective, this corresponds to the prior probability of an effect being present.
  • α – the rate of type I error. This corresponds to the probability that an individual experiment will have a statistically significant outcome in spite of no true effect being present.
  • β – the rate of type II error. This corresponds to the probability that an experiment will fail to detect a true effect which is present, and instead yield a statistically insignificant outcome.

These three parameters are standard in the theory of the PPV. Ioannidis introduces a fourth parameter to account for bias not captured by the above:

  • u – the probability that a tested non-effect gets reported as a positive result, even though under ideal conditions it would have been statistically insignificant.

This fudge factor can incorporate anything from badly designed experiments and researchers being less sceptical of positive results, to post-hoc changes of study design, p-hacking, or even outright fraud. Ioannidis does not address how likely any of these factors are to contribute to u, but contents himself with re-deriving an expression for the PPV when some amount of bias is taken into account.
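For reference, here are the PPV expressions as we read them from the paper, written out in terms of the parameters above (a transcription worth double-checking against the original):

```latex
% PPV of a claimed positive finding, without bias:
\mathrm{PPV} = \frac{(1-\beta)R}{R + \alpha - \beta R}

% PPV when a fraction u of would-be negative results is biased to positive:
\mathrm{PPV} = \frac{(1-\beta)R + u\beta R}{R + \alpha - \beta R + u - u\alpha + u\beta R}
```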

The author then considers the effect of multiple groups investigating the same effect independently of one another: if just one group obtains a statistically significant result, it is likely to get published even if the negative results of the other groups are not. This means that for “hot” topics (which are subject to a great number of parallel experiments) we should be even more wary of single studies reporting statistically significant effects.
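The paper quantifies this too. As we read it, with n independent teams testing the same relationship (and no bias), the probability that at least one of them reports a positive result leads to:

```latex
% PPV of "at least one of n independent studies was positive":
\mathrm{PPV} = \frac{R\,(1-\beta^{n})}{R + 1 - (1-\alpha)^{n} - R\,\beta^{n}}
```

For n = 1 this reduces to the unbiased expression above; as n grows it falls towards R/(R+1), the prior probability, since with enough teams some positive result is almost guaranteed and a reported positive carries little information.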

Based on his mathematical arguments, Ioannidis then proceeds to give a list of six corollaries, all of which are again reasonably well known to statisticians and most practising scientists (such as “smaller studies tend to have a lower PPV, all other factors being equal” or “greater flexibility in study design and in how outcomes are measured leads to a lower PPV”).

In his discussion, Ioannidis supports the polemical title of the paper by arguing that even for conservatively chosen values of R, α, β, and u, we would expect a PPV below 50%. Finally, he gives an overview of how the state of affairs might be improved. His prescriptions are similar to what other statisticians and researchers have argued for, such as increasing transparency (pre-registration of trials; making the raw data and code used in analyses available) and encouraging the publication of negative results.
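To make this concrete, here is a small Python check of the biased-PPV formula above, with illustrative parameter values of our own choosing (not the exact scenarios from the paper’s tables):

```python
def ppv(R, alpha=0.05, beta=0.20, u=0.0):
    """Positive predictive value with bias u (formula transcribed above)."""
    numerator = (1 - beta) * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# Hypothetical scenario: a long-shot hypothesis with prior odds R = 1:10,
# conventional alpha = 0.05, 80% power, and modest bias u = 0.1.
print(f"{ppv(R=0.1, alpha=0.05, beta=0.20, u=0.10):.2f}")  # ~0.36, below 50%
```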

The author concludes by offering his personal belief that a number of “classics” in various fields would not hold up if replications were attempted. Given the results of later replication efforts (such as the Open Science Collaboration’s 2015 replication attempts of 100 famous results in psychology, listed in the references), this seems prescient.

References

The paper itself:

Ioannidis, J.P.A., 2005. Why Most Published Research Findings Are False. PLoS Medicine, 2(8), p.e124. Available at: http://dx.plos.org/10.1371/journal.pmed.0020124.

Later papers expanding on the topic

Colquhoun, D., 2014. An investigation of the false discovery rate and the misinterpretation of P values. Royal Society Open Science, pp.1–15. Available at: http://rsos.royalsocietypublishing.org/content/1/3/140216.

Jager, L.R. & Leek, J.T., 2014. An estimate of the science-wise false discovery rate and application to the top medical literature. Biostatistics (Oxford, England), 15(1), pp.1–12. Available at: http://biostatistics.oxfordjournals.org/content/15/1/1.

Leek, J.T. & Jager, L.R., 2016. Is most published research really false?, Available at: http://biorxiv.org/lookup/doi/10.1101/050575.

A famous replication study in Psychology

Open Science Collaboration, 2015. Estimating the reproducibility of psychological science. Science, 349(6251), p.aac4716. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26315443.

 

Next Wednesday: Why Most Published Research Findings Are False

The next meeting of the Classic papers discussion group will take place next Wednesday (1 Feb. 2017) at 9:30 in the Department of Statistics, room LG.05. We will be discussing a modern classic: Why Most Published Research Findings Are False by John P. A. Ioannidis. All participants are welcome (just talk to the receptionist when you arrive if you don’t have building access; she should let you in).

Link to the paper: http://dx.doi.org/10.1371/journal.pmed.0020124

Since its publication, the paper has led to widespread debate within science about the problem of irreproducibility: how widespread it is, and what to do about it. Given the central importance of this paper within those debates, as well as the various reproducibility studies it has spawned, we deem this paper well worth reading, and hope that many people will show up to discuss it with us.

About the classic papers club

The aim of this club is to read the papers that everyone keeps citing but which few people have read. We plan to read a paper every third week for the next 20 years.

We tend to meet Wednesday mornings, and tend to announce the papers we will read ahead of time. Everyone is welcome: if a paper sounds interesting to you, please come by.

This reading group used to be organised on facebook. The old page can be found here.


Summing up 10 classic papers on Computation

For about three years we have, every third week, read a classic paper in physics, statistics, probability theory, information theory, game theory, or computation. So far we have gone through about 40-50 papers, and they have generally been superb.

Søren Riis (Queen Mary, London) and Jotun Hein gave a talk summarizing 10 of the papers that fall within computation/complexity theory: Babbage, Turing, McCulloch/Pitts, von Neumann, Cook, Karp, Smale, Shor, Impagliazzo, and Papadimitriou.

The slides of their presentation are available here.

Summary: McCulloch and Pitts (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity

This is a most peculiar paper: it must be the first neural networks paper, and it is thus extremely famous.

The paper is not easy to read due to a lot of terms taken from a 1938 textbook by Rudolf Carnap, and I (Jotun) could well have missed some finer points, but I think I got the basic ideas: time is discrete, values are boolean, and the fundamental unit is the neuron, which takes input from many neurons; all that matters is how many 0s and 1s it receives (not their order). From this, networks evaluating most logical expressions can be designed. One can add extra “carrier neurons” that copy the content of a neuron and effectively allow a neuron to receive input from another neuron several time steps back. Cycles need special treatment.
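As a toy illustration of the model just described (our own sketch, not the paper’s notation): a unit fires iff enough of its boolean inputs are 1, so simple logic gates fall out of the choice of threshold.

```python
def mp_neuron(inputs, threshold):
    """A toy McCulloch-Pitts-style unit: outputs 1 iff at least
    `threshold` of its boolean inputs are 1. Only the count of 1s
    matters, not their order, as noted in the summary above."""
    return 1 if sum(inputs) >= threshold else 0

# Logic gates fall out of the threshold choice:
AND = lambda a, b: mp_neuron([a, b], threshold=2)
OR = lambda a, b: mp_neuron([a, b], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
```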

There is no Hebbian (1949) learning in this paper, so the parameters of each neuron are preset.

The paper is motivated by the brain, but McCulloch and Pitts state that the model should not be interpreted too literally as representing how the brain works. Nevertheless, they end the paper with a series of very specific hopes for the model, like this one (p. 131):
“To psychology, however defined, the net would contribute all that could be achieved in that field – even if the fields were pushed to ultimate psychic units or ‘psychons,’ for a psychon can be no less than the activity of a single neuron. Since that activity is inherently propositional, all psychic events have an intentional, or semiotic character. The ‘all-or-none’ character of these activities, and the conformity of their relations to those of the logic of propositions, insure that the relations of psychons are those of the two-valued logic of propositions. Thus in psychology, introspective, behavioristic and physiological, the fundamental relations are those of two-valued logic.”

Thus McCulloch and Pitts are quite confident that their model represents some sort of physical reality of the brain. They also mention tinnitus, paraesthesias, hallucinations, delusions and disorientation (p. 131), so they are keen to move on to the applicability of their network theory.

I find it fascinating to read the papers from the 30s, 40s and 50s by von Neumann, Wiener, McCulloch and Pitts, with their wild ambition and optimism stemming from the belief that a biological machine, such as the brain, is just the right kind of wired system that can be exactly modelled. It has proven a lot harder, and it is still an open question what a biological machine is and how much physical detail needs to be retained to, for instance, model the neuron. Ganti (1971) used the term Soft Machine, which is a nice metaphor but seems to have contributed little.