
Summary: Papadimitriou (2014) Algorithms, complexity, and the sciences

[Summary by Jotun Hein]

On Wednesday February 10th at 10AM we discussed Algorithms, complexity, and the sciences by Christos Papadimitriou, published in PNAS in 2014.

This is in principle an outlier, since you can't call a review published 2 years ago a classic, but computation is so central to all we do that we felt discussing it would be an ideal way to update ourselves and possibly to choose the next CLASSICS to read.

As summarized by Patrick Gemmell (famous unbeliever and Facebook-denier): Algorithms are analysed in terms of complexity – roughly speaking, the amount of time one may have to wait for the algorithm to produce a solution. In practice, algorithm designers have usually found that the problems their algorithms address fall into two groups – those that can be solved efficiently (in polynomial time) and those that are tougher. Though it is technically an open question whether all search problems can be solved efficiently (shortest path, partition into pairs, min-cut, Euler path, and linear programming can), it is thought that some problems (e.g. travelling salesman, partition, clique, Hamiltonian path, integer linear programming) cannot. This belief is, roughly speaking, formalized as the P != NP conjecture.
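To make the contrast concrete, here is a minimal sketch (not from the paper) of why travelling salesman resists the efficient approach: the only obvious exact method enumerates all (n−1)! tours, so the running time explodes factorially with the number of cities, whereas problems like shortest path admit polynomial-time algorithms.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exhaustive search over all tours starting and ending at city 0.
    There are (n-1)! candidate tours, so this is hopeless beyond ~12 cities;
    no known algorithm does essentially better in the worst case."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        best = min(best, cost)
    return best

# A toy 4-city instance (symmetric distance matrix)
dist = [
    [0, 10, 15, 20],
    [10, 0, 35, 25],
    [15, 35, 0, 30],
    [20, 25, 30, 0],
]
print(tsp_brute_force(dist))  # → 80, via the tour 0-1-3-2-0
```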

NP denotes the class of search problems; P denotes the class of search problems that can be solved efficiently. If P = NP, life would be all rainbows and unicorns. (Aaronson has spoken about this science-fiction scenario.) Probably P != NP, and life will remain tough.

People are particularly interested in the NP-complete problems. These problems are equivalent in the sense that if you can show that one of them can be solved efficiently, then you can show that they all can, and furthermore that P = NP. (Note: a new tough problem is shown to be NP-complete by providing a reduction – an efficient way of turning the input of one problem into an equivalent input for another. Also note: most problems that have not been classified as P or NP-complete are total, meaning a solution must exist for every given input, e.g. prime factorization.)
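A defining feature of NP, worth keeping in mind here, is that even when finding a solution seems hard, checking a proposed solution is easy. As a sketch (our illustration, not from the paper): finding a large clique in a graph is NP-complete, yet verifying a claimed clique takes only polynomial time.

```python
from itertools import combinations

def is_clique(edges, vertices):
    """Verify a proposed clique in polynomial time: every pair of the
    candidate vertices must be joined by an edge. This easy checkability
    is what makes clique a search problem in NP, even though *finding*
    a large clique is NP-complete."""
    return all((u, v) in edges or (v, u) in edges
               for u, v in combinations(vertices, 2))

edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
print(is_clique(edges, [0, 1, 2]))  # → True: all three pairs are edges
print(is_clique(edges, [1, 2, 3]))  # → False: the edge 1-3 is missing
```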

Considering complexity leads to some interesting and quite philosophical questions. For example, in economics the notion of Nash equilibrium is often used. Traditionally, rational agents are conceived as playing games and coming to situations whereby all players are doing as well as they can, given the actions of the others. Papadimitriou suggests that "because groups of players are supposed to attain this pattern of play in real life, there should be a way to simulate [the equilibrium finding] process on a computer in less than astronomical time." Unfortunately, recent results have shown that finding Nash equilibria is a PPAD-complete problem, and therefore unlikely to be efficiently solvable. So, even if one is willing to accept that economic agents are behaving rationally (and there are plenty of reasons why they might not be), it is not clear quite how agents could be solving the difficult problem of finding Nash equilibria in general. If algorithmic processes are not able to find a solution efficiently, why should we expect a human being to be able to do so, even if she is supported by an IT department and an army of Oxford-educated management consultants? Furthermore, why should we allow economic reasoning that assumes agents can find Nash equilibria to dominate public policy? I'm sure economists have discussed the pros and cons of game theory as a model in great depth, but perhaps not from the computational perspective.
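For tiny games the equilibrium is of course easy to find; the PPAD-hardness bites only in general. As a minimal sketch (our example, not from the paper), the interior mixed equilibrium of a 2x2 bimatrix game falls straight out of the indifference conditions – each player randomises so that the opponent is indifferent between her two actions:

```python
def mixed_equilibrium_2x2(A, B):
    """Interior mixed Nash equilibrium of a 2x2 bimatrix game via the
    indifference conditions. Assumes a fully mixed equilibrium exists
    (so the denominators are nonzero); in general games, equilibrium
    finding is PPAD-complete and no such closed form is available."""
    # p = prob. the row player plays row 0, making the column player
    # indifferent between her two columns
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # q = prob. the column player plays column 0, making the row player
    # indifferent between her two rows
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

# Matching pennies: the unique equilibrium is (1/2, 1/2)
A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs
print(mixed_equilibrium_2x2(A, B))  # → (0.5, 0.5)
```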

In general, expecting the limitations of a computer to apply to nature is a strange idea. Digital philosophers take things to the extreme by hypothesising that the universe is a computer. Is the behaviour of nature subject to algorithmic constraints? Should scientific theories be efficiently implementable on a computer before they are taken seriously? This seems rather extreme because, despite its algorithmic universality, a Turing machine is a very particular kind of conceptual device, one that goes through a sequence of states. It's not clear we want to use it everywhere. (Turing '52 did not worry about whether his reaction-diffusion systems were efficiently computable on a Turing '36 machine – computers were used to approximate the equations in the '52 paper, but we don't remember Turing confusing the world with a computer.) Furthermore, in the case of Nash equilibria, there is an empirical question of whether the particular games agents are supposed to play are necessarily tough to crack. Perhaps they are quite easy? Perhaps we play the same games over and over again, and the solutions are intuitive to us? I wonder if economists have any idea?

A final interesting aspect of this paper was the discussion of the MWU (multiplicative weight update) algorithm. This very simple algorithm – change the strategy you play in proportion to its past performance and a learning parameter – is apparently able to find solutions to some fairly challenging problems. Moreover, given the standard assumptions of population genetics (weak selection, infinite populations, no linkage, etc.), a computational analysis of the Wright-Fisher equations seems to suggest that idealised populations play a coordination game whose dynamics are described by the MWU strategy. This maximizes the entropy of the allele distributions at a particular locus. This leads to the idea that maintaining diversity is a large part of what evolution is about. The analysis of diploid populations is used as a potential explanation for a discrepancy between theoretical and observed heterozygosity seen in populations.
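The one-line rule is easy to state in code. Here is a minimal sketch of the exponential-weights variant of MWU (our illustration; the paper's evolutionary version re-weights by fitness rather than by an exponential of the loss, but the principle is the same):

```python
import math

def multiplicative_weights(losses, eta=0.5):
    """Multiplicative weight update over a set of strategies: each round,
    re-weight every strategy by exp(-eta * its loss) – precisely the
    'adjust in proportion to past performance and a learning parameter'
    rule. losses[t][i] is strategy i's loss in round t (values in [0, 1]);
    eta is the learning parameter."""
    n = len(losses[0])
    w = [1.0] * n
    for round_losses in losses:
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]  # final mixed strategy

# Two strategies; the first consistently loses less, so after ten
# rounds nearly all the weight has concentrated on it
losses = [[0.1, 0.9]] * 10
print(multiplicative_weights(losses))
```

Despite its simplicity, this rule is a no-regret learning dynamic, which is what connects it both to solving hard optimisation problems and to the population-genetics interpretation above.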

Again, this application of algorithms does appear to raise some interesting questions. But we are not sure there really is an excess of heterozygosity – based on our experience of biology, we think it might not be so easy to find the appropriate sites to support such conclusions. We also wondered how, when starting from the same assumptions, the population geneticists and the computational people come to different conclusions about heterozygosity. Either one group is wrong, or there are matters of taste involved. To be really happy we want to see the classical approximations and reasoning side by side with the new computational perspective. For this reason we'll probably read Chastain E, et al. (2014) Algorithms, games, and evolution. Proc Natl Acad Sci USA 111(29):10620–10623 next. Papadimitriou is a co-author, so his photo can stay for some weeks.

The next meeting will be at 9AM on March 9th.

