[Summary written by Jotun Hein]
This paper cannot be called a classics paper, but it was informative and opinionated and led us to the key papers in the field in no time. The paper consists of two quite separate parts:
A – a classification of the discrete computational world into 5 possible countries. I found it a bit pointless, and I could not see the relationship between the first 2 countries and the last 3.
ALGORITHMICA. Here NP = P.
HEURISTICA. Here problems are polynomial on average over all possible data sets of a given size, but intractable in the worst case.
PESSILAND [Danes might swap E & I]. Here there are hard average cases, but no one-way (encryption) functions.
MINICRYPT. Here one-way functions exist, but public-key encryption is impossible (which must mean that the function can be inverted).
CRYPTOMANIA. And here public-key encryption is possible.
B – goes through the original definition of average complexity by Levin. Unfortunately, no real example is given of a problem where there is a difference between average-case and worst-case complexity. The ideas are quite understandable. An algorithm can only have average polynomial complexity if the set of data sets on which it is not polynomial shrinks sufficiently fast as their size grows. One tricky thing is that complexity is not defined in terms of the algorithms at hand but only in terms of the problem. There are a lot of things not discussed here, like how polynomial transformations of one problem into another skew the distribution on the possible data sets.
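For concreteness, Levin's "polynomial on average" condition is usually written as follows (a sketch in standard notation, not quoted from the paper: t_A(x) is the running time of algorithm A on input x, mu is the distribution on inputs, |x| is the input size):

  \exists\, \varepsilon > 0 : \quad \sum_{x} \mu(x)\,\frac{t_A(x)^{\varepsilon}}{|x|} \;<\; \infty .

The fractional power epsilon is what allows exponential running times on rare inputs: as long as mu gives those inputs exponentially small weight, the sum still converges, which is exactly the "bad data sets must shrink sufficiently fast" condition above.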
I think complexity based on distributions would be interesting to pursue in this case: generate data from the coalescent with mutation on a finite string. If you have long strings/a low mutation rate, you are in the perfect phylogeny domain, where a linear algorithm exists. If you make the mutation rate infinite, you get a uniform distribution on all data sets, and the worst-case complexity (ignoring the distribution) is NP-complete, as the sketch below illustrates.
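As a toy illustration of that idea (my own sketch, not something from the paper), one can simulate binary data under a simple Kingman coalescent with finite-sites mutation and check the four-gamete test, which for binary characters is the standard criterion for whether a perfect phylogeny exists. The function names, the mutation model (Poisson number of mutations per branch, each flipping one uniformly chosen site) and the parameter theta are my choices; with small theta the data almost always passes the test, while large theta drives it towards uniformly random matrices that almost always fail it.

import numpy as np
from itertools import combinations

def simulate_coalescent_binary(n, num_sites, theta, rng=None):
    """Toy model (my construction): Kingman coalescent on n samples, then
    finite-sites mutation at rate theta/2 per unit branch length, each
    mutation flipping one uniformly chosen site (recurrent mutation allowed)."""
    rng = rng or np.random.default_rng()
    active = list(range(n))                  # node ids of the current lineages
    time_of = {i: 0.0 for i in range(n)}     # node id -> coalescent time (0 at leaves)
    parent, t, next_id = {}, 0.0, n
    while len(active) > 1:
        k = len(active)
        t += rng.exponential(2.0 / (k * (k - 1)))   # waiting time, rate k(k-1)/2
        i, j = rng.choice(len(active), size=2, replace=False)
        a, b = active[i], active[j]
        parent[a] = parent[b] = next_id
        time_of[next_id] = t
        active = [x for x in active if x not in (a, b)] + [next_id]
        next_id += 1
    root = active[0]
    # Push sequences from the root down to the leaves, mutating along each branch.
    children = {}
    for c, p in parent.items():
        children.setdefault(p, []).append(c)
    seq = {root: np.zeros(num_sites, dtype=int)}
    stack = [root]
    while stack:
        p = stack.pop()
        for c in children.get(p, []):
            s = seq[p].copy()
            branch = time_of[p] - time_of[c]
            for _ in range(rng.poisson(theta / 2.0 * branch)):
                s[rng.integers(num_sites)] ^= 1      # flip one random site
            seq[c] = s
            stack.append(c)
    return np.array([seq[i] for i in range(n)])      # rows = samples, columns = sites

def has_perfect_phylogeny(data):
    """Four-gamete test: a binary matrix admits an (unrooted) perfect phylogeny
    iff no pair of columns shows all four patterns 00, 01, 10, 11."""
    for a, b in combinations(range(data.shape[1]), 2):
        if len({(x, y) for x, y in zip(data[:, a], data[:, b])}) == 4:
            return False
    return True

if __name__ == "__main__":
    for theta in (0.5, 5.0, 50.0):
        data = simulate_coalescent_binary(n=20, num_sites=30, theta=theta)
        print("theta =", theta, "perfect phylogeny possible:", has_perfect_phylogeny(data))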