Summary: Bell (1964) On the Einstein-Podolsky-Rosen paradox
[Summary by Søren Riis]
On Friday 8th April, we discussed the paper “On the Einstein-Podolsky-Rosen paradox”, Physics 1 (3), 195 (1964), by John Bell, where he introduced what later became known as Bell’s inequality.
One of the remarkable features of quantum physics is that a locally realistic theory cannot always describe measurements on spatially separated systems. Correlations that are allowed within the framework of a locally causal theory are usually referred to as classical.
The point is that quantum mechanics does not always result in “classical” correlations, and this can be quantified by the violation of Bell’s inequalities. These are constraints that have to be satisfied by any classical correlations.
In the meeting, we went over Bell’s paper. I was already familiar with Einstein, Podolsky and Rosen’s 1935 paradox (the EPR paradox), and I have also tried to get my head round Bohr’s lesser-known response (also from 1935) to the EPR paper. I had also read various modern presentations of Bell’s inequality, but I had never read Bell’s original paper.
I think John Bell’s interpretation of quantum physics is somewhat unclear. In Quantum Gravity 2, p. 611 (1981), he wrote:
“When the ‘system’ in question is the whole world where is the ‘measurer’ to be found? Inside, rather than outside, presumably. What exactly qualifies some subsystems to play this role? Was the world wave function waiting to jump for thousands of millions of years until a single-celled living creature appeared? Or did it have to wait a little longer for some highly qualified measurer — with a Ph.D.?”
Without going into the technical details, a (simplified) way to think of the problem is that Alice and Bob are performing measurements on so-called entangled particles at two spatially separated places. The entanglement ensures that there is 100% correlation between the result of Alice’s measurement on her particle and the result of the corresponding measurement on Bob’s particle, provided they measure the same quantity.
In the setup for Bell’s inequality, Alice can choose between three measurements A, B and C, and Bob also has a choice between the same three measurements A, B and C.
Expressed somewhat crudely (and stripping away further details), quantum physics predicts that we can have non-transitivity (made precise by the inequality quoted after this list), so:
- the result of A is strongly correlated with the result of B, AND
- the result of B is strongly correlated with the result of C, YET
- the result of A is uncorrelated with the result of C.
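For reference, the inequality Bell actually derives (in modern notation, for the spin-singlet pair he analyses, where $P(\vec{a},\vec{b})$ denotes the expected product of the two outcomes when Alice measures spin along $\vec{a}$ and Bob along $\vec{b}$) is

$$ 1 + P(\vec{b},\vec{c}) \;\ge\; \bigl| P(\vec{a},\vec{b}) - P(\vec{a},\vec{c}) \bigr|, $$

whereas quantum mechanics predicts $P(\vec{a},\vec{b}) = -\vec{a}\cdot\vec{b}$. Taking $\vec{a},\vec{b},\vec{c}$ coplanar, with 60° between $\vec{a}$ and $\vec{b}$ and between $\vec{b}$ and $\vec{c}$, gives $\tfrac{1}{2} \ge 1$, a clear violation.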
To dramatise the “essence” of the situation further, imagine that Alice and Bob repeatedly choose two of the three measurements A, B and C (a little brute-force check of this scenario is sketched after the list below). After hundreds of repeated experiments they notice that
- if they measure A and B they always get the same result;
- if they measure B and C they always get the same result.
But to their utter amazement, they notice that
- if they measure A and C they always get two different (opposite) results.
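A minimal way to see why no “pre-programmed answers” model can produce this pattern is to enumerate the possibilities; here is a little Python sketch (my own illustration, not something in Bell’s paper):

```python
from itertools import product

# Local-realist toy model: in each run the particle pair carries pre-assigned
# answers (a, b, c), each +1 or -1, and measuring A, B or C simply reveals
# the corresponding value.
surviving = []
for a, b, c in product([+1, -1], repeat=3):
    if a == b and b == c and a == -c:   # the three observed patterns at once
        surviving.append((a, b, c))

print(surviving)  # [] -- no assignment reproduces "A=B, B=C, yet A=-C"
```

Of course the real argument has to handle probabilistic hidden variables and imperfect correlations, which is exactly what the inequality above does.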
There have been various attempts to interpret quantum physics to make sense of such “impossible” correlations.
Within the last ten years, Bohr’s Copenhagen interpretation has lost its status as the primary interpretation, and an increasing number of physicists support wild interpretations like Everett’s many-worlds interpretation.
In my dramatised EPR thought experiment there is no reason to believe Alice and Bob would conclude that they split into different copies of themselves. The way I see the problem (which I think captures the essence of the Copenhagen interpretation), it seems more natural that Alice and Bob find that the world as a whole follows some mathematical laws, so even their choice of measurements is a part of this mathematical pattern.
From this view, it is not just the two particles that are entangled. It seems that they would be forced to conclude that their choice of which measurement to perform would be similarly entangled in agreement with some underlying mathematical principles that are constraining the universe as a whole.
Summary: Papadimitriou (2014) Algorithms, complexity, and the sciences
[Summary by Jotun Hein]
On Wednesday February 10th at 10AM we discussed “Algorithms, complexity, and the sciences” by Christos Papadimitriou, published in PNAS in 2014.
This is in principle an outlier, since you can’t call a review published 2 years ago a classic, but computation is so central to all we do that we felt discussing it would be an ideal way to update ourselves and possibly choose the next CLASSICS to read.
As summarized by Patrick Gemmell (famous unbeliever and Facebook-denier): Algorithms are analysed in terms of complexity – roughly speaking, the amount of time one may have to wait for the algorithm to produce a solution. In practice, algorithm designers have usually found that the problems their algorithms address fall into two groups – those that can be solved efficiently (in polynomial time) and those that are tougher. Though it is technically an open question whether all search problems can be solved efficiently (shortest path, partition into pairs, min-cut, Euler path, and linear programming can), it is thought that some problems (e.g. travelling salesman, partition, clique, Hamiltonian path, integer linear programming) cannot. This belief is, roughly speaking, formalized as the P != NP conjecture.
The class NP denotes the class of search problems. P denotes the class of search problems that can be solved efficiently. If P = NP life would be all rainbows and unicorns. (Aaronson has spoken about this science fiction scenario). Probably P != NP and life will remain tough.
People are particularly interested in the NP-complete problems. These problems are equivalent in the sense that if you can show that one of them can be solved efficiently then you can show that they all can, and furthermore that P = NP. (Note: a new tough problem is shown to be NP-complete by providing a reduction – an efficient way of turning the input of one problem into an equivalent input for another. Also note: most problems that have not been classified as P or NP-complete are total, i.e. a solution must exist for every given input, e.g. prime factorization.)
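To make the “tough” side of the divide concrete, here is a toy brute-force solver for the Partition problem mentioned above (my own illustration, not from the paper): it inspects subsets one by one, so in the worst case the work grows like 2^n.

```python
from itertools import combinations

def partition(nums):
    """Return a subset summing to half the total, or None. Brute force, O(2^n)."""
    total = sum(nums)
    if total % 2:
        return None
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == total // 2:
                return subset
    return None

print(partition([3, 1, 1, 2, 2, 1]))   # (3, 2): one half of a valid split
```

No polynomial-time algorithm is known for any NP-complete problem (Partition does admit a dynamic program that is polynomial in the numerical values involved, but not in the length of their binary encoding), and P != NP is precisely the conjecture that none exists.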
Considering complexity leads to some interesting and quite philosophical questions. For example, in economics the notion of Nash equilibrium is often used. Traditionally, rational agents are conceived as playing games and coming to situations whereby all players are doing as well as they can do, given the actions of others. Papadimitriou suggests that “because groups of players are supposed to attain this pattern of play in real life, there should be a way to simulate [the equilibrium finding] process on a computer in less than astronomical time.” Unfortunately, recent results have shown that finding Nash equilibria is a PPAD-complete problem, and therefore unlikely to be efficiently solvable. So, even if one is willing to accept that economic agents are behaving rationally (and there are plenty of reasons why they might not be), it is not clear quite how agents could be solving the difficult problem of finding Nash equilibria in general. If algorithmic processes are not able to find a solution efficiently, why should we expect a human being to be able to do so, even if she is supported by an IT department and an army of Oxford-educated management consultants? Furthermore, why should we allow economic reasoning that assumes that agents can find Nash equilibria to dominate public policy? I’m sure economists have discussed the pros and cons of game theory as a model in great depth, but perhaps not from the computational perspective.
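For anyone who has not met the definition, the equilibrium condition itself is easy to check by brute force on tiny games; here is a standard Prisoner’s Dilemma example in Python (a generic illustration, not tied to the paper):

```python
# Strategies: 0 = cooperate, 1 = defect. payoff_row[r][c] is the row player's
# payoff and payoff_col[r][c] the column player's when strategies (r, c) are played.
payoff_row = [[3, 0],
              [5, 1]]
payoff_col = [[3, 5],
              [0, 1]]

def is_pure_nash(r, c):
    # neither player can do better by unilaterally switching strategy
    row_ok = all(payoff_row[r][c] >= payoff_row[alt][c] for alt in (0, 1))
    col_ok = all(payoff_col[r][c] >= payoff_col[r][alt] for alt in (0, 1))
    return row_ok and col_ok

print([(r, c) for r in (0, 1) for c in (0, 1) if is_pure_nash(r, c)])  # [(1, 1)]
```

Exhaustive search is fine for a 2×2 table; the PPAD-completeness result concerns finding (generally mixed) equilibria in arbitrary games, where nothing comparably cheap is believed to exist.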
In general, expecting the limitations of a computer to apply to nature is a strange idea. Digital philosophers take things to the extreme by hypothesising that the universe is a computer. Is the behaviour of nature subject to algorithmic constraints? Should scientific theories be efficiently implementable on a computer before they are taken seriously? This seems rather extreme because, despite its algorithmic universality, a Turing machine is a very particular kind of conceptual device, one that goes through a sequence of states. It’s not clear we want to use it everywhere. (Turing ‘52 did not worry about whether his reaction diffusion systems were efficiently computable on a Turing ‘36 machine – computers were used to approximate the equations in the ‘52 paper, but we don’t remember Turing confusing the world with a computer.) Furthermore, in the case of Nash equilibria, there is an empirical question of whether the particular games agents are supposed to play are necessarily tough to crack. Perhaps they are quite easy? Perhaps we play the same games over and over again, and the solutions are intuitive to us? I wonder if economists have any idea?
A final interesting aspect of this paper was the discussion of the MWU (multiplicative weight update) algorithm. This very simple algorithm – change the strategy you play in proportion to its past performance and a learning parameter – is apparently able to find solutions to some fairly challenging problems. Moreover, given the standard assumptions of population genetics (weak selection, infinite populations, no linkage, etc.), a computational analysis of the Wright-Fisher equations seems to suggest that idealised populations play a coordination game whose dynamics are described by the MWU strategy. This maximizes the entropy of the allele distributions at a particular locus. This leads to the idea that maintaining diversity is a large part of what evolution is about. The analysis of diploid populations is used as a potential explanation for a discrepancy between theoretical and observed heterozygosity seen in populations.
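A minimal sketch of the MWU rule as described above (the names, learning rate and toy loss sequence here are my own, not the paper’s):

```python
import random

def mwu(losses, eta=0.1):
    """Multiplicative weights: losses[t][i] in [0, 1] is action i's loss at step t.
    Returns the final probability distribution over actions."""
    weights = [1.0] * len(losses[0])
    for loss_t in losses:
        # down-weight each action in proportion to its loss and the learning rate
        weights = [w * (1 - eta * l) for w, l in zip(weights, loss_t)]
    total = sum(weights)
    return [w / total for w in weights]

random.seed(0)
# Toy data: action 0 is slightly better on average than action 1.
losses = [[random.uniform(0.0, 0.4), random.uniform(0.2, 0.6)] for _ in range(500)]
print(mwu(losses))  # nearly all of the probability mass ends up on action 0
```

In the population-genetics reading, as I understand it, the “actions” are alleles, the payoffs come from fitnesses, and weak selection plays the role of the small learning parameter.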
Again, this application of algorithms does appear to raise some interesting questions. But we are not sure if there really is an excess of heterozygosity – based on our experience of biology we think it might not be so easy to find the appropriate sites to support such conclusions. We also wondered how, when starting from the same assumptions, the population geneticists and the computational people come to different conclusions about heterozygosity. Either one group is wrong or there are some matters of taste involved. To be really happy we want to see the classical approximations and reasoning side by side against the new computational perspective. For this reason we’ll probably read Chastain E, et al. (2014) Algorithms, games, and evolution. Proc Natl Acad Sci USA 111(29):10620–10623 next. Papadimitriou is a co-author so his photo can stay for some weeks.
The next meeting will be at 9AM on March 9th.
Summary: Einstein, Podolsky & Rosen (1935), Can Quantum-Mechanical Description of Physical Reality Be Considered Complete?
David Emms summarised today’s discussion:
A very interesting paper to read and, for one that was foundational to the future understanding of quantum mechanics, also quite straightforward to follow. Despite being considered one of the founders of QM thanks to his paper on the photo-electric effect in 1905, Einstein was never quite happy with the theory, famously declaring “God doesn’t play dice” (to which Bohr, the main proponent of the Copenhagen interpretation, replied “Stop telling God what to do with his dice!”). The EPR paper cut to the heart of the matter by pinpointing an aspect of quantum theory that appears so contrary to our physical intuition (entanglement). At the time the paper was written the Heisenberg uncertainty principle was known, but generally thought of in a physically intuitive way whereby making a measurement of one property physically disturbs the system and thereby causes another property to become uncertain, for example position and momentum. The EPR paper led to the realisation that things were stranger than that. They showed that the uncertainty principle applied even without a particle being disturbed by a measurement on the particle itself. This was done by way of a thought experiment using a pair of particles that were entangled (though they didn’t use the term themselves, this was the paper that first discussed this distinctly quantum behaviour).
The paper begins by arguing that a ‘complete’ theory must be able to account for every element of physical reality, and that an ‘element of reality’ is any physical quantity that can be predicted with certainty without disturbing the system. They recap that if the momentum of a particle is known in QM then its position is unknown and vice versa. Hence, when the momentum is known the position has no physical reality. This applies to any pair of QM observables whose operators don’t commute. Thus, if it turns out that the momentum and position of a particle both simultaneously have ‘physical reality’ then QM can’t be complete.
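Concretely, the non-commutation in question is the canonical one (modern notation; EPR write everything in terms of $h/2\pi$):

$$ [\hat{x}, \hat{p}] = \hat{x}\hat{p} - \hat{p}\hat{x} = i\hbar, \qquad \Delta x \, \Delta p \ge \frac{\hbar}{2}, $$

so no state is a simultaneous eigenstate of both operators, which is why, by their criterion, position and momentum cannot both be ‘elements of reality’ within QM.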
Setting up the thought experiment, they allow a pair of particles (particles 1 and 2) to interact and then become separated. QM provides the wave function for the complete system but doesn’t give the individual state of either of the two particles. Two different measurements on particle 1 leave particle 2 (which no longer interacts with particle 1) in two different states. However, no real change has occurred to particle 2. Thus, as they see it, it is possible to assign two different wave functions to the same reality. This is discussed, and the maths worked through, in the context of the position and momentum of a pair of particles. They demonstrate that (as they see it) they are able to know both properties of a particle by carrying out one of the two measurements on the other, entangled particle. Thus, both properties (position and momentum) are known with certainty and so are elements of physical reality. They are not both given by QM and so QM must be incomplete.
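For reference (quoting from memory of the original paper), the entangled state EPR work with is, up to normalisation and writing $\hbar = h/2\pi$,

$$ \Psi(x_1, x_2) = \int_{-\infty}^{\infty} e^{(i/\hbar)(x_1 - x_2 + x_0)p} \, dp, $$

a simultaneous eigenstate of the commuting pair $x_1 - x_2$ and $p_1 + p_2$: measuring particle 1’s position fixes particle 2’s position, while measuring particle 1’s momentum instead fixes particle 2’s momentum, which is exactly the ‘two different wave functions for the same reality’ move described above.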
The view of Einstein, Podolsky and Rosen was that QM is incomplete and that what is often referred to as a ‘hidden variables’ theory is required. The QM resolution of the EPR paradox is that the principle of locality or the concept of realism has to be abandoned. The violation of locality may seem to be impossible because it could in turn cause problems for causality, since signals appear to travel faster than the speed of light. In fact, it is impossible to construct a situation that uses quantum entanglement to send information, and so causality is not lost. John Bell in 1964 showed that there were inequalities linking the correlations between measurements that had to be obeyed by any local hidden variables theory and which were violated by QM. Subsequent experiments, starting in the 1970s and including a notable one last year, showed that these inequalities were violated and so a local hidden variables theory could not describe reality. Despite this, the various thought experiments devised by Einstein to challenge aspects of QM are rightly credited with strengthening the understanding of the theory and identifying those aspects that most strongly clash with our physical intuition about the world.
On Politics: Rousseau
[This is a summary of our discussion of chapter 15 in Alan Ryan’s On Politics. It was written by Ulla S. Koch]
We discussed J.-J. Rousseau for half an hour and came to the conclusion that nobody understood him, least of all he himself. He led what seems to have been a miserable personal life, losing his mother before he even got to know her and in his turn abandoning all five of his children. He had a knack for making enemies, and his books were burned in his home town of Geneva. Eventually he went mad. However, he was a gifted writer and composer, and many of his ideas are today so engrained in our common consciousness that we take them for granted (p. 533).
In the Social Contract (1762) he outlined his idea of a perfect society, which was not the product of a revolution but was modelled on the Roman Republic and ancient Sparta. Despite being a reactionary political thinker, he became the inspiration for radicals. He was part of the French Enlightenment, subscribing to its ideals of rationality and the impartiality of laws, but was at the same time its prime contemporary critic, since he emphasized the importance of human irrational feelings and antisocial urges in the creation of society. In Discourse on the Arts and Sciences (1750) he claimed that contemporary man was overeducated and in all respects inferior to his noble, unlettered forebears.
Like Plato, Rousseau was an idealist who took as his starting point his belief that man is by nature good. Mankind is characterized by two fundamental qualities, a “love of self”, or urge for self-preservation, and empathy (pitié) for his fellows (i.e. not innately bad as Hobbes surmised, or burdened with original sin). This means that, if uncorrupted by life in modern society, our natural tendency would be to help, or at least avoid harming, any other sentient being. This was a radical break with earlier, especially Christian, theorists, paving the way for the discussions of nature vs. nurture in modern psychology and sociology. The source of all the miseries of modern society is, according to his work Discours sur l’origine et les fondements de l’inégalité parmi les hommes (1755), private property rights (p. 547). The reason why a state exists at all is to prevent civil war, which is latent in society – not because of the instincts of natural man but because of socialized man and his yearning for property and the concomitant estrangement from his fellow man. States originally emerged when populations increased and mankind accordingly was driven to coordinate and to develop language and other means of social interaction and organization (p. 544).
Again like Plato, Rousseau sees education as the means of enabling man to preserve his innate goodness even in the face of the detrimental forces of society. In Emile ou de l’education he describes how to develop a boy into a man by allowing him to live as close as possible to his original innocent state of nature. A tutor can gently guide and mould a young man by setting the scene for him to make his own discoveries and deductions. But the best solution remains to adopt the life of a hermit, free from competition, envy, fear and admiration (he could also try attending the mindfulness seminars at Oxford).
His break with the concept of original sin led him towards theories that approach the theory of evolution (p. 540): mankind was somewhere on the continuum along with the great apes. He builds on the ancient idea that mankind has developed through a series of stages (p. 545). What is interesting is that he sees technological advances, like farming and metalworking, as the drivers for change (and hence corruption) – as a modern-day archaeologist I concur.
His ideas on the evolution of human society, on pedagogical principles (children should be treated as children), and on the importance of external factors in personality formation are at least part of his legacy.
Summary: Fisher (1925) Theory of Estimation
[Summary by Jotun Hein]
Again, extremely worthwhile!!
Fisher produced a series of pioneering papers from 1918 and over the following years and decades, and whether or not the one we discussed today is the one that introduces the most new concepts is not so essential: a series of ideas that have since found their way into textbooks are (almost) first introduced in these papers.
He seems to suffer a bit from not having an established framework of probability, and on the first page he gives what seems a very convoluted formulation of the law of large numbers, managing to use six vectors to define it. He eventually states that if the probability vector of the hidden model is close to that of the true model, then the probability fractions in finite observations from the two models will also be close to each other. And then he states that this could probably be proven, but decides not to!!
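In modern notation, the statement he is circling around is essentially the law of large numbers for multinomial sampling (my paraphrase, not Fisher’s own formulation): if $n_i$ of $n$ independent observations fall in class $i$, whose probability under the model is $p_i$, then

$$ \hat{p}_i = \frac{n_i}{n} \;\longrightarrow\; p_i \quad \text{as } n \to \infty, $$

so that models with nearby probability vectors produce nearby observed fractions in large samples.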
He then moves on to define estimation, which is a mapping from observations to parameter space, and he exemplifies the problems you can get into by trying to estimate the mid-point of the Cauchy distribution with the empirical mean. He doesn’t refer to Cauchy either. Then come consistent estimators (which converge to the true parameter as the number of observations increases), and if you ask for more you also want efficiency (the asymptotic variance is minimal). He goes on to prove that the maximum likelihood estimator is efficient under mild conditions. He proves that two efficient estimators will have asymptotic correlation 1, which must mean one is an affine function of the other as far as I can see. He shows how you can rectify the bias (converging to the wrong value) of an efficient estimator. He defines the Fisher information matrix but doesn’t call it that. Very interesting discussion of the inherent noise in the data (unavoidable) and the error caused by inefficient estimators (avoidable).
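A quick numerical illustration of the Cauchy point (my own sketch, not Fisher’s): the sample mean of Cauchy data never settles down as the sample grows, while the sample median, a consistent though not fully efficient estimator of the centre, does.

```python
import math
import random
import statistics

random.seed(1)

def cauchy_sample(n, centre=0.0):
    # standard Cauchy via the inverse CDF: centre + tan(pi * (U - 1/2))
    return [centre + math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]

for n in (100, 10_000, 1_000_000):
    xs = cauchy_sample(n)
    print(n, round(statistics.mean(xs), 3), round(statistics.median(xs), 4))
# The mean column keeps jumping around (occasionally wildly); the median column
# hugs the true centre 0 ever more tightly.
```

If I remember the textbook numbers correctly, the median attains about 8/π² ≈ 81% of the efficiency of the maximum likelihood estimator here, which is exactly the sort of gap Fisher’s efficiency concept quantifies.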
Clearly one issue is the contrast between asymptotic properties and properties of estimators for finite data.
He also covers sufficient statistics (a function of the data that contains ALL the information about the parameter) and ancillary statistics (functions of the data that contain NO information about the parameter). There is nothing about minimal sufficient statistics or the non-existence of maximal ancillary statistics.
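The textbook way to pin sufficiency down (not Fisher’s own notation here) is the factorisation criterion: $T$ is sufficient for $\theta$ exactly when the likelihood factors as

$$ f(x_1, \dots, x_n; \theta) = g\bigl(T(x_1, \dots, x_n); \theta\bigr)\, h(x_1, \dots, x_n), $$

i.e. the data enter the $\theta$-dependent part only through $T$; an ancillary statistic is the opposite extreme, a function of the data whose distribution does not depend on $\theta$ at all.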
I hope I didn’t mess up too much in my summary – it was very worth reading and recapitulated a series of key concepts in likelihood statistics. We got a bit tired over the last 10 pages. The ½-page letter by Rev. Bayes was the wrong one, so we will return to that.
The writing style is a bit peculiar: he has only three references, and they are all to himself. There is nothing on hypothesis testing in this paper. He makes some intuitive jumps, and in some places a curve or illustration would have helped, but in general it is very enjoyable. He uses the phrase “without antecedent knowledge” – it almost sounds like “Don’t use the word prior, please”.