
Summarizing a Lecture by Von Neumann given in 1948 on the theory of Automata

[Summary written by Patrick Gemmell]

We read The General and Logical Theory of Automata, a lecture given by John Von Neumann at the Hixon Symposium in Pasadena in September 1948. (At about 40 pages, we can only assume the talk finished some time in 1949.)

The lecture was fairly entertaining from a historical perspective, and gives a good idea of where people thought computing, neuroscience, and AI were heading in 1948.

First off, JVN (John Von Neumann) was keen to argue for an axiomatic, black-box approach to understanding biology and brains. In particular, he saw the brain as a collection of about 10 billion black boxes that emit pulses and behave in an essentially digital fashion. (The figure for the number of neurons is within an order of magnitude of the modern estimate of roughly 86 billion, though it must be said that neurons are often modelled in more detail, e.g. using the Hodgkin-Huxley model, for which the Nobel Prize was awarded in ‘63.) Amusingly, JVN described the computer as smaller than the brain “in the sense which really matters,” i.e. switching complexity. Today the computer is smaller in both senses (the A8 chip in a modern iPhone has about two billion transistors) though tomorrow it may be the brain that is smaller in the “only sense that matters.”
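As an aside, the sort of digital abstraction JVN had in mind is close to the McCulloch-Pitts neuron: a black box that sums weighted input pulses and either fires or doesn’t. A minimal sketch (the weights and threshold below are illustrative, not from the lecture):

```python
# A McCulloch-Pitts style threshold unit: the neuron as a digital
# black box that emits a pulse (1) only if the weighted input sum
# reaches a threshold. Weights and threshold are illustrative.

def neuron(inputs, weights, threshold):
    """Return 1 (pulse) if the weighted input sum reaches threshold, else 0."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# An AND-like unit: fires only when both input lines carry a pulse.
print(neuron([1, 1], [1, 1], threshold=2))  # -> 1
print(neuron([1, 0], [1, 1], threshold=2))  # -> 0
```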

JVN also took time to contrast analogy machines (analogue computers, where numbers are represented by physical quantities such as current, rotation, or fluid pressure) with digital ones. According to JVN, the main advantage of digital machines is that they can carry calculations to whatever precision we design them for, while analogue machines always have, in practice, a fairly limited signal-to-noise ratio. On the other hand, JVN pointed out that every step of a digital calculation must be correct or the final error can be enormous: interestingly, JVN could think of no other human endeavour where the result essentially depends on a sequence of a billion steps. Nor could we; can you? Today most people seem to have forgotten both of these interesting characteristics of digital computers and merely take them for granted.
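JVN’s billion-step worry is easy to make quantitative: if each of n steps independently succeeds with probability 1 - p, the whole computation succeeds with probability (1 - p)^n. A back-of-envelope sketch (the per-step failure probabilities are our assumptions, not figures from the lecture):

```python
# How quickly a tiny per-step error probability p eats a
# billion-step computation: P(success) = (1 - p)**n.

n = 10**9  # a billion steps, JVN's figure

for p in (1e-12, 1e-10, 1e-9):
    success = (1 - p) ** n
    print(f"p = {p:.0e}: P(all steps correct) ~ {success:.4f}")
# p = 1e-12 -> ~0.9990, p = 1e-10 -> ~0.9048, p = 1e-09 -> ~0.3679
```

Even a one-in-a-billion chance of error per step leaves only about a 37% chance that the overall result is right, which is exactly why the digital scheme demands such extreme per-step reliability.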

In the talk, JVN was prescient about future developments in some ways but missed the mark in others. For example, JVN was clear that it is important to determine the number of steps needed in a computing procedure (à la computational complexity theory) rather than just whether a procedure can be computed in principle (à la computability theory/Turing ‘36). On the other hand, JVN thought that an equally important development would be to take account of the non-zero probability of errors at intermediate steps in a computation. Today, algorithms are usually accompanied by descriptions of their complexity, but no one really worries about the probability of failure, as this is handled by coding layers, transmission protocols, and hardware redundancy. JVN also thought that there was “no doubt” that one can design self-repairing machines, though today self-healing computers remain a daydream, so that, again, we tend to rely on redundancy rather than anything more organically inspired.
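To make the redundancy point concrete, here is a minimal sketch of triple modular redundancy: run three unreliable copies of a computation and take a majority vote, which masks any single failure. (The toy fault model, independent low-bit flips with probability p, is our own assumption, not anything from the lecture.)

```python
# Triple modular redundancy: three unreliable copies plus a
# majority vote tolerate any single faulty copy.

import random

def unreliable_step(x, p=0.01):
    """Compute x + 1, but flip the low bit with probability p."""
    result = x + 1
    return result ^ 1 if random.random() < p else result

def voted_step(x, p=0.01):
    """Run three copies and return the majority answer."""
    a, b, c = (unreliable_step(x, p) for _ in range(3))
    # a wins if it matches either other copy; otherwise b and c agree.
    return a if a == b or a == c else b

print(voted_step(41))  # -> 42, even if one of the three copies failed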

Towards the end of his talk, JVN discussed self-reproducing automata and neuroscience. This was perhaps the most interesting part of the reading because JVN was opinionated, and may yet be proved right. [JVN’s stance seems to reflect his interest in natural science and stands in contrast to Turing’s, as well as to the major thread of AI that dominated the post-cybernetics/pre-connectionist era.] First, JVN thought it unfortunate that computation was so closely connected to formal logic, which he saw as a rigid, combinatorial field, divorced from the more cultivated areas of number and analysis. Second, JVN thought that, given biology’s success at dealing with abstract concepts, logic and computing would eventually borrow more from neuroscience than neuroscience would from formal logic. While there have been many advances on the logical side of computing since 1948, it does seem that today more excitement is generated by deep learning and hierarchical Bayesian modelling than by symbolic AI; certainly, if undergraduates were asked to solve JVN’s example problem of recognising triangles, they would be much more likely to train up a neural network than to think about how to describe the attributes of geometric objects in logical terms. On the other hand, the majority of computing is not (yet!) AI, and the logical/discrete techniques developed since JVN’s talk support the bulk of the big ideas in the computer science curriculum: languages, automata, compilers, formal verification, database theory, operating systems, and so on.
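For concreteness, the symbolic route to JVN’s triangle problem might look like the following hedged sketch: given three labelled vertices, a triangle is just three non-collinear points, testable with a cross product. (The learned alternative would instead start from raw pixels and fit a classifier; the geometry here is standard and not from the lecture.)

```python
# Symbolic triangle recognition: three points form a triangle
# exactly when they are not collinear.

def is_triangle(p1, p2, p3):
    """True if the three points are non-collinear, i.e. span a real triangle."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Twice the signed area of the triangle; zero means collinear.
    cross = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)
    return cross != 0

print(is_triangle((0, 0), (1, 0), (0, 1)))  # -> True
print(is_triangle((0, 0), (1, 1), (2, 2)))  # -> False (collinear)
```

Of course, the hard part of the problem is getting from pixels to labelled vertices in the first place, which is precisely where the neural-network route earns its keep.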

JVN’s talk finished with a discussion of self-replicating machines. Here JVN explained that all one needs to make a self-replicating machine is (a) a machine that fabricates whatever machine a blueprint describes, (b) a machine that duplicates blueprints, and (c) a machine that takes a blueprint, feeds it to the fabricating machine (a), copies it with the duplicating machine (b), and then shoves the newly copied blueprint into the newly minted machine. Here we were finally a little incredulous, and decided that JVN’s black-box thinking had gone a little too far: now everything interesting was happening inside the black box. Perhaps we should read about the Universal Constructor next.
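For what it’s worth, the software analogue of this recipe is the quine, a program that prints its own source. The mapping of parts to JVN’s (a)-(c) below is our own analogy, not his:

```python
# A quine: the string s is the "blueprint". Interpolating it once
# builds the program's body (the fabricating step, a), and %r embeds
# a verbatim copy of the blueprint itself (the copying step, b), so
# the output is the whole program again.

s = 's = %r\nprint(s %% s)'
print(s % s)
```

The same trick of keeping a passive description that is both interpreted and copied verbatim is what makes JVN’s scheme work, so perhaps the black box is less mysterious than we gave it credit for.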
