
Blog archives

Here we collect our writing on various topics from our day-to-day work and our reading clubs.

About the Humanities Book Club

This book club is an endeavour to broaden our horizons and critically engage with good writing from across the humanities. We aim to get through one book per term, meeting roughly every third week to discuss new sections of whatever book we are currently reading. Past books we have read include:

  • The General Theory of Employment, Interest and Money, by John Maynard Keynes
  • The Qurʼān – A New Annotated Translation, by Arthur J. Droge
  • On Politics, by Alan Ryan
  • Capital in the Twenty-First Century, by Thomas Piketty

The size of the reading group is capped at 6 persons. Current members include Jotun Hein, Mathias Cronjäger, James Anderson (former DPhil student), and Eddie Rolls (former summer school student).

Summing up 10 classic papers on Computation

For about three years we have read a classic paper every third week, drawn from physics, statistics, probability theory, information theory, game theory and computation. So far we have gone through about 40-50 papers, and they have generally been superb.

Søren Riis (Queen Mary, London) and Jotun Hein gave a talk summarizing 10 of the papers that fall within computation/complexity theory. The papers are by Babbage, Turing, McCulloch/Pitts, von Neumann, Cook, Karp, Smale, Shor, Impagliazzo and Papadimitriou.

The slides of the presentation are available here.

Summary: McCulloch and Pitts (1943) A Logical Calculus of the Ideas Immanent in Nervous Activity

This is a most peculiar paper that must be the first neural networks paper and is thus extremely famous.

The paper is not easy to read due to the many terms taken from a 1938 textbook by Rudolf Carnap, and I (Jotun) could well have missed some finer points, but I think I got the basic ideas: we have discrete time and only boolean values; the fundamental unit is the neuron, which takes input from many neurons, and all that matters is how many 0s and 1s it receives (not their order). From this, networks evaluating most logical expressions can be designed. One can add extra “carrier neurons” that copy the content of a neuron and effectively allow a neuron to receive input from another neuron several time steps back. Cycles need special treatment.

There is no Hebbian (1949) learning in this paper, so the parameters of each neuron are preset.
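
To make the model concrete, here is a minimal sketch (our own illustration, not code from the paper) of a McCulloch-Pitts-style unit in Python: boolean inputs at discrete time steps, a preset threshold, and the paper's absolute inhibition, where a single inhibitory pulse vetoes firing. The names and thresholds below are our own choices.

    # Minimal sketch of a McCulloch-Pitts-style unit (illustrative, not from the paper).
    # Inputs are 0/1 pulses at a discrete time step; the unit fires iff the number of
    # active excitatory inputs reaches a preset threshold and no inhibitory input fires.
    # Thresholds are set by hand -- there is no learning, as noted above.

    def mp_neuron(excitatory, threshold, inhibitory=()):
        if any(inhibitory):                      # absolute inhibition: one pulse vetoes firing
            return 0
        return int(sum(excitatory) >= threshold)

    # Single neurons computing simple logical expressions, with hand-set thresholds:
    AND = lambda a, b: mp_neuron([a, b], threshold=2)
    OR  = lambda a, b: mp_neuron([a, b], threshold=1)
    NOT = lambda a: mp_neuron([1], threshold=1, inhibitory=[a])

    assert AND(1, 1) == 1 and AND(1, 0) == 0
    assert OR(0, 1) == 1 and OR(0, 0) == 0
    assert NOT(0) == 1 and NOT(1) == 0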

The paper is motivated by the brain, but McCulloch and Pitts state that the model should not be interpreted too literally as representing how the brain works. Nevertheless, they end the paper with a series of very specific hopes for this model, like (p. 131):
“To psychology, however defined, the net would contribute all that could be achieved in that field – even if the fields were pushed to ultimate psychic units or ‘psychons,’ for a psychon can be no less than the activity of a single neuron. Since that activity is inherently propositional, all psychic events have an intentional, or semiotic character. The ‘all-or-none’ character of these activities, and the conformity of their relations to those of the logic of propositions, insure that the relations of psychons are those of the two-valued logic of propositions. Thus in psychology, introspective, behavioristic and physiological, the fundamental relations are those of two-valued logic.”

Thus M-P are quite confident that their model represents some sort of physical reality of the brain. They also mention tinnitus, paraesthesias, hallucinations, delusions and disorientations (p. 131), so they are keen to move on to the applicability of their network theory.

I find it fascinating to read the papers from the 30s, 40s and 50s by von Neumann, Wiener, M-P and others, with their wild ambition and optimism stemming from the belief that a biological machine, such as the brain, is just the right kind of wired system that can be exactly modelled. It has proven a lot harder, and it is still an open question what a biological machine is and how much physical detail needs to be retained when, for instance, modelling the neuron. Ganti (1971) used the term Soft Machine, which is a nice metaphor but seems to have contributed little.

Summary: Cook (1971) The Complexity of Theorem-Proving Procedures

[Summary written by Carlo Maria Scandolo]

A decision or recognition problem concerns determining whether an element (an input string) belongs to a given set (a language). We want to study the complexity of such problems, and we can do so by considering ordinary (multi-tape) Turing machines as well as non-deterministic Turing machines. There are two main results in the paper concerning propositional logic:

The first states that every problem that can be solved in polynomial time by a non-deterministic Turing machine can be reduced to checking whether a given logical proposition expressed in disjunctive normal form is a tautology. In more modern terminology, we would say that this tautology problem is NP-hard: if we could solve it, we could also solve all problems in the class NP, namely all problems that can be solved in polynomial time by a non-deterministic Turing machine.
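
To make the target of the reduction concrete, here is a naive tautology checker for formulas in disjunctive normal form (our own illustrative sketch, not from the paper); the obvious approach enumerates all 2^n truth assignments, which is exactly the exponential cost one would like to avoid.

    from itertools import product

    # A DNF formula is a list of disjuncts; each disjunct is a list of literals
    # (variable, is_positive). The formula is a tautology iff every truth
    # assignment satisfies at least one disjunct.

    def dnf_is_tautology(disjuncts):
        variables = sorted({v for d in disjuncts for v, _ in d})
        for values in product([False, True], repeat=len(variables)):
            assignment = dict(zip(variables, values))
            if not any(all(assignment[v] == pos for v, pos in d) for d in disjuncts):
                return False          # found a falsifying assignment
        return True                   # all 2^n assignments checked

    # (x AND y) OR (NOT x) OR (NOT y) is a tautology; dropping (NOT y) breaks it.
    print(dnf_is_tautology([[("x", True), ("y", True)], [("x", False)], [("y", False)]]))  # True
    print(dnf_is_tautology([[("x", True), ("y", True)], [("x", False)]]))                  # False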

The second result establishes that checking whether a logical proposition in disjunctive normal form is a tautology has the same degree of difficulty as the following problems:

  1. checking whether a proposition (expressed in any form) is a tautology;
  2. checking whether a proposition in disjunctive normal form, where each disjunct contains at most three conjuncts (literals), is a tautology;
  3. given two finite undirected graphs, checking whether the former is isomorphic to a subgraph of the latter.

To sum up: in general, it is a hard problem to determine whether a given logical proposition is a tautology, even if it is expressed in disjunctive normal form. Another consequence is that it is probably fruitless to look for a polynomial-time algorithm solving the subgraph isomorphism decision problem on a deterministic Turing machine: that problem is computationally equivalent to the tautology problem.
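
For comparison, the subgraph isomorphism problem mentioned above admits the same kind of brute-force treatment (again our own sketch, not from the paper): try every injective mapping of the small graph's vertices into the large graph and check that every edge is preserved, at factorial cost in the worst case.

    from itertools import permutations

    # Undirected graphs as (list of vertices, set of frozenset edges).
    # Brute force: is g_small isomorphic to some subgraph of g_big?

    def subgraph_isomorphic(g_small, g_big):
        vs, es = g_small
        vb, eb = g_big
        for image in permutations(vb, len(vs)):   # every injective vertex mapping
            mapping = dict(zip(vs, image))
            if all(frozenset(mapping[v] for v in edge) in eb for edge in es):
                return True
        return False

    triangle = ([1, 2, 3], {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})})
    k4 = (list(range(4)), {frozenset({a, b}) for a in range(4) for b in range(a)})
    path = ([0, 1, 2, 3], {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})})
    print(subgraph_isomorphic(triangle, k4))    # True: K4 contains a triangle
    print(subgraph_isomorphic(triangle, path))  # False: a path contains no triangle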

Bibliography
S. A. Cook, The Complexity of Theorem-Proving Procedures, Proceedings of the Third Annual ACM Symposium on Theory of Computing, 1971.

Summarizing a Lecture by von Neumann Given in 1948 on the Theory of Automata

[Summary written by Patrick Gemmell]

We read The General and Logical Theory of Automata, a lecture given by John Von Neumann at the Hixon Symposium, Pasadena, in September, 1948. (At about 40 pages, we can only assume the talk finished some time in 1949.)

In actual fact, the lecture was fairly entertaining from a historical perspective, and it gives some idea of the direction in which people in 1948 thought computing, neuroscience, and AI might be heading.

First off, JVN (John Von Neumann) was keen to argue for an axiomatic black box approach to understanding biology and brains. In particular, he saw brains as collections of about 10 billion black boxes that emitted pulses and behaved in an essentially digital fashion. (The figure for the number of neurons is, as far as we know, roughly correct, though it must be said that neurons are often modelled in more detail, e.g. using the Hodgkin-Huxley model, for which the Nobel Prize was awarded in ‘63.) Amusingly, JVN described the computer as smaller than the brain “in the sense which really matters,” i.e. switching complexity. Today the computer is smaller in both senses (a modern A8 chip from an iPhone has about 1 billion switches) though tomorrow it may be the brain which will be smaller in the “only sense that matters.”

JVN also took time to contrast analogous machines (analogue computers, where numbers are represented by physical quantities such as current, rotation, or fluid) with digital ones. According to JVN, the main advantage of digital machines is that they can conduct calculations to whatever precision we design them to, while the analogous machines always have, in practice, a fairly limited signal to noise ratio. On the other hand, JVN pointed out that each step involved in a digital calculation must be correct or else errors can be very large: interestingly, JVN could think of no other human endeavour where the result essentially depends on a sequence of a billion steps. Nor could we, can you? Today most people seem to have forgotten about both of these interesting characteristics of digital computers and merely take them for granted.
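
JVN's point about the fragility of long exact computations is easy to illustrate with a toy example of our own (not from the lecture): corrupt the lowest bit of a single intermediate value in a long chain of doublings and the final answer is off by an astronomical amount, because every later step amplifies the error rather than averaging it away.

    # Toy illustration: one corrupted intermediate step in a long exact computation.

    def repeated_doubling(n_steps, flip_bit_at=None):
        x = 1
        for step in range(n_steps):
            x *= 2
            if step == flip_bit_at:
                x ^= 1            # flip the lowest bit of one intermediate value
        return x

    exact = repeated_doubling(1000)                      # 2**1000
    corrupted = repeated_doubling(1000, flip_bit_at=10)
    print(corrupted - exact)                             # 2**989: the flipped bit was doubled 989 more times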

In the talk, JVN was prescient about future developments in some ways but missed the mark in others. For example, JVN was clear that it is important to determine the number of steps needed in a computing procedure (a la computational complexity theory) rather than just whether a procedure can be computed in principle (a la computability theory/Turing ‘36). On the other hand, JVN thought that an equally important development would be to take account of the non-zero probability of errors at intermediate steps in a computation. Today, algorithms are usually accompanied by descriptions of their complexity, but no one really worries about the probability of failure, as this is accounted for by coding layers, transmission protocols, and hardware redundancy. JVN also thought that there was “no doubt” that one can design self-repairing machines, though today self-healing computers remain a daydream, so that, again, we tend to rely on redundancy more than anything more organically inspired.

Towards the end of his talk, JVN discussed self-reproducing automata and neuroscience. This was perhaps the most interesting part of the reading because JVN was opinionated, and may yet be proved right. [JVN’s stance seems to reflect his interest in natural science and appears in contrast to Turing’s, as well as to the major thread of AI that dominated the post-cybernetics/pre-connectionist eras.] First, JVN thought it unfortunate that computation was so closely connected to formal logic, which he thought of as a rigid, combinatorial field, divorced from the more cultivated area of number and analysis. Second, JVN thought that, given biology’s success at dealing with abstract concepts, logic and computing would eventually borrow more from neuroscience than neuroscience would from formal logic. While there have been many advances on the logical side of computing since 1948, it does seem that today more excitement is being generated by deep learning or hierarchical Bayesian modelling than by symbolic AI; certainly, if an undergraduate were asked to solve JVN’s example problem of recognising triangles, they would be much more likely to train up a neural network than to think about how to describe the attributes of geometric objects in logical terms. On the other hand, the majority of computing is not (yet!) AI, and logical/discrete techniques developed since JVN’s talk support the bulk of the big ideas in the computer science curriculum, such as languages, automata, compilers, formal verification, database theory, operating systems, and so on.

JVN’s talk finished with a discussion of self-replicating machines. Here JVN explained that all one needs to make a self-replicating machine is (a) a machine that fabricates the machine described on a blueprint, (b) a machine that duplicates blueprints, and (c) a machine that takes a blueprint, feeds it to the manufacturing machine (a), copies it in the copying machine (b), and then shoves the newly copied blueprint into the newly minted machine. Here we were finally a little incredulous, and decided that JVN’s black box thinking had gone a little too far: now everything interesting was happening inside the black box. Perhaps we should read about the Universal Constructor next.
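
A loose computational analogue of this blueprint construction, offered here as our own illustration rather than anything from the lecture, is a quine: a program whose own text serves as the blueprint, used once as instructions and once as data to be copied.

    # The two lines below form a minimal Python quine: the string s is the "blueprint",
    # executed as the code it spells out and also copied verbatim into the output,
    # loosely mirroring JVN's split between building from the blueprint (a) and
    # copying the blueprint (b).
    s = 's = %r\nprint(s %% s)'
    print(s % s)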