
Category Archives: Reading clubs

Keynes: General Theory, chapters 4-6

[Summary by Jotun Hein]

It is 35 pages but, again, not an easy read.

Chapter 4 “The Choice of Units”

The chapter is quite imprecise, but JMK has some amusing formulations, like: “To say that net output today is greater, but price level lower, than ten years ago or one year ago, is a proposition of similar character to the statement that Queen Victoria was a better queen but not a happier woman than Queen Elizabeth – a proposition not without meaning and not without interest, but unsuitable for differential calculus.”

Chapter 5 “Expectation as Determining Output and Employment”

Keynes presents a very common-sense argument for the importance of psychology in economics. He distinguishes between short-term and long-term expectation; the former is clear, but again very hard to measure exactly.

Chapter 6 “The Definition of Income, Savings and Investment”

Clearly a major concern of the book is how to keep up consumption. What you earn but don’t consume is Saving or Investment, but I can’t see the difference between the two. I can see a difference between putting money in the bank and under the mattress, but I am not sure it coincides with this distinction.

Appendix on User Cost

For an outsider it is hard to understand why there is so much contention about seemingly simple concepts. And I haven’t read anything I would call ingenious. The lack of mathematics surprises me, and a few well-chosen examples would really have helped the book. Today we could easily have made a few simulations, but this book came out the same year as Turing’s most famous publication, so that was not an option.

This book might be much clearer upon a 2nd reading, if I ever get so far.

Next time is October 13th, 5.30 PM UK time. We will meet at University College, Oxford, and discuss chapters 7–9 (The Meaning of Savings and Investment + The Propensity to Consume I+II).

Keynes: General Theory, introduction and chapters 1-3

[Summary by Jotun Hein]

It is 37 pages in a very large font. In principle it should be easy, but there was a lot of high-level description of the Classical Theory that I didn’t understand too well. This seems like an ideal book for this kind of intense discussion group, and a lot of issues became clear as we went through the pages. I am sure even more would dawn upon us on reading it a second time. But it is also a very tough read, where we read single sentences aloud to each other and had quite a discussion of them.

Chapter 1

“The General Theory” is 1 page and explains the word GENERAL: his book stands in contrast to CLASSICAL theory. I am surprised that JMK can describe it as one thing. Not that I know much about it, but there are Smith, Malthus, Ricardo, Mill, Marx, Veblen, Marshall, …

Chapter 2

“The Postulates of the Classical Theory” is 19 pages and starts with two tenets that JMK says characterize the CLASSICAL:

“I. The wage is equal to the marginal product of labour
II. The utility of the wage when a given volume of labour is employed is equal to the marginal disutility of that amount of employment.”

We would have liked two illustrations here, showing how this defines two equilibrium points. JMK asserts that the CLASSICAL theory does not allow for involuntary unemployment, but does have frictional unemployment [due to a shift in production from, say, potatoes to bananas] and voluntary unemployment, when a worker prefers not to work because the salary is too low. I think JMK then shows that certain empirical consequences of I–II above [basically how the equilibrium point shifts] are generally violated.
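Lacking Keynes’s missing illustrations, here is a minimal numeric sketch of what one could look like. The functional forms are invented purely for illustration (a Cobb–Douglas-style production function F(N) = a·N^b and a quadratic disutility of labour): postulate I gives a downward-sloping labour-demand schedule (real wage = marginal product), postulate II an upward-sloping supply schedule (wage = marginal disutility), and their crossing point is the classical equilibrium.

```python
# Hypothetical illustration of the two classical postulates.
# All functional forms and parameters are invented for this sketch.

def marginal_product(n, a=10.0, b=0.5):
    """Postulate I (labour demand): F(N) = a*N**b, so F'(N) = a*b*N**(b-1)."""
    return a * b * n ** (b - 1)

def marginal_disutility(n, c=0.1):
    """Postulate II (labour supply): assume D(N) = c*N**2/2, so D'(N) = c*N."""
    return c * n

def equilibrium_employment(lo=1e-6, hi=1e6, tol=1e-9):
    """Bisect for the employment level N* where the two schedules cross:
    marginal_product is decreasing, marginal_disutility is increasing,
    so there is a single crossing."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if marginal_product(mid) > marginal_disutility(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

n_star = equilibrium_employment()
print(f"equilibrium employment N* = {n_star:.3f}, "
      f"real wage w* = {marginal_product(n_star):.3f}")
```

With these made-up parameters the two schedules cross where 5·N^(-1/2) = 0.1·N, i.e. N* = 50^(2/3) ≈ 13.57; shifting either curve moves the equilibrium point, which is exactly the comparative statics JMK argues the classical postulates get wrong.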

There are lots of fun and interesting comments, like that Ricardo asserted that political economy could make no statement about the absolute level of production, but only about the distribution of value. There is an interesting discussion of value-wage versus money-wage. JMK uses a Prof. Pigou to illustrate all that is wrong with the CLASSICAL. JMK says the CLASSICALs are as foolish as a Euclidean geometer who, faced with non-Euclidean geometry, would yell at the lines for not being parallel in the right way.

Chapter 3

“The Principle of Effective Demand” is 12 pages devoted to a preview of the whole book, and the first simple equations appear. JMK defines the aggregate supply and aggregate demand functions and derives a series of properties of the system. We were a bit tired at this point and I should read it again.

More enjoyable sentences, like “It could only live on furtively, below the surface, in the underworlds of Karl Marx, Silvio Gesell or Major Douglas.”

Next time, September 22nd, we take Book II [Definitions and Ideas], chapters 4–6, which is about 29 pages. I said we would be done by X-mas, and somebody said that that is what the generals said in 1914 about WWI. But I hope this JMK won’t last 4 years or have a similar loss of life.

Summary: Russell Impagliazzo (1995) “A personal view of average-case complexity”

[Summary written by Jotun Hein]

This paper cannot be called a classic paper, but it was informative and opinionated and led us to the key papers in the field in no time. The paper consists of two quite separate parts:

A – a classification of the computational discrete world into 5 possible countries. I found it a bit pointless, and I could not see the relationship between the first 2 countries and the last 3.

ALGORITHMICA: here NP = P.

HEURISTICA: here problems are polynomial on average over all possible data sets of a given size, but intractable in the worst case.

PESSILAND [Danes might swap E & I]: here there are hard average cases but no one-way (encryption) functions.

MINICRYPT: here one-way functions exist, but public-key encryption is impossible (must mean that the function can be inverted).

CRYPTOMANIA: and here public-key encryption is possible.

B goes through the original definition of average-case complexity by Levin. Unfortunately no real example is given of a problem where there is a difference between average-case and worst-case complexity. The ideas are quite understandable. An algorithm can only have average polynomial complexity if the sets of inputs on which it isn’t polynomial shrink sufficiently fast as input size grows. One tricky thing is that complexity is not defined in terms of the algorithms at hand but only in terms of the problem together with a distribution on inputs. There are a lot of things not discussed here, like how polynomial transformations of one problem to another skew the distribution on the possible data sets.
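A toy illustration (invented for this summary, not taken from the paper) of why a rare exponential-time input need not ruin the average: imagine an algorithm that takes 2^n steps on the all-zeros input but n steps on everything else. The bad input has probability 2^-n under the uniform distribution, so its exponential cost contributes only a constant to the expectation.

```python
def toy_cost(bits):
    """Hypothetical running time of an invented algorithm: 2**n steps on
    the all-zeros n-bit input, n steps on every other input.  The worst
    case is exponential, driven by a single bad input."""
    n = len(bits)
    return 2 ** n if all(b == 0 for b in bits) else n

def exact_average_cost(n):
    """Expected cost under the uniform distribution on n-bit inputs.
    The bad input has probability 2**-n, so its 2**n cost contributes
    exactly 1 to the expectation: 2**-n * 2**n + (1 - 2**-n) * n."""
    return 2 ** -n * 2 ** n + (1 - 2 ** -n) * n

# Worst case grows like 2**n, average like n + 1 (polynomial on average).
for n in (8, 16, 32):
    print(n, 2 ** n, round(exact_average_cost(n), 2))
```

This is the shape of Levin’s requirement: the probability mass of the hard inputs must shrink at least as fast as their cost grows.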

I think complexity based on distributions would be interesting to pursue in this case: generating data from the coalescent with mutation on a finite string. If you have long strings/a low mutation rate you are in the perfect-phylogeny domain, where a linear algorithm exists. If you make the mutation rate infinite, you have a uniform distribution on all data sets, and the worst-case complexity is NP-complete without considering the distribution.

Molecular dynamics day

There will be this event in Oxford on Monday 13th, 1–6pm, which one can attend for free: http://www.stats.ox.ac.uk/events/molecular_dynamics_day

Summary: Shor (1999) Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer

[Summary by Søren Riis]

On the morning of Wednesday 18th of May, a reduced select group met to discuss “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” P. W. Shor [1].

Finding the factors of a given large number is a fundamental problem in computing called factoring. Shor’s result is that factoring can be solved in roughly n^2 steps on a quantum computer (where n is the number of bits in the number we want to factor). In contrast, no polynomial-time factoring algorithm is known for an ordinary computer. A traditional computer is based on bits. Each bit can be either zero or one. A quantum computer instead uses qubits. A qubit can take the value zero, one, or some superposition of zero and one. The way Shor’s algorithm works is that it first converts the problem of factoring into the problem of finding the period of a really long sequence. This period-finding step is the central quantum-mechanical part of Shor’s algorithm. Once the period is found, the result can then be used to factorise the number.
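The classical half of this reduction can be sketched in a few lines of Python. Here the quantum part (period-finding) is replaced by classical brute force, so this shows only the reduction from factoring to period-finding, not the quantum speed-up; Shor also picks the base at random, whereas this demo just sweeps candidates deterministically.

```python
from math import gcd

def order(a, N):
    """Multiplicative order (period) of a modulo N, by brute force.
    This is the step a quantum computer does efficiently; classically
    it can take exponentially many steps in the bit-length of N."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical(N):
    """Classical sketch of the reduction: for a base a coprime to N,
    find the period r of a mod N; if r is even and a**(r//2) is not
    -1 mod N, then gcd(a**(r//2) - 1, N) is a nontrivial factor."""
    for a in range(2, N):
        g = gcd(a, N)
        if g > 1:
            return g  # lucky: a already shares a factor with N
        r = order(a, N)
        if r % 2 == 0:
            y = pow(a, r // 2, N)
            if y != N - 1:
                f = gcd(y - 1, N)
                if 1 < f < N:
                    return f
    return None

f = shor_classical(15)
print(f, 15 // f)  # → 3 5
```

For N = 15 the very first base works: 2 has period 4 modulo 15, and gcd(2^2 − 1, 15) = 3 hands us a factor. Everything above is easy; the whole difficulty, and the whole quantum contribution, is hidden inside `order`.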

Public-key cryptography, e.g. the RSA algorithm we discussed in a previous meeting, relies on the assumption that factoring is computationally hard. RSA would be rendered insecure if Shor’s quantum factoring algorithm could be implemented at scale.
After the meeting, I had some fun writing a simple Python program on my laptop that simulates Shor’s algorithm. Maybe at a later stage this program could be turned into a proper teaching tool.


[1] P. W. Shor, “Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer,” SIAM Rev., vol. 41, no. 2, pp. 303–332, Jan. 1999.