Sunday, January 24, 2010

Cycles of Explanation in Economics

In sciences such as physics and biology, successive theories explain the observed facts better and better over time. Think heliocentrism versus geocentrism, or natural selection versus Lamarckism. But in economics I suspect this is not always the case. Traumatic events like the Great Depression, or (hopefully) the current crisis, can force a brief period of clarity. But over time economic forces push the mainstream to replace the theory that most closely matches the observed facts of the previous crisis with a theory that maximizes short-term profits for powerful actors. The time-frame over which this shift occurs is probably a generation: the time it takes for those with first-hand memories of the previous crisis to leave power.

Here are two exhibits for my point. The first is a quote I have been re-sending for years:

"When business in the United States underwent a mild contraction in 1927, the Federal Reserve created more paper reserves in the hope of forestalling any possible bank reserve shortage. More disastrous, however, was the Federal Reserve's attempt to assist Great Britain who had been losing gold to us because the Bank of England refused to allow interest rates to rise when market forces dictated (it was politically unpalatable). The reasoning of the authorities involved was as follows: if the Federal Reserve pumped excessive paper reserves into American banks, interest rates in the United States would fall to a level comparable with those in Great Britain; this would act to stop Britain's gold loss and avoid the political embarrassment of having to raise interest rates.

The "Fed" succeeded; it stopped the gold loss, but it nearly destroyed the economies of the world in the process. The excess credit which the Fed pumped into the economy spilled over into the stock market -- triggering a fantastic speculative boom. Belatedly, Federal Reserve officials attempted to sop up the excess reserves and finally succeeded in braking the boom. But it was too late: by 1929 the speculative imbalances had become so overwhelming that the attempt precipitated a sharp retrenching and a consequent demoralizing of business confidence. As a result, the American economy collapsed. Great Britain fared even worse, and rather than absorb the full consequences of her previous folly, she abandoned the gold standard completely in 1931, tearing asunder what remained of the fabric of confidence and inducing a world-wide series of bank failures. The world economies plunged into the Great Depression of the 1930's."

Gold and Economic Freedom, Alan Greenspan, 1967.

Here the Greenspan of 1967 believed that holding Fed rates too low caused a bubble that could not be cleaned up afterwards. But the Greenspan of the late 1990s and early 2000s no longer believed that low rates cause bubbles, or that a bubble could grow too large for the Fed to clean up afterwards. Why the shift? I think the earlier theory fits the facts of the 1920s-30s (and our current situation) better. And note that his successor Ben Bernanke still denies that low rates cause bubbles.

A second exhibit is the presentation I wrote in 2006 and posted as Black Swans in the Market in 2008. I showed that any Statistics 101 student could easily demonstrate that daily stock price changes are not normally distributed. This makes people like the CFO of Goldman Sachs, who claimed in 2007 that "We were seeing things that were 25-standard-deviation events, several days in a row", look extra silly. Why were they using such bad models? Is finance in general so ignorant of basic statistics[*]? No, they are simply choosing models which maximize their short-term profits -- and bonuses. The Statistics 101 demonstration is sketched after the footnote below.

[*] OTOH, a Google Books search on "which shows a histogram of the daily returns on Microsoft stock" shows that as of 2005 Brealey et al. were still peddling this nonsense to unsuspecting MBA students in their Principles of Corporate Finance and related texts. A 2005 edition apparently shifted the MSFT date range from 1986-97 to 1990-2001 (omitting the crash of 1987!). Later editions no longer match the search terms; I have no idea whether they have cleaned up their act.
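
For concreteness, here is a minimal sketch in Go (the language of the next post) of the Statistics 101 check described above. The file prices.csv is a hypothetical placeholder for any one-column series of daily closing prices. The check: compute daily log returns, estimate their standard deviation, and count the days beyond 5 sigma. Under a normal distribution such days should essentially never occur, and a 25-sigma day is so improbable it should never occur in the history of the universe.

    package main

    import (
        "bufio"
        "fmt"
        "math"
        "os"
        "strconv"
    )

    func main() {
        // Read one closing price per line from the hypothetical prices.csv.
        f, err := os.Open("prices.csv")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        var prices []float64
        for s := bufio.NewScanner(f); s.Scan(); {
            if p, err := strconv.ParseFloat(s.Text(), 64); err == nil {
                prices = append(prices, p)
            }
        }
        if len(prices) < 2 {
            panic("need at least two prices")
        }

        // Daily log returns.
        returns := make([]float64, len(prices)-1)
        for i := range returns {
            returns[i] = math.Log(prices[i+1] / prices[i])
        }

        // Sample mean and standard deviation.
        mean := 0.0
        for _, r := range returns {
            mean += r
        }
        mean /= float64(len(returns))
        ss := 0.0
        for _, r := range returns {
            ss += (r - mean) * (r - mean)
        }
        sigma := math.Sqrt(ss / float64(len(returns)-1))

        // Count days beyond 5 sigma and compare with the normal prediction.
        const k = 5.0
        extreme := 0
        for _, r := range returns {
            if math.Abs(r-mean) > k*sigma {
                extreme++
            }
        }
        // Two-sided normal tail: P(|Z| > k) = erfc(k / sqrt(2)).
        expected := math.Erfc(k/math.Sqrt2) * float64(len(returns))
        fmt.Printf("observed %d days beyond %.0f sigma; normal model predicts %.6f\n",
            extreme, k, expected)
    }

On any long equity series the observed count should dwarf the normal model's prediction; that fat-tail gap is the whole point of the presentation.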

Sunday, January 3, 2010

Multi-Core Ant Colony Optimization for TSP in Go

Go is a new statically typed, garbage-collected, concurrent programming language. It is natively compiled and offers near-C performance. Like Erlang, it uses a CSP-style model of lightweight processes ("goroutines") communicating via message passing. Messages can be synchronous (unbuffered) or asynchronous (buffered). Go supports multi-core processors.
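
As a minimal illustration of the two message-passing styles (a toy example of mine, not from the ACO code):

    package main

    import "fmt"

    func main() {
        sync := make(chan string)     // unbuffered: a send blocks until received
        async := make(chan string, 2) // buffered: up to 2 sends without a receiver

        go func() {
            sync <- "hello" // blocks here until main receives
        }()
        fmt.Println(<-sync)

        async <- "one" // does not block
        async <- "two" // does not block
        fmt.Println(<-async, <-async)
    }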

On my usual ACO TSP test case Go is the best-performing language so far. On a single core it was 2.8x slower than C, beating the previous best, Shedskin Python. Go's speedup from 1 to 2 cores was 1.9, matching the previous best, Erlang. With 4 cores Go exceeded C's single-core performance -- the first language tested to achieve this.

Go's lightweight processes are scheduled onto OS threads by the Go runtime, which allows the system to support far more processes than OS threads: I was able to run ACO TSP with 100,000 ant processes. The runtime scheduler appears to treat these processes somewhat like futures, lazily scheduling one only when another process blocks on receipt of a message from a shared channel. This is inefficient for the type of parallel processing used in ACO (many seconds of parallel computation ending with a single message send), since only a single process (and thus a single core) is active at a time. This is a known issue, and the simple solution suggested on the golang-nuts mailing list is to add runtime.Gosched() calls to yield the processor. In my case this was insufficient, and adding additional heartbeat messages to each process was needed to force maximal multi-core usage, as sketched below.
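
Here is a minimal sketch of that workaround, under my assumptions about the structure (the names worker and heartbeat are hypothetical, and a busy loop stands in for an ant's tour construction): each goroutine periodically calls runtime.Gosched() and emits a non-blocking heartbeat message, which keeps the scheduler running all the workers in parallel instead of one at a time.

    package main

    import (
        "fmt"
        "runtime"
    )

    // worker stands in for one ant: a long stretch of computation ending in
    // a single message send.
    func worker(id, iters int, heartbeat chan<- int, done chan<- int) {
        sum := 0
        for i := 0; i < iters; i++ {
            sum += i % 7 // stand-in for per-ant tour-construction work
            if i%100000 == 0 {
                runtime.Gosched() // yield the processor to other goroutines
                select {
                case heartbeat <- id: // non-blocking heartbeat
                default:
                }
            }
        }
        done <- sum // the single send at the end of the computation
    }

    func main() {
        runtime.GOMAXPROCS(4) // 2010-era Go required setting this explicitly

        const workers = 8
        heartbeat := make(chan int, workers)
        done := make(chan int, workers)
        for id := 0; id < workers; id++ {
            go worker(id, 10000000, heartbeat, done)
        }
        total := 0
        for i := 0; i < workers; i++ {
            total += <-done
        }
        fmt.Println("total:", total)
    }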

See here for code and more details.