Sunday, December 26, 2010

Precognition and 5%

Many areas of scientific research, such as psychology, medicine, and economics, make heavy use of tests of statistical significance. A popular significance level is 5%; this means that, if there were no real effect, results at least as extreme as those observed would occur by random chance less than 5% of the time. While these tests are arguably valid and useful, the wide availability of automated experimentation and computerized data mining makes them ever easier to game, both unintentionally and intentionally.

One well-known mistake is to omit necessary corrections when using the same data set to test multiple hypotheses. If each individual test has a 5% chance of a false positive, then testing 14 hypotheses against the same data set gives a better than 50% chance [1 - (1 - 0.05)^14 = 0.51] of finding at least one false positive result. Besides correction factors, a common solution is to use two separate data sets, where one set is used for data mining for candidate hypotheses, and a second set is used to verify the results. On a large scale, however, poor hypotheses can still pass this second filter. This is especially true when the hypotheses themselves are being automatically generated and not based on any previously known plausible physical mechanisms.
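
To make the arithmetic concrete, here is a minimal sketch (in Go, my own illustration, not tied to any particular study) of the family-wise false positive rate and the standard Bonferroni correction:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        alpha := 0.05 // per-test significance level
        n := 14.0     // number of hypotheses tested against the same data

        // Chance of at least one false positive across all the tests,
        // assuming the tests are independent.
        familyWise := 1.0 - math.Pow(1.0-alpha, n)

        // Bonferroni correction: the per-test threshold needed to keep the
        // overall false positive rate near 5%.
        perTest := alpha / n

        fmt.Printf("family-wise false positive rate: %.2f\n", familyWise) // about 0.51
        fmt.Printf("Bonferroni per-test threshold:   %.4f\n", perTest)    // about 0.0036
    }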

A related problem is known as the "file drawer effect": positive results are published, while negative results remain in the "file drawer". This creates a serial version of the problem above: if the same hypothesis is tested 14 times, then by random chance it is over 50% likely to be confirmed at least once, and the negative results are never published. There is a movement to publish negative results, but this is also problematic, because the negative results may simply be due to poor experimental procedure. These problems reduce the usefulness of meta-studies, since they summarize and aggregate the results of positive studies without accurate information about how many other negative studies were never published.

These issues are becoming better known as problematic results are found in areas such as clinical drug trials. A useful "canary in the coal mine" for statistical techniques is parapsychology. While confirmation of abilities such as precognition (seeing the future) is possible, such a result much more likely indicates that statistical standards for research and publication in a field (in this case psychology) have fallen too low. Recently the paper "Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect" by Daryl J. Bem was accepted for publication in a prominent psychology journal. Critics claim the author has made numerous mistakes, including those discussed above. Bem found subjects could predict a future result correctly 53.1% of the time (where random chance was 50%). But as Wagenmakers et al. have pointed out, a similar large-scale test has been running for a long time: casino roulette. In European roulette the house edge is 2.7% [36 to 1 odds against winning; 35 to 1 payout], so gamblers with Bem's 3.1% edge would have cleaned out the casinos already. It is suspicious that in most studies which do find psychic abilities, the effect hovers at the edge of the statistical significance level chosen for the study.
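
As a back-of-the-envelope check (my own sketch, assuming for illustration that the 53.1% hit rate would carry over to even-money roulette bets, and glossing over the zero pocket), the two edges compare like this:

    package main

    import "fmt"

    func main() {
        // European roulette, single-number bet: win 1 pocket in 37, paid 35 to 1.
        houseEdge := (1.0/37.0)*35.0 - (36.0/37.0)*1.0 // about -0.027 per unit staked

        // Hypothetical bettor on an even-money (1:1) bet who picks the winning
        // side 53.1% of the time.
        p := 0.531
        psychicEdge := p - (1.0 - p) // about +0.062 per unit staked

        fmt.Printf("house edge:         %+.3f\n", houseEdge)
        fmt.Printf("53.1%% bettor edge:  %+.3f\n", psychicEdge)
    }

A 3.1-point improvement in win probability is worth over 6% per bet, which dwarfs the 2.7% the house takes from everyone else.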

Bem's response to critics is amusing. A major part of his defense is an appeal to authority: the referees of his paper accepted it for publication in a prominent journal, and his statistical techniques are commonly accepted in psychology. He defers to another author (Radin) for a defense of the historical record of parapsychological research. Radin cites meta-studies and makes an even more dubious appeal to authority: the U.S. patent office.

Friday, December 17, 2010

Poor Pluto

The demotion of Pluto from planet to dwarf planet (or planetoid) provides another example of the themes of my previous posts. Like the definition of "species", the definition of "planet" is vague, man-made, and changes over time. Scientific authorities also alter the membership of the category based on new information. These changes make many people (especially children) upset. I suspect changes to the planets also cause cognitive dissonance due to their ancient connections to astrology, mythology, and religion. The unchanging nature of heavenly objects was a bedrock belief: "as sure as the sun will rise." Yes, Pluto was not one of the original planets visible to the naked eye, but like the others, on its discovery it was given the name of a Roman (formerly Greek) god. Today people don't consciously remember how much of our daily language is based on past mythology. My favorite example (besides the planets) is the English names of the days of the week:

Sunday : Sun
Monday : Moon
Tuesday : Tyr (Norse god)
Wednesday : Woden / Odin (Norse god)
Thursday : Thor (Norse god)
Friday : Freya (Norse goddess)
Saturday : Saturn (Roman god)

Sunday, December 12, 2010

Darwinism and Authority

American critics of evolutionary theory often prefer to use the term "Darwinism". In doing so they reveal the philosophical, psychological, and theological underpinnings of their world-view: the argument from authority.
Appeal to authority is a fallacy of defective induction, where it is argued that a statement is correct because the statement is made by a person or source that is commonly regarded as authoritative.
Scientists don't believe the theory of evolution is correct because it was originally formulated by Charles Darwin, or because it was presented in his book "The Origin of Species by Means of Natural Selection".

Scientific presentations of new theories consist of two parts: the theory itself, and the gathered data which this theory explains. The availability of the gathered data means other scientists can check its validity and see whether it actually supports the proposed theory, or matches some other theory better. For example, the finches Darwin gathered in the Galapagos Islands are still stored in the bird collection of the British Natural History Museum. Additionally, future scientists can gather new data and perform new experiments to confirm, expand on, or refute Darwin's original theories. So far, newly gathered data has confirmed Darwin's basic theories, and vastly expanded on them (understandably, since Darwin was not even aware of Mendelian genetics, let alone DNA). But this new research is not being done to defend Darwin's legacy. Young scientists would be overjoyed to discover a radical new experimental finding or theory which overturns Darwin, and are feverishly looking for one. This very possibility of being wrong is what makes evolution a scientific theory.

An earlier post quoted Dawkins on how children are natural essentialists. I argue children are also, both by nature and nurture, believers in the argument from authority. Children want all statements to have invariant mappings to the categories "true" or "false". The vague category "as best we can tell right now, pending future information" is confusing and upsetting. Teachers and parents are also natural authority figures, and don't have the time (and often the background) to explain the rationale behind all their instructions. Elementary memorization is useful, and a child who questioned everything would be impossible to teach. But for many students this attitude carries over into their study of science, so Newton, Einstein, and Darwin become unquestioned authority figures. They don't go back to see that these scientists' work consisted of the same type of presentation discussed above: theories with supporting evidence.

Critics of "Darwinism" are often actually modeling their critique on their early childhood religious upbringing. Early authority figures in their life (their parents) presented additional authority figures (church leaders) whose authority is ultimately derived from an unquestionable final authority figure (the religion's founder(s)). The founders words are contained in unquestionable texts (the revealed sacred documents), and all necessary truths can be obtained through detailed study of those texts. This technique of finding truths through interpretation of texts written by founders (and later texts written about texts written by founders) can arguably be useful in some fields such as law. But it is of only historical interest in science. The correctness of modern evolutionary theory is not bound up in the life of Darwin or the text of "The Origin of Species."

An amusing collision between intelligent design "science" and argument from religious authority is here. Dembski is at risk of being "Expelled"...

Monday, November 1, 2010

OpenCL GPGPU real-time ray tracing

OpenCL is the new open standard for GPGPU (general-purpose computing on graphics processing units) programming. It supports both graphics cards and multi-core processors with the same C99-based source code. I've re-written the NVIDIA CUDA version of my real-time ray tracer using OpenCL, and run it on an ATI/AMD Radeon HD 5870 GPU. For comparison I also ran it in CPU mode on the Intel i7-980X (6 core, hyperthreaded) processor.



The OpenCL CPU version provided similar results to the pthreads-generic version tested earlier. With 10 objects in the scene the GPU performance was only slightly better than the CPU version. But as the number of objects increased, the GPU version maintained its performance much better than the CPU version. This indicates that with low object counts the actual work done by each GPU core is small relative to the overhead of kernel setup and copying results back. Yes, a realistic ray tracer would cull the scene graph so fewer objects would be evaluated for each point. And the GPU version ran out of constant memory above 800 objects. But this is much better scaling and performance than the NVIDIA CUDA test I did a few years ago.
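
A toy cost model makes the shape of this result easier to see. All of the numbers below are hypothetical, chosen only to illustrate a fixed launch/copy overhead versus per-object work; they are not measurements:

    package main

    import "fmt"

    func main() {
        // Hypothetical per-frame costs (milliseconds), purely illustrative:
        const gpuOverhead = 1.0    // kernel launch plus copying results back
        const gpuPerObject = 0.002 // per-object cost spread over many GPU cores
        const cpuPerObject = 0.12  // per-object cost spread over 6 CPU cores

        for _, n := range []int{10, 100, 800} {
            gpu := gpuOverhead + gpuPerObject*float64(n)
            cpu := cpuPerObject * float64(n)
            fmt.Printf("%4d objects: GPU %6.1f ms, CPU %6.1f ms\n", n, gpu, cpu)
        }
    }

With few objects the fixed overhead dominates the GPU's time, so the two are close; as the object count grows, the GPU's tiny per-object cost wins by a wide margin.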

Sunday, October 24, 2010

Hyper-Threading

I've re-run my real-time ray tracing and Go ACO TSP programs on a 6 core processor with hyper-threading. The Intel i7-980X supports two hardware threads per core, so the OS sees it as a 12-core processor.



Here hyper-threading has improved performance for both programs in all configurations. At one core both programs were sped up 28%. At 6 cores ray tracing was sped up 9% versus non-HT, while ACO TSP was sped up 18% versus non-HT. The higher speedup for ACO TSP is expected as it matches the general speedup from increased cores.

Both programs were run with 24 threads in all test configurations. Having more program threads than cores*hardware_threads_per_core is key. Otherwise the OS may schedule multiple threads on the same core while leaving others idle. Linux is hyper-threading aware and attempts to avoid this scheduling problem, but doesn't always get it right. In testing with 6 threads and 6 cores, I saw 28% decreases in performance for both programs when hyper-threading was enabled.
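
As a rough sketch of this over-subscription idea in Go (not the actual benchmark code), the worker count is sized off the logical CPU count and then deliberately exceeded:

    package main

    import (
        "fmt"
        "runtime"
        "sync"
    )

    func main() {
        logical := runtime.NumCPU() // cores * hardware threads per core; 12 on an i7-980X
        runtime.GOMAXPROCS(logical) // let the runtime use every hardware thread
        workers := 2 * logical      // more workers than hardware threads, so none sit idle

        fmt.Printf("logical CPUs: %d, workers: %d\n", logical, workers)

        var wg sync.WaitGroup
        for i := 0; i < workers; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                // ... this worker's share of the rays / ants ...
            }()
        }
        wg.Wait()
    }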

The Intel i7-980X also supports "Turbo Boost", where one core is automatically overclocked when other cores are idle. Turbo Boost was disabled for the tests above. With one core and HT disabled, Turbo Boost provides a speedup of 8% for both programs. Surprisingly, with six cores and HT enabled (so all cores should be fully utilized), Turbo Boost provides a speedup of 5% for both programs.

The performance of Go has also improved relative to the single-threaded C version of ACO TSP. For 24 ants the C version finishes in 60 seconds, so on one core Go is only 1.7x slower than C. At 2 cores Go has exceeded the performance of the single-threaded C version. Previously Go was 2.8x slower than C on one core; I don't know whether this improvement is due to an improved Go compiler, the move to 64-bit compilers and OS, or architectural differences between the Intel Core 2 and i7-980X.

Tuesday, August 24, 2010

The Root Problem

What do homes, college tuition, and health care have in common? They have all had massive price increases in recent years. These increases were not funded by increased personal income. Instead they were funded by a combination of increased risky borrowing capacity, and redistributing the costs through government programs or private insurance. In the case of homes there is a current explicit debate about whether government price supports (like farm price supports!) would be a good thing. In the case of tuition and health care the "price support" effect is less understood. For all three there is little recognition that government subsidies are not free, partly due to current magical thinking around deficit spending (as discussed earlier). And because of this there is very slow movement toward the necessary final state: one in which prices are lower, so that individuals can make payments out of regular income, without the need for risky loans or deficit-funded government subsidies.

Wednesday, July 14, 2010

Brin on Rand's Essentialism

"We do need a brief aside about Objectivism, which begins by proposing that reality exists independent of its perception. This contrasts refreshingly against the subjective-relativism offered by today's fashionable neo-leftist philosophers, who claim (in blithe and total ignorance of science) that "truth" can always be textually redefined by any observer - a truly pitiable, easily-disproved, and essentially impotent way of looking at the world.

So far, so good. Unfortunately, any fledgling alliance between Rand's doctrine and actual science breaks down soon after that. For she further holds that objective reality is readily accessible by solitary individuals using words and logic alone. This proposition - rejected by nearly all modern scientists - is essentially a restatement of the Platonic World-view, a fundamental axiom of which is that the universe is made up of ideal essences or "values" (the term Rand preferred) that can be discovered, dispassionately examined, and objectively analyzed by those few bold minds who are able to finally free themselves from the hoary assumptions of the past. Once freed, any truly rational individual must, by simply applying verbal reasoning, independently reach the same set of fundamental conclusions about life, justice, and the universe. (Naturally any mind that fails to do so must, by definition, not yet be free.)"

"The Art of Fiction, by Ayn Rand", a review by David Brin in his Through Stranger Eyes


Sunday, May 23, 2010

The Appeal of Fantasy Fiction

I've been thinking about the continued appeal of fantasy fiction, especially in relation to topics I have discussed before such as the Sapir-Whorf Hypothesis, Moral Realism, and Essentialism. Science fiction author David Brin has written several essays critical of fantasy fiction, including J.R.R. Tolkien -- enemy of progress. His basic theme is:
It's only been 200 years or so -- an eye blink -- that "scientific enlightenment" began waging its rebellion against the nearly universal pattern called feudalism, a hierarchic system that ruled our ancestors in every culture that developed both metallurgy and agriculture. Wherever human beings acquired both plows and swords, gangs of large men picked up the latter and took other men's women and wheat... They then proceeded to announce rules and "traditions" ensuring that their sons would inherit everything.
Brin explains how these actions are ennobled in the literature passed down to us, such as the Iliad and Odyssey, the Bible, Arthurian legends, etc. I've been thinking about how these values were embedded at a more basic level, in the language we still use. Look at the term I used earlier: ennobled. The first online dictionary definition gives:
1. to make noble, honourable, or excellent; dignify; exalt
2. to raise to a noble rank; confer a title of nobility upon

[Middle English *ennoblen, from Old French ennoblir : en-, causative pref.; see en-1 + noble, noble; see noble.]

http://www.thefreedictionary.com/ennobled
Here, prior to the emergence of modern English, the concept of honorable behavior was bound up with the concept of the superiority of certain blood-lines of inherited privilege. If you accept some form of the Sapir-Whorf Hypothesis, that language influences thought, then the use of the English language itself provides a subconscious favorable disposition toward the idea of inherited privilege. Critiques of the past or present nobility will inherently generate a certain amount of cognitive dissonance. Also note that the idea of noble blood is "essentialist", and fits with humans' natural predisposition towards essentialist thinking.

My hypothesis is that feudal and essentialist thinking are embedded throughout all languages, including English. But we aren't living in a feudal society, and science is constantly critiquing essentialist thinking. This creates a constant tension in the minds of readers and speakers of English. Fantasy fiction provides a release for this tension: it is a place where the language fits the action, often a magical land where essentialist expectations are reality.

Fantasy fiction is in turn divided into two broad categories: High Fantasy, and Swords and Sorcery. High Fantasy, of which Tolkien's The Lord of the Rings is the prime example, is centered on an epic struggle between Good and Evil. Swords and Sorcery (think earlier pulp fiction, such as Howard's Conan the Barbarian) is more morally ambiguous. However, essentialist expectations, such as the reality of magic and the supernatural, are still met.

Most written Science Fiction today is closer to Swords and Sorcery in terms of moral ambiguity. "Hard" science fiction follows modern science in rejecting essentialism. While the "what ifs" of this type of fiction satisfy some readers, many others yearn for the moral black-and-whites provided by High Fantasy. This demand is satisfied in popular science fiction films, such as Star Wars. Here the heroes turn out to be nobles (like Strider/Aragorn the returning king in LoTR!), matching the essentialist expectations bound up in our language.

Backpacking Gear Test 2010



In last year's Backpacking Gear Test I achieved a base weight (without food, water, or fuel) of 23 lbs and a total pack weight of 30 lbs. This year I have replaced various items, and gotten down to a base weight of 16 lbs and a total pack weight of 22 lbs : a reduction of 8 lbs, or over 25%.

Gear Weights (spreadsheet)

I took all the 2010 listed gear to Castle Rock State Park. I hiked 2.6 miles (each way) with full pack to Castle Rock Trail Camp, and stayed overnight.

Granite Gear Escape A.C. 60 Pack

The Escape is Granite Gear's newest ultralight pack. Compared to last year's North Face El Lobo 65, it is over 2 lbs lighter. It has 5 liters less rated capacity (60 versus 65L), but with the rest of this year's compact new gear I actually have more free space than before. It is rated for a maximum load of 35 lbs (versus 70 for the El Lobo), but I've discovered I have zero interest in carrying heavy loads. The Escape only has a plastic frame sheet (versus the internal aluminum X-frame of the El Lobo) and has a smaller hip belt, so the El Lobo would be preferable for heavy loads. Overall I found the Escape with lighter load more comfortable than last year's El Lobo with heavier load.

The Escape omits various features the El Lobo has, including a separate sleeping bag compartment, an attached padded belt so the detachable lid can be used as a fanny pack (the Escape lid does have belt loops), extra zippered entries, etc. It does also have a hydration compartment (an internal pocket for a "camelback" bladder, plus drinking tube ports), but I used the external bottle holsters instead. They are angled so bottles can be accessed while the pack is on. This frees up more internal space, and eliminates opportunities for liquid disasters.

Big Agnes Fly Creek UL1 Tent

This year I've switched to a one-person tent. The Fly Creek UL1 saves over 2 lbs compared to last year's Sierra Designs Vapor Light 2 XL. This tent is smaller: I can only sit upright in the exact center of the tent, and my head does touch both sides when I do so. My regular-length North Face sleeping bag exactly fits the tent length. There isn't room to place my pack beside me, but since I only carry a short sleeping pad I use my empty pack as a leg rest. One nifty feature: a pocket in the mesh roof holds a headlamp perfectly positioned for night reading (see the flyless second image).

The tent is free-standing, though it needs 2 stakes at the rear to pull the foot area open. It also came with the same aluminum stakes as last year's Vapor Light 2 (see stake picture with last year's blog entry). This year I decided to give them a try. They worked fine: no bent stakes. And a tip: the titanium wire handle of a folding spork can be placed through the small hole to use as a stake-puller. Don't try to pull them by hand.

Therm-a-Rest ProLite Air Mattress, Small

See the image inside the tent, above. This doesn't save weight compared to my earlier closed-cell foam pad, but it is much more compact when rolled. Therm-a-Rest does make an even lighter pad, but user comments raise concerns about its durability. As stated above I am using my empty pack as a leg rest, so the small size works fine.

REI Ti Ware Titanium Pot - 0.9 Liter

I picked this up on a close-out sale last year; this non-nonstick version has been discontinued. I only use it to boil water for dehydrated meals, so I don't need the nonstick coating. My stove melted the silicone coating off the fold-out wire handles almost immediately, so I threw the handles away and use the old pot-lifter from my MSR steel pot set instead. This expensive pot does feel cheap (the lid wants to drop into the pot), but at 0.9L it balances over my stove better than the 2L MSR pot, and most importantly it saves half a pound and a bunch of pack space. Yes, I bought a folding Ti spork, as my old plastic utensils won't fit inside the 0.9L pot. As stated above, the spork wire handle can double as a tent stake puller...

Other Changes

These include a headlamp instead of a mini-mag-lite, and a lighter first aid kit.

Sunday, April 11, 2010

ECONned

That brings us to a final outcome of this debacle. A radical campaign to reshape popular opinion recognized the seductive potential of the appealing phrase "free markets". Powerful business interests, largely captive regulators and officials, and a lapdog media took up this amorphous, malleable idea and made it a Trojan horse for a three-decade-long campaign to tear down the rules that constrained the finance sector. The result has been a massive transfer of wealth, with its centerpiece the greatest theft from the public purse in history. This campaign has been far too consistent and calculated to brand it with the traditional label, "spin". This manipulation of public perception can only be called propaganda. Only when we, the public, are able to call the underlying realities by their proper names - extortion, capture, looting, propaganda - can we begin to root them out.

Yves Smith, ECONned: How Unenlightened Self Interest Undermined Democracy and Corrupted Capitalism, p. 308.

Sunday, March 14, 2010

Game Time

Pen-and-paper role-playing games (RPGs) began as an expansion of rules for historical tabletop miniatures wargames. Tabletop wargame rules were concerned with creating accurate simulations of historical battles -- including medieval battles, such as Agincourt. Early RPGs modeled "heroes" as having the equivalent strength of several ordinary soldiers, and added rules for fantastic creatures, magic, and advanced technology. But the rules kept realism as the base case, so an ordinary soldier could fight, travel, heal, etc. in a manner approximating real life unless assisted by magic or advanced technology.

Imposing realistic times for travel and healing worked in the pen-and-paper game since all the players typically formed a single "party", and the referee could arbitrarily move the time-line ahead to skip weeks of boring travel or healing. This flexibility matched that which naturally happened in the opposite direction, where a few minutes of simulated combat time often took hours of real time to resolve. This freedom to manipulate the time-line was retained when RPGs were first moved onto computers. Many single player computer RPGs are still turn-based, and allow the player (sometimes controlling an entire party) to stop time while providing orders to each character. Again a few minutes of simulated combat can take longer to resolve (though computerization makes fast resolution much easier). And many games still allow time to be sped up during travel or healing.

Unfortunately this flexibility disappears with massively-multiplayer online RPGs. These provide the same persistent world model for all the players, so the time-line must move forward at the same fixed rate for everyone. A "realistic" mechanic for healing or travel would impose an unacceptable level of boredom on the players. Instead these games allow characters (even without specialized magical or technological assistance) to quickly heal and travel long distances between encounters.

This common mechanic from online RPGs is currently filtering back into the newest revisions of the classic pen-and-paper RPGs. The "old school gamers" are understandably concerned and confused. They don't see the need, since the referee is always free to speed up the time-line. And those from a simulation (especially miniatures) background don't like the loss of realism.

Monday, February 15, 2010

Conflicting World Models in Conflict Simulation

Many board games have been created to simulate past military conflicts. They are often created to explore "what ifs", alternative choices the opposing commanders hypothetically could have made at the time. A basic problem, of course, is that we have hindsight knowledge (though sometimes still incomplete) of what actually happened.

To simulate "fog of war" games sometimes hide the positions or strengths of opposing troops. More rarely the strengths of untested "green" troops are even hidden from their own commanders. Elaborate scenario start conditions also try to recreate the historical limitations faced by commanders at the time: troops hopelessly out of position or delayed due to political restrictions, surprise, disrupted communications, etc.

What these games rarely represent, however, is that commanders sometimes start with fundamentally different understandings of "basic game mechanics", such as the effects of mapboard terrain on movement, supply, and combat. Many historical battles have hinged on one side believing certain terrain was impassable by many unit types. The classic gaming example is the Ardennes forest at the beginning of both WW I and WW II. A "historically correct" gameboard for the French would show it impassable to mechanized units, while the German gameboard would show it passable. Since the "after the fact" map has the terrain passable, the French player will of course want to defend it accordingly. Thus the elaborate setup rules necessary to prevent this and ensure at least the possibility of achieving the historical outcome. But here the most important element of surprise, that of one commander suddenly discovering his world-model is incorrect, is lost.

This limitation becomes more acute the further players try to maneuver beyond the historical outcome. Since these alternative paths were (figuratively and literally) never explored, we have little idea where their differing world models would have broken down in conflict with reality.

The Future of Amplification

Recently there have been several positive reviews of the NAD M2 Direct Digital Amplifier (white paper). It is the first of its kind, and I expect many more amplifiers like this will follow.

Today most audio sources are digital: CD/SACD, DVD/Blu-Ray, HDTV, PC/MP3 server, etc. These signals are passed through a digital-to-analog converter, have volume control applied, and are passed as analog signals to the amplification stage. Even current class D amplifiers accept an analog signal, convert it to a PWM (pulse width modulation) digital signal, and use that to generate the final amplified analog signal sent to the speakers.

The NAD M2 is unique in keeping the input digital signals in the digital domain as long as possible. It accepts PCM (pulse code modulation) digital inputs, applies volume control digitally, and converts the PCM to PWM for output by the class D amplifier. Keeping the signal digital as long as possible eliminates all the possible noise sources in the redundant digital-to-analog-to-digital conversions, and the lossy analog connections between source, pre-amplifier and power amplifier.
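
To make the ordering of steps concrete, here is a deliberately naive sketch (my own illustration, not NAD's algorithm; real direct-digital amplifiers oversample and noise-shape): volume is applied as a digital multiply, and only then is the PCM sample mapped to a PWM duty cycle:

    package main

    import "fmt"

    // pcmToPWMDuty applies volume in the digital domain, then maps a signed
    // 16-bit PCM sample onto a 0..1 PWM duty cycle. Grossly simplified.
    func pcmToPWMDuty(sample int16, volume float64) float64 {
        v := float64(sample) * volume  // digital volume control
        return (v/32768.0 + 1.0) / 2.0 // map [-32768, 32767] onto [0, 1]
    }

    func main() {
        fmt.Printf("%.3f\n", pcmToPWMDuty(16384, 0.5)) // 0.625
    }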

(Yes, I deleted my previous audio post as I realized I was spreading misinformation. I am currently fascinated by the topic, and will keep trying.)

Sunday, January 24, 2010

Cycles of Explanation in Economics

In areas of science such as physics and biology our successive theories developed over time are better and better explanations of the observed facts. Think heliocentrism versus geocentrism, or natural selection versus Lamarckism. But in economics I suspect this is not always the case. Traumatic events like the Great Depression or (hopefully) the current crisis can force a brief period of clarity. But over time, economic forces will result in the mainstream replacement of the theory which most closely matches the observed facts of the previous crisis with a theory which maximizes short-term profits for powerful actors. The time-frame over which this shift occurs is probably a generation: the time it takes for those with first-hand memories of the previous crisis to leave power.

Here are two exhibits for my point. The first is a quote I have been re-sending for years:

"When business in the United States underwent a mild contraction in 1927, the Federal Reserve created more paper reserves in the hope of forestalling any possible bank reserve shortage. More disastrous, however, was the Federal Reserve's attempt to assist Great Britain who had been losing gold to us because the Bank of England refused to allow interest rates to rise when market forces dictated (it was politically unpalatable). The reasoning of the authorities involved was as follows: if the Federal Reserve pumped excessive paper reserves into American banks, interest rates in the United States would fall to a level comparable with those in Great Britain; this would act to stop Britain's gold loss and avoid the political embarrassment of having to raise interest rates.

The "Fed" succeeded; it stopped the gold loss, but it nearly destroyed the economies of the world in the process. The excess credit which the Fed pumped into the economy spilled over into the stock market -- triggering a fantastic speculative boom. Belatedly, Federal Reserve officials attempted to sop up the excess reserves and finally succeeded in braking the boom. But it was too late: by 1929 the speculative imbalances had become so overwhelming that the attempt precipitated a sharp retrenching and a consequent demoralizing of business confidence. As a result, the American economy collapsed. Great Britain fared even worse, and rather than absorb the full consequences of her previous folly, she abandoned the gold standard completely in 1931, tearing asunder what remained of the fabric of confidence and inducing a world-wide series of bank failures. The world economies plunged into the Great Depression of the 1930's."


Gold and Economic Freedom, Alan Greenspan, 1967.

Here the Greenspan of 1967 believed holding Fed rates too low caused a bubble that could not be cleaned up afterwards. But the Greenspan of the late 1990s and again early 2000s no longer believed low rates caused bubbles, or that a bubble could be too large for the Fed to clean up afterwards. Why the shift? I think the earlier theory fits the facts of the 1920-30s (and our current situation now) better. But note his successor Ben Bernanke still denies low rates cause bubbles.

A second exhibit is the presentation I wrote in 2006 and posted as Black Swans in the Market in 2008. I showed that any Statistics 101 student could easily demonstrate that daily stock price changes are not normally distributed. This makes people like the CFO of Goldman Sachs, who claimed in 2007 that "We were seeing things that were 25-standard-deviation events, several days in a row", look extra silly. Why were they using such bad models? Is finance in general so ignorant of basic statistics[*]? No, they are simply choosing models which maximize their short-term profits -- and bonuses.

[*] OTOH, a Google Books search on "which shows a histogram of the daily returns on Microsoft stock" shows that as of 2005 Brealey et al. were still peddling this nonsense to unsuspecting MBA students in their Principles of Corporate Finance and related texts. A 2005 edition apparently shifted the MSFT date range from 1986-97 to 1990-2001 (omitting the crash of 1987!). Later editions don't match the search terms; I have no idea if they have cleaned up their act.
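
For a sense of scale on that 25-standard-deviation claim, here is a quick sketch of the tail probabilities a normal model actually implies:

    package main

    import (
        "fmt"
        "math"
    )

    func main() {
        // One-sided tail probability P(Z > x) for a standard normal distribution.
        tail := func(x float64) float64 { return 0.5 * math.Erfc(x/math.Sqrt2) }

        fmt.Printf("P(Z > 3):  %.1e\n", tail(3))  // ~1.3e-03: a few times per decade of trading days
        fmt.Printf("P(Z > 25): %.1e\n", tail(25)) // ~3e-138: essentially never, let alone several days in a row
    }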

Sunday, January 3, 2010

Multi-Core Ant Colony Optimization for TSP in Go

Go is a new statically typed, garbage collected, concurrent programming language. It is natively compiled and offers near-C performance. Like Erlang, it uses lightweight processes ("goroutines") communicating via message passing, in the style of CSP. Messages can be synchronous (unbuffered) or asynchronous (buffered). Go supports multi-core processors.

On my usual ACO TSP test case Go is the best performing language so far. On a single core it was 2.8x slower than C, beating the previous best, Shedskin Python. Go's speedup from 1 to 2 cores was 1.9, matching the previous best, Erlang. With 4 cores Go exceeds C's single-core performance -- the first tested language to achieve this goal.



Go's lightweight processes are scheduled onto OS threads by the Go runtime system. This allows the system to support many more processes than OS threads alone could. I was able to run ACO TSP with 100,000 ant processes. The runtime scheduler appears to treat these processes somewhat like futures, lazily scheduling them when another process blocks on receipt of a message from a shared channel. This is inefficient for the type of parallel processing used in ACO (many seconds of parallel computation ending with a single message send), as only a single process (and thus a single core) is active at a time. This is a known issue, and the simple solution suggested on the golang-nuts mailing list is to add runtime.Gosched() calls to yield the processor. For my case this was insufficient, and adding heartbeat messages to each process helped force maximal multi-core usage.
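
The real code is linked below; as a minimal sketch of the pattern just described (compute-heavy goroutines that periodically call runtime.Gosched() and send heartbeat messages, with results collected over a channel), it looks roughly like this:

    package main

    import (
        "fmt"
        "runtime"
    )

    // One "ant": a long computation that periodically yields and sends a
    // heartbeat, then reports a final result. A sketch of the pattern only;
    // the real ACO code is linked below.
    func ant(id, iterations int, heartbeat chan<- int, result chan<- int) {
        for i := 0; i < iterations; i++ {
            // ... construct a tour, accumulate pheromone updates, etc. ...
            if i%1000 == 0 {
                runtime.Gosched() // yield so other goroutines get scheduled onto cores
                heartbeat <- id   // the extra message traffic that helped force multi-core usage
            }
        }
        result <- id
    }

    func main() {
        runtime.GOMAXPROCS(runtime.NumCPU())
        const ants = 24
        heartbeat := make(chan int, ants)
        result := make(chan int, ants)

        // Drain heartbeats so the ants never block on them.
        go func() {
            for range heartbeat {
            }
        }()

        for id := 0; id < ants; id++ {
            go ant(id, 100000, heartbeat, result)
        }
        for i := 0; i < ants; i++ {
            <-result
        }
        fmt.Println("all ants finished")
    }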

See here for code and more details.