On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” to an audience of 50 people at the UKH+ meeting in Birkbeck College.
Slide 43 of 43 was the climax. (The slides are available from Shane’s website, where you can also find links to YouTube videos of the event.)
It may be unfair of me to focus on the climax, but I believe it deserves a lot of attention.
Spoiler alert!
The climactic slide was entitled “A vision of the early 2020’s: the Halloween Scenario”. It listed three assumptions about what will be the case by the early 2020’s, drew two conclusions, and then highlighted one big problem.
- First assumption – desktop computers with petaflop computing power will be widely available;
- Second assumption – AI researchers will have established powerful algorithms that explain and replicate deep belief networks;
- Third assumption – brain reinforcement learning will be fairly well understood.
The first assumption is a fairly modest extrapolation of current trends in computing, and isn’t particularly contentious.
The second assumption was, in effect, the implication of roughly the first 30 slides of Shane’s talk, taking around 100 minutes of presentation time (interspersed with lots of audience Q&A, as is typical at UKH+ meetings). People can follow the references in Shane’s talk (and in other material on his website) to decide whether they agree.
For example (from slides 25-26), an implementation of a machine intelligence algorithm called MC-AIXI can already learn to solve or play:
- simple prediction problems
- Tic-Tac-Toe
- Paper-Scissors-Rock (a good example of a non-deterministic game)
- mazes where it can only see locally
- various types of Tiger games
- simple computer games, e.g. Pac-Man
and is now being taught to play checkers (also known as draughts), with chess as the next step. Note that this algorithm does not start off with any game-specific rules or strategies built in (that is, it is not a program written for any one game); it works out how to play these games well from its general intelligence, as the sketch below illustrates.
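To make the “no built-in rules” point concrete, here is a minimal, illustrative sketch (my own, not Shane’s or the MC-AIXI authors’ code) of the kind of agent–environment loop such a general agent runs. The agent only ever sees observations and rewards; it builds its own predictive model from experience and chooses actions by simulating rollouts against that model. The class and method names are hypothetical.

```python
import random

class GeneralAgent:
    """Toy stand-in for a general reinforcement-learning agent in the
    spirit of MC-AIXI: it never sees the rules of the game, only
    observations and rewards."""

    def __init__(self, actions, horizon=5, simulations=200):
        self.actions = actions          # the only prior knowledge: the set of legal actions
        self.horizon = horizon          # planning depth
        self.simulations = simulations  # Monte Carlo rollouts per decision
        self.model = {}                 # learned map: (history, action) -> observed outcomes

    def learn(self, history, action, observation, reward):
        # Update the predictive model from raw experience only.
        self.model.setdefault((history, action), []).append((observation, reward))

    def act(self, history):
        # Monte Carlo planning: estimate each action's value by simulating
        # futures against the learned model, then pick the best.
        best_action, best_value = self.actions[0], float("-inf")
        for action in self.actions:
            value = sum(self._rollout(history, action) for _ in range(self.simulations))
            if value > best_value:
                best_action, best_value = action, value
        return best_action

    def _rollout(self, history, first_action):
        # Sample one plausible future; situations the model has never seen
        # simply end the rollout (a real agent would also explore).
        total, h, action = 0.0, history, first_action
        for _ in range(self.horizon):
            outcomes = self.model.get((h, action))
            if not outcomes:
                break
            observation, reward = random.choice(outcomes)
            total += reward
            h = h + ((action, observation),)
            action = random.choice(self.actions)
        return total
```

The real MC-AIXI implementation replaces the naive lookup-table model with context tree weighting and the flat rollouts with a UCT-style Monte Carlo tree search, but the shape of the loop is the same: general-purpose prediction plus planning, with no game-specific knowledge baked in.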
The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines”.
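For readers who haven’t met the term: a restricted Boltzmann machine is a two-layer network of stochastic binary units whose hidden layer learns features that account for patterns in the visible (data) layer, and stacks of these form the deep belief networks mentioned earlier. The following is a minimal sketch of one contrastive-divergence (CD-1) training step in NumPy; it illustrates the general technique only and is not taken from Shane’s slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b_visible, b_hidden, learning_rate=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: batch of visible vectors, shape (batch, n_visible)
    W:  weight matrix, shape (n_visible, n_hidden)
    """
    # Up pass: infer hidden activations from the data.
    h0_prob = sigmoid(v0 @ W + b_hidden)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)

    # Down-up pass: reconstruct the visible layer, then re-infer hidden units.
    v1_prob = sigmoid(h0 @ W.T + b_visible)
    h1_prob = sigmoid(v1_prob @ W + b_hidden)

    # Nudge weights so the data becomes more probable than the reconstructions.
    batch = v0.shape[0]
    W += learning_rate * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / batch
    b_visible += learning_rate * (v0 - v1_prob).mean(axis=0)
    b_hidden += learning_rate * (h0_prob - h1_prob).mean(axis=0)
    return W, b_visible, b_hidden

# Tiny illustrative run on random binary "data".
n_visible, n_hidden = 6, 3
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
data = (rng.random((20, n_visible)) < 0.5).astype(float)
for _ in range(100):
    W, b_v, b_h = cd1_step(data, W, b_v, b_h)
```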
As stated in slide 38, on brain reinforcement learning (RL):
This area of research is currently progressing very quickly.
New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.
I’ve asked a number of researchers in this area:
- “Will we have a good understanding of the RL system in the brain before 2020?”
Typical answer:
- “Oh, we should understand it well before then. Indeed, we have a decent outline of the system already.”
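To give a rough sense of what “understanding the brain’s RL system” means in computational terms: the standard account models phasic dopamine activity as a temporal-difference reward prediction error. The sketch below shows a plain TD(0) update; it is a textbook illustration, not anything specific to the research Shane cited.

```python
# Temporal-difference (TD) learning: the reward prediction error "delta"
# below is the quantity widely used to model phasic dopamine signals.

def td_update(V, state, next_state, reward, alpha=0.1, gamma=0.9):
    """One TD(0) update of a state-value table V (a dict mapping state -> value)."""
    delta = reward + gamma * V.get(next_state, 0.0) - V.get(state, 0.0)
    V[state] = V.get(state, 0.0) + alpha * delta
    return delta  # positive: better than expected; negative: worse than expected

# Example: a cue ("light") that reliably leads to a later reward ("food").
V = {}
for trial in range(50):
    td_update(V, "light", "food", reward=0.0)  # the cue itself delivers no reward
    td_update(V, "food", "end", reward=1.0)    # the reward arrives at the food state
# Over trials the prediction error at reward time shrinks and the cue
# acquires value: the classic signature of dopamine-based learning.
```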
Adding up these three assumptions, the first conclusion is:
- Many research groups will be working on brain-like AGI architectures
The second conclusion is that, inevitably:
- Some of these groups will demonstrate promising results, and will be granted access to the supercomputers of the time – which will, by then, be exaflop machines.
But of course, it is when almost-human-level AGI algorithms, developed on petaflop desktop computers, are let loose on exaflop supercomputers (a thousand-fold increase in raw computing power) that machine super intelligence might suddenly come into being, with results that could be completely unpredictable.
On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made significant progress in the same timescale:
- By the early 2020’s, there will be no practical theory of Friendly AI.
Recall that the goal of Friendly AI is to devise a framework for AI research that will ensure that any resulting AIs have a very high level of safety for humanity, no matter how super-intelligent they may become. In this school of thought, all AI research would, after some time, be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion. However, the upshot of Shane’s slides is that the Friendly AI framework is unlikely to be in place by the time we need it.
And that’s the Halloween nightmare scenario.
How should we respond to this scenario?
One response is to seek somehow to shift the weight of AI research away from other forms of AGI (such as MC-AIXI) and towards Friendly AI. This appears very hard, especially since research proceeds independently in many different parts of the world.
A second response is to find reasons to believe that the Friendly AI project will have more time to succeed – in other words, reasons to believe that AGI will take longer to materialise than the early 2020’s timeframe mentioned above. But given the progress that appears to be happening, that seems to me a reckless course of action.
Footnote: If anyone thinks they can make a good presentation on the topic of Friendly AI to a forthcoming UKH+ meeting, please get in touch!





