One talk on day one (yesterday) of the 2009 Singularity Summit made me sit bolt upright in my seat, very glad to be listening to it – my mind excitedly turning over important new ideas.
With my apologies to the other speakers – who mainly covered material I’d heard on other occasions, or who mixed a few interesting points among weaker material (or who, in quite a few cases, were poor practitioners of PowerPoint and the mechanics of public speaking) – I have no hesitation in naming David Chalmers, Professor of Philosophy and Director of the Centre for Consciousness at the Australian National University, as the star speaker of the day.
I see that my assessment is shared by New Atlantis assistant editor Ari N. Schulman, in his review of day one, “One day closer to the Singularity”:
far and away the best talk of the day was from David Chalmers. He cut right to the core of the salient issues in determining whether the Singularity will happen
You can get the gist of the talk from Ari’s write-up of it. I don’t think the slides are available online (yet), but here’s a summary of some of the content.
First, the talk brought a philosopher’s clarity to analysing the core argument for the inevitability of the technological singularity, as originally expressed in 1965 by the British statistician I.J. Good:
“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
Over the course of several slides, Chalmers broke down the underlying argument, defining and examining concepts such as:
- “AI” (human-level intelligence)
- “AI+” (greater than human-level intelligence)
- “AI++” (much greater than human-level intelligence).
Along the way, he looked at what it is about intelligence that might cause intelligence itself to grow (a precursor to it “exploding”). He considered four mechanisms by which intelligence might be extended:
- “direct programming” (which he said was “really hard”)
- “brain emulation” (“not extendible”)
- “learning” (“still hard”)
- “simulated evolution” (“where my money is”).
Evolution is how intelligence arose in the first place. Evolution inside an improved, accelerated environment could be how intelligence goes far beyond its present capabilities. In other words, a virtual reality (created and monitored by humans) could be where first AI+ and then AI++ emerge.
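To make the “simulated evolution” mechanism a little more concrete, here is a toy sketch of the loop it relies on: a fitness function standing in for the environment, selection of the fittest candidates, and random mutation to produce the next generation. This is purely my own illustration (nothing like it appeared in the talk); the target string, population size and mutation rate are arbitrary choices.

```python
# Toy illustration of "simulated evolution": a minimal genetic algorithm.
# The "environment" here is just a fitness function; in the scenario Chalmers
# sketched, the environment would be a rich simulated world and the genomes
# would encode candidate intelligences, but the select-and-mutate loop is the
# same basic idea.

import random

TARGET = "intelligence explosion"        # the trait this toy environment rewards
GENES = "abcdefghijklmnopqrstuvwxyz "    # building blocks a genome can be made of


def fitness(genome: str) -> int:
    """Score a genome by how many characters match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))


def mutate(genome: str, rate: float = 0.05) -> str:
    """Copy a genome, randomly altering each character with probability `rate`."""
    return "".join(random.choice(GENES) if random.random() < rate else g
                   for g in genome)


def evolve(pop_size: int = 100, generations: int = 1000) -> str:
    """Run selection and mutation until the target trait appears (or we give up)."""
    population = ["".join(random.choice(GENES) for _ in TARGET)
                  for _ in range(pop_size)]
    for gen in range(generations):
        population.sort(key=fitness, reverse=True)
        if population[0] == TARGET:      # perfectly adapted to the environment
            return f"generation {gen}: {population[0]}"
        # Selection: keep the fittest tenth. Variation: refill the population
        # with slightly mutated copies of the survivors.
        survivors = population[: pop_size // 10]
        population = [mutate(random.choice(survivors)) for _ in range(pop_size)]
    return f"generation {generations}: best so far {population[0]}"


if __name__ == "__main__":
    print(evolve())
```

The only point of the sketch is the structure of the loop; Chalmers’ suggestion is that, with vastly more computing power and a far richer simulated environment, the same kind of process could be pushed first to AI+ and then to AI++.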
Not only is this the most plausible route to AI++, Chalmers argued, but it’s the safest route: a route by which the effects of the intelligence explosion can be controlled. He introduced the concept of a “leakproof singularity”:
- create AI in simulated worlds
- no red pills (one of several references to the film “The Matrix”)
- no external input
- go slow
Being leakproof is essential to prevent the powerful super-intelligence created inside the simulation from breaking out and (most likely) wreaking havoc on our own world (as covered in the first talk of the day, “Shaping the Intelligence Explosion”, by Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence). We need to be able to observe what is happening inside the simulation, but the simulated intelligences must not be able to discern our reactions to what they are doing. Otherwise they could use their super-intelligence to manipulate us and persuade us (against our best interests) to let them out of the box.
To quote Chalmers,
“The key to controllable singularity is preventing information from leaking in”
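As a purely hypothetical illustration of that one-way flow of information, here is what the “leakproof” constraint might look like reduced to a few lines of code: the outside observer gets a read-only view of the simulated world, and there is deliberately no channel for pushing anything back in. None of this comes from Chalmers’ talk; the class and method names are my own invention.

```python
# Hypothetical sketch of the "leakproof" principle: information can flow out
# (we observe the simulation) but not in (there is no interface through which
# the simulated agents can learn about us or receive our reactions).

from types import MappingProxyType


class LeakproofSimulation:
    def __init__(self) -> None:
        # Internal, mutable world state; only the simulation's own rules touch it.
        self._state = {"tick": 0, "agents": ()}

    def step(self) -> None:
        """Advance the simulated world by one tick, using internal rules only."""
        self._state["tick"] += 1
        # ... evolve the agents here, with no reference to the outside world ...

    def observe(self) -> MappingProxyType:
        """Return a read-only view of the world: information leaks out, not in."""
        return MappingProxyType(self._state)

    # Deliberately absent: no inject(), no network socket, no console input.
    # "No red pills" means no channel through which knowledge of the outside
    # world (including our reactions) can reach the simulated agents.


sim = LeakproofSimulation()
sim.step()
snapshot = sim.observe()
print(snapshot["tick"])      # observers can watch what is happening...
# snapshot["tick"] = 99      # ...but any attempt to write raises TypeError
```

The hard part, of course, is everything the comments wave away: Chalmers’ point is that keeping every such channel genuinely closed is what makes the singularity controllable.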
Once super-intelligence has occurred within the simulation, what would we humans want to do about it? Chalmers offered a range of choices, before selecting and defending “uploading”: we would want to insert enhanced versions of ourselves into this new universe. He also reviewed the likelihood that the super-intelligences created could, one day, be able to re-create those humans who had died before the singularity took place, but for whom enough records existed to allow a faithful reconstruction.
That’s powerful stuff (and there’s a lot more, which I’ve omitted for now for lack of time). But as the talk proceeded, another set of powerful ideas constantly lurked in the background: our own universe may be exactly the kind of simulated “virtual-reality” creation that Chalmers was describing.
Further reading: For more online coverage of the idea of the leakproof singularity, see PopSci.com. For a far-ranging science fiction exploration of similar ideas, I recommend Greg Egan’s book Permutation City. See also David Chalmers’ paper “The Matrix as metaphysics”.