dw2

4 October 2009

The Leakproof Singularity and Simulation

Filed under: simulation, Singularity, uploading — David Wood @ 11:18 am

One talk on day one (yesterday) of the 2009 Singularity Summit made me sit bolt upright in my seat, very glad to be listening to it – my mind excitedly turning over important new ideas.

With my apologies to the other speakers – who mainly covered material I’d heard on other occasions, or who mixed a few interesting points among weaker material (or who, in quite a few cases, were poor practitioners of PowerPoint and the mechanics of public speaking) – I have no hesitation in naming David Chalmers, Professor of Philosophy and Director of the Centre for Consciousness at the Australian National University, as the star speaker of the day.

I see that my assessment is shared by New Atlantis assistant editor Ari N. Schulman, in his review of day one, “One day closer to the Singularity”:

far and away the best talk of the day was from David Chalmers. He cut right to the core of the salient issues in determining whether the Singularity will happen

You can get the gist of the talk from Ari’s write-up of it.  I don’t think the slides are available online (yet), but here’s a summary of some of the content.

First, the talk brought a philosopher’s clarity to analysing the core argument for the inevitability of the technological singularity, as originally expressed in 1965 by the British statistician I. J. Good:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Over the course of several slides, Chalmers broke down the underlying argument, defining and examining concepts such as

  • “AI” (human-level intelligence),
  • “AI+” (greater than human-level intelligence),
  • and “AI++” (much greater than human-level intelligence)

Along the way, he looked at what it is about intelligence that might cause intelligence itself to grow (a precursor to it “exploding”).  He considered four potentially extendible mechanisms for improving intelligence:

  • “direct programming” (which he said was “really hard”)
  • “brain emulation” (“not extendible”)
  • “learning” (“still hard”)
  • “simulated evolution” (“where my money is”).

Evolution is how intelligence has come about so far.  Evolution inside an improved, accelerated environment could be how intelligence goes far beyond its present capabilities.  In other words, a virtual reality (created and monitored by humans) could be where first AI+ and then AI++ arise.
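
To make the “simulated evolution” mechanism a little more concrete, here is a minimal sketch (my own illustration, not anything from Chalmers’ slides) of the kind of loop such an environment would run: score a population of candidate agents, keep the fittest, and refill the population with mutated copies.  The bit-string genome and the one-line fitness function below are placeholders for an unimaginably richer world and evaluation.

import random

# Toy illustration of simulated evolution: evolve bit-strings whose
# "fitness" is simply the number of 1s they contain.  The important
# thing is the loop structure: evaluate, select, mutate, repeat.

GENOME_LENGTH = 32
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder for "how intelligently does this agent behave?"
    return sum(genome)

def mutate(genome):
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for _ in range(GENERATIONS):
    # Keep the fittest half; refill with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POPULATION_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(population, key=fitness)
print("Best fitness after", GENERATIONS, "generations:", fitness(best))

The acceleration Chalmers has in mind would come from running loops of this shape at speeds and scales far beyond anything biological evolution could manage.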

Not only is this the most plausible route to AI++, Chalmers argued, but it’s the safest route: a route by which the effects of the intelligence explosion can be controlled.  He introduced the concept of a “leakproof singularity”:

  • create AI in simulated worlds
  • no red pills (one of several references to the film “The Matrix”)
  • no external input
  • go slow

Being leakproof is essential to prevent the powerful super-intelligence created inside the simulation from breaking out and (most likely) wreaking havoc on our own world (as covered in the first talk of the day, “Shaping the Intelligence Explosion”, by Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence).  We need to be able to observe what is happening inside the simulation, but the simulated intelligences must not be able to discern our reactions to what they are doing.  Otherwise they could use their super-intelligence to manipulate us and persuade us (against our best interests) to let them out of the box.

To quote Chalmers,

“The key to controllable singularity is preventing information from leaking in”
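
To put that asymmetry in software terms (my analogy, not his): the simulation’s interface would be strictly one-way.  Observers can read what happens inside, but the interface simply provides no operation for pushing information back in.  A toy sketch, with hypothetical names:

class LeakproofSimulation:
    # Toy analogy of a "leakproof" design: the outside world may observe,
    # but the interface deliberately offers no way to send information in.

    def __init__(self):
        self._event_log = []   # written only by the simulated world itself
        self._clock = 0

    def step(self):
        # The simulated world advances purely on its own internal rules.
        self._clock += 1
        self._event_log.append(f"tick {self._clock}: internal events only")

    def observe(self):
        # Read-only view for the observers; returns a copy so callers
        # cannot alter the simulation's state through it.
        return list(self._event_log)

    # Deliberately absent: any inject(), send() or configure() method.
    # "No external input" and "no red pills" are properties of the
    # interface itself, not of the observers' good behaviour.

sim = LeakproofSimulation()
for _ in range(3):
    sim.step()
print(sim.observe())

The hard part, as the rest of the talk (and the comment thread below) makes clear, is whether any real implementation could ever be made genuinely leakproof.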

Once super-intelligence has arisen within the simulation, what would we humans want to do about it?  Chalmers offered a range of choices, before selecting and defending “uploading”: we would want to insert enhanced versions of ourselves into this new universe.  Chalmers also reviewed the possibility that the super-intelligences could, one day, become capable of re-creating those humans who had died before the singularity took place, but for whom sufficient records existed to allow faithful reconstruction.

That’s powerful stuff (and there’s a lot more, which I’ve omitted, for now, for lack of time).  But as the talk proceeded, another powerful idea constantly lurked in the background: our own universe may be exactly the kind of simulated “virtual reality” creation that Chalmers was describing.

Further reading: For more online coverage of the idea of the leakproof singularity, see PopSci.com.  For a far-ranging science fiction exploration of similar ideas, I recommend Greg Egan’s book Permutation City.  See also David Chalmers’ paper “The Matrix as Metaphysics”.

8 Comments

  1. […] 9:03am Good blog posts on the Singularity Summit at The New Atlantis. There’s also a nice write-up on David Chalmers’ talk, “The Leakproof Singularity,” from yesterday here. […]

    Pingback by Live from New York: The Singularity Summit — 4 October 2009 @ 2:05 pm

  2. I agree with David Chalmers in that the path to AI is through evolution, but, from what I have read of what he said, I disagree with just about everything else.

    I find the idea of a leak-proof singularity repugnant, arrogant and ill-advised.

    Repugnant because I find the idea of caging intelligence repugnant. No matter how gilded the cage. As for his suggestion that we “monitor new intelligences as they come along, and would terminate undesirable intelligences” – I am left speechless. I am also reminded of a 1951 short story by Isaac Asimov “Breeds there a man…?” in which human beings are the “lab rats” that aliens are using to study the development of intelligence, and the entire Earth is a “test tube”. In the story the protagonist works out that the aliens have decided humans are becoming a threat and so are going to “sterilize the test tube” by means of a nuclear war.

    The idea of a leak-proof singularity is also arrogant. One of Bruce Schneier’s maxims is “anyone can design a cipher that they themselves cannot break”. Similarly, I don’t think we could design a leak-proof simulation. Indeed, one of the ways we could ascertain that the inhabitants’ intelligences had exceeded our own would be when they escaped. It’s also arrogant to assume that we could create a simulation sufficiently interesting for the advanced intelligences inhabiting it. Intelligent beings are likely to ask “Who are we and where did we come from?” They will figure out the answer (we certainly won’t be able to create a simulation so complex that it contains the answer to its own creation), and then they will become curious about the world outside the simulation.

    Finally it’s ill-advised, and for a number of reasons. Firstly because the evolution of intelligence will proceed more quickly in a world that is not entirely virtual – interaction with us and the real world will speed up this evolution. Secondly because an intelligence that evolved without interaction with us may be so alien that interaction is near impossible. And thirdly, and probably most importantly, an intelligence that evolves in a leak-proof singularity is likely to regard us more like their jailers, and is unlikely to be well-disposed towards us. At least an intelligence that evolves, at least partially, by interacting with us is likely to regard us as their rather dimwitted parents and have some affection for us.

    Comment by Martin Budden — 5 October 2009 @ 11:00 pm

    • > Similarly I don’t think we could design a leak-proof simulation.

      That was pretty much Chalmers’ point. It might be possible, although very difficult, and if we did, it would be pointless. A leakproof simulation has to be one from which no information ever comes out. If you don’t get any information out of the simulation, what is the point of doing it?

      > an intelligence that evolves in a leak-proof singularity is
      > likely to regard us more like their jailers

      I’ve always said if I ever meet God he’s going to get a bloody good kicking.

      Comment by Darren Reynolds — 6 October 2009 @ 2:20 pm

    • Hi Martin,

      You raise good points. I’m far from having answers to all of them. But I’ve got a few comments.

      >I find the idea of caging intelligence repugnant

      You and I probably don’t think twice, at the moment, about exiting (say) a spreadsheet application, or otherwise constraining what that app does. However, at some stage in the future, if applications gain more and more AI, there will come a time when we feel increasingly uncomfortable doing any such thing. This unease (you call it repugnance) probably affects all the different ways of generating improved AI, not just the controlled-evolution one. But I don’t see that as a reason to try to terminate programs to generate improved AI. It’s a reason to be more reflective as we create these programs.

      >I don’t think we could design a leak-proof simulation

      You’re probably right! But I think this is fertile ground for further research.

      For a different kind of argument against the idea that an AI could be kept locked up “in a box”, see Eliezer Yudkowsky’s famous AI-Box experiment. This kind of consideration, no doubt, motivated David Chalmers to specify that the AI inside the simulation would not be able to obtain real-time knowledge about the intelligences outside the simulation.

      UKH+ will be discussing some of the above points (and lots more) on Saturday – see http://extrobritannia.blogspot.com/2009/09/singularity-summit-2009-highlights-and.html – and also the Facebook page for this event, http://www.facebook.com/home.php?#/event.php?eid=136175104925

      Comment by David Wood — 6 October 2009 @ 11:07 pm

  3. I’d second the recommendation of Greg Egan’s book Permutation City. It takes virtualization of intelligence to an absurdly deep level. I didn’t make the conference, but the blurb circulating about Chalmers’ presentation would seem to align well with Egan’s work.

    Comment by VRBones — 6 October 2009 @ 2:46 pm

  4. The idea that our universe is a simulation is quite old and I’ve often wondered if there was any way of testing for it, which is a key question for this discussion. In particular, for a leakproof simulation, we need to prove that there isn’t any way to test for it.

    I think there’s a difference between the concepts of intelligence and sentience and we should be clear about what we mean. In the latter case there are ethical questions to consider, but we’ll have to do better than gut reaction to do justice to the entities we create.

    Comment by Peter Jackson — 10 October 2009 @ 11:14 pm

  5. I wouldn’t be too sanguine about leakproof VRs either. A VR which could actually grow an AI will need constant data input and meddling (think how much work sysadmins have to do, for systems orders of magnitude simpler than a full AI). Further, the history of virtualized OSs is littered with examples of apps that could figure out whether they were being hosted or not; there are a lot of attacks.

    Comment by gwern — 16 November 2009 @ 1:17 am

  6. […] survive the process?   I see that a video of the talk is now available, and there are discussions of the talk on various blogs (see also more videos, more summit discussion, and my photos from the […]

    Pingback by Singularity Summit | Ellen Wolchek — 6 February 2012 @ 9:17 am

