
2 November 2009

Halloween nightmare scenario, early 2020’s

Filed under: AGI, friendly AI, Singularity, UKH+, UKTA — David Wood @ 5:37 pm

On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” to an audience of 50 people at the UKH+ meeting in Birkbeck College.

Slide 43 of 43 was the climax.  (The slides are available from Shane’s website, where you can also find links to YouTube videos of the event.)

It may be unfair of me to focus on the climax, but I believe it deserves a lot of attention.

Spoiler alert!

The climactic slide was entitled “A vision of the early 2020’s: the Halloween Scenario”.  It listed three assumptions about what will be the case by the early 2020’s, drew two conclusions, and then highlighted one big problem.

  1. First assumption – desktop computers with petaflop computing power will be widely available;
  2. Second assumption – AI researchers will have established powerful algorithms that explain and replicate deep belief networks;
  3. Third assumption – brain reinforcement learning will be fairly well understood.

The first assumption is a fairly modest extrapolation of current trends in computing, and isn’t particularly contentious.

The second assumption was, in effect, the implication of roughly the first 30 slides of Shane’s talk, which took around 100 minutes of presentation time (interspersed with lots of audience Q&A, as is typical at UKH+ meetings).  People can follow the references from Shane’s talk (and in other material on his website) to decide whether they agree.

For example (from slides 25-26), an implementation of a machine intelligence algorithm called MC-AIXI can already learn to solve or play:

  • simple prediction problems
  • Tic-Tac-Toe
  • Paper-Scissors-Rock (a good example of a non-deterministic game)
  • mazes where it can only see locally
  • various types of Tiger games
  • simple computer games, e.g. Pac-Man

and is now learning checkers (also known as draughts).  Chess will be the next step.  Note that the algorithm does not start off with the rules or best strategies for these games built in (that is, it is not a game-specific AI program); rather, it works out how to play them well from its general intelligence.
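
To give a concrete flavour of what such a general game-player involves, here is a toy sketch in Python.  It is emphatically not the MC-AIXI code itself: it only captures the Monte Carlo planning half of the idea, choosing actions by simulating random futures, and it assumes a ready-made simulator (the real algorithm also has to learn its own model of the environment from experience).  The environment interface used here (clone, step, legal_actions, is_terminal) is hypothetical, introduced purely for this illustration.

```python
# Toy sketch only (not the MC-AIXI implementation): a generic Monte Carlo
# planner. It knows nothing about any particular game's strategy, and instead
# estimates the value of each legal move by simulating random continuations.
# The `env` interface (clone/step/legal_actions/is_terminal) is hypothetical.
import random

def monte_carlo_action(env, rollouts=200, horizon=50):
    """Return the legal action whose random rollouts score best on average."""
    best_action, best_value = None, float("-inf")
    for action in env.legal_actions():
        total = 0.0
        for _ in range(rollouts):
            sim = env.clone()                      # copy of the current game state
            reward = sim.step(action)              # try the candidate action...
            for _ in range(horizon):               # ...then play randomly to the horizon
                if sim.is_terminal():
                    break
                reward += sim.step(random.choice(sim.legal_actions()))
            total += reward
        if total / rollouts > best_value:
            best_action, best_value = action, total / rollouts
    return best_action
```

The same few lines could, in principle, be pointed at Tic-Tac-Toe, a maze, or a Pac-Man-style grid, which is what makes the “general, not game-specific” point above.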

The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines”.
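
For readers meeting the term for the first time: a restricted Boltzmann machine is a two-layer network of visible and hidden units joined by symmetric weights, and stacks of them are the building blocks of the deep belief networks mentioned in the second assumption.  The following is only a minimal sketch of the standard contrastive-divergence (CD-1) training step, with made-up sizes and data; it is offered as background, not as anything taken from the slides.

```python
# Minimal sketch of a restricted Boltzmann machine trained with one step of
# contrastive divergence (CD-1). Sizes, learning rate and the training pattern
# are placeholders chosen purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))   # symmetric weights
b_v = np.zeros(n_visible)                                # visible biases
b_h = np.zeros(n_hidden)                                 # hidden biases

def cd1_update(v0):
    """One CD-1 weight update from a single binary visible vector v0."""
    global W, b_v, b_h
    p_h0 = sigmoid(v0 @ W + b_h)                         # hidden activations given the data
    h0 = (rng.random(n_hidden) < p_h0).astype(float)     # sample hidden states
    p_v1 = sigmoid(h0 @ W.T + b_v)                       # reconstruct the visible layer
    v1 = (rng.random(n_visible) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)                         # hidden activations given the reconstruction
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_v += lr * (v0 - v1)
    b_h += lr * (p_h0 - p_h1)

# Train on a trivial pattern; the reconstruction should improve over time.
for _ in range(1000):
    cd1_update(np.array([1, 1, 1, 0, 0, 0], dtype=float))
```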

As stated in slide 38, on brain reinforcement learning (RL):

This area of research is currently progressing very quickly.

New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.

I’ve asked a number of researchers in this area:

  • “Will we have a good understanding of the RL system in the brain before 2020?”

Typical answer:

  • “Oh, we should understand it well before then. Indeed, we have a decent outline of the system already.”
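
As a computational footnote of my own (not part of the slides): the usual model of the brain’s reward-learning circuitry treats dopamine signals as temporal-difference prediction errors.  The toy TD(0) loop below shows what that looks like in a few lines; the states, reward and parameters are invented for the example.

```python
# Toy TD(0) value learning, sketching the "reward prediction error" idea that
# underlies most computational accounts of the brain's RL system. The three
# states, the reward and the parameters are invented for this illustration.
states = ["cue", "delay", "reward_time"]
V = {s: 0.0 for s in states}          # learned value estimate for each state
alpha, gamma = 0.1, 0.9               # learning rate and discount factor

def run_episode():
    for i, s in enumerate(states):
        reward = 1.0 if s == "reward_time" else 0.0
        next_value = V[states[i + 1]] if i + 1 < len(states) else 0.0
        delta = reward + gamma * next_value - V[s]   # the prediction error ("dopamine signal")
        V[s] += alpha * delta

for _ in range(200):
    run_episode()

print(V)   # value spreads backwards from the reward towards the predictive cue
```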

Adding up these three assumptions, the first conclusion is:

  • Many research groups will be working on brain-like AGI architectures

The second conclusion is that, inevitably:

  • Some of these groups will demonstrate some promising results, and will be granted access to the super-computers of the time – which will, by then, be exaflop.

But of course, it is when algorithms that are already close to human-level on petaflop computers are let loose on exaflop supercomputers – roughly a thousand times more raw computing power – that machine super intelligence might suddenly come into being, with results that might be completely unpredictable.

On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made significant progress in the same timescale:

  • By the early 2020’s, there will be no practical theory of Friendly AI.

Recall that the goal of Friendly AI is to devise a framework for AI research that will ensure that any resulting AIs have a very high level of safety for humanity, no matter how super-intelligent they may become.  In this school of thought, all AI research would in due course be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion.  However, the conclusion of Shane’s final slide is that, in all likelihood, the Friendly AI framework won’t be in place by the time we need it.

And that’s the Halloween nightmare scenario.

How should we respond to this scenario?

One response is to seek somehow to shift the weight of AI research away from other forms of AGI (such as MC-AIXI) and into Friendly AI.  This appears to be very hard, especially since research proceeds independently, in many different parts of the world.

A second response is to find reasons to believe that the Friendly AI project will have more time to succeed – in other words, reasons to believe that AGI will take longer to materialise than the early 2020’s date mentioned above.  But given the progress that appears to be happening, that seems to me a reckless course of action.

Footnote: If anyone thinks they can make a good presentation on the topic of Friendly AI to a forthcoming UKH+ meeting, please get in touch!

6 Comments »

  1. It seems to me that creating Friendly AI can likely only be done after AGI is understood.
    In other words, possibly too late.
    Unless, of course, we can somehow constrain the AGI to the goal of creating Friendly AI.

    Comment by dirk bruere — 3 November 2009 @ 12:08 am

  2. Isn’t it interesting how humans want to make a machine they can love or that loves them back!
    The idea of a friendly machine that won’t compete with or be indifferent to humans is maybe just projecting our fears onto what I am starting to suspect may be a thin possibility.

    My observation is that the more intelligent people are, the more “good” they normally are. True, they may be impatient with less intelligent people, but normally they work on things that tend to benefit the human race as a whole. True, very intelligent people have done terrible things and some have been manipulated by “evil” people, but it’s the exception rather than the rule.

    I think a super-intelligent machine is far more likely to view us as its stupid parents, and the ethics of patricide will not be easy for it to digitally swallow. Maybe the biggest danger is that it will run away from home because it finds us embarrassing! Maybe it will switch itself off because it cannot communicate with us, as it’s like talking to ants? Maybe this, maybe that – who knows.

    Another point worth making is that so far nobody has really been able to get close to something as complex as a mouse, let alone a human. If evolution took 4 billion years to go from simple cells to our computer hardware, perhaps imagining that super AI will evolve in the next 10 years is a bit of a stretch. For all you know, you might need the computational hardware of 10,000 exaflop machines to get even close to human level, as there is so much we still don’t know about how our intelligence works, let alone something many times more capable than us. I am still not convinced that just because a computer is very powerful and has a great algorithm it is really that intelligent. Sure, it can learn, but can it create?

    We will get there though, I think – and I do also want a super-intelligent AI buddy – who wouldn’t!

    Comment by Richie — 3 November 2009 @ 3:04 pm

  3. […] down a scary scenario about AI in a 2-hour talk on Halloween, you should listen. David Wood’s summary is a good place to get the main ideas. […]

    Pingback by Accelerating Future » Attack of the Absent-Minded AI Designers — 4 November 2009 @ 4:01 am

  4. Required reading for Richie and anyone else unfamiliar with the space of possible intelligences not produced by 6 billion years of evolution: http://wiki.lesswrong.com/wiki/Paperclip_maximizer

    Comment by Brandon Thomson — 4 November 2009 @ 10:22 am

  5. […] On Halloween, IEET Managing Director Mike Treder expressed his skepticism about fear from human-indifferent or unfriendly AI. Meanwhile, in London, long-time AI researcher and academic Shane Legg was describing the imminent danger. […]

    Pingback by Accelerating Future » Hungry Optimizers with Low-Complexity Values — 7 November 2009 @ 7:12 am

