dw2

3 October 2009

Shaping the intelligence explosion

Filed under: Singularity — David Wood @ 2:38 pm

Here’s a quick summary of the central messages from Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence, in her talk “Shaping the intelligence explosion”.  She makes four key claims.  Note: these claims don’t depend on the specific details of any particular technology.

1. Intelligence can radically transform the world

Intelligence is like leverage.  Think of Archimedes claiming he could move the whole world, given a sufficiently long lever.

The smarter a given agent is, the more scope it has to find ways to achieve its goal.

Imagine an intelligence as far beyond human-scale intelligence as human-scale intelligence is beyond that of a goldfish.  (“A goldfish at the opera” is a metaphor for minds that simply can’t understand certain things.)  How much more change could be achieved with that kind of intelligence?

Quote from Michael Vassar: “Humans are the stupidest a system could be, and still be generally intelligent”.

2. An intelligence explosion may be sudden

Different processes have different characteristic timescales.  Many things can happen on timescales very much faster than humans are used to.

Human culture has already brought about a considerable acceleration of change compared to biological evolution.

Silicon emulations of human brains could run much faster than the biological brain itself.  Adding extra hardware could multiply the effect again: “Instant intelligence: just add hardware”.
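To see why emulation speed matters, here’s a rough back-of-envelope sketch.  The figures are commonly cited round numbers of my own choosing, not numbers from the talk: biological neurons fire at most a few hundred times per second, while silicon switches billions of times per second.

```python
# Back-of-envelope sketch (illustrative round numbers, not figures from the talk).

NEURON_FIRING_RATE_HZ = 200    # rough upper bound for biological neuron firing rates
SILICON_CLOCK_RATE_HZ = 2e9    # a modest 2 GHz processor

# If a brain emulation could be driven at hardware speed, the subjective
# speed-up could in principle be enormous.
serial_speedup = SILICON_CLOCK_RATE_HZ / NEURON_FIRING_RATE_HZ
print(f"Potential serial speed-up: {serial_speedup:,.0f}x")   # ~10,000,000x

# "Just add hardware": doubling the hardware could, at best, roughly double
# the number of emulated minds or the rate of parallelisable work.
extra_hardware_factor = 2
print(f"With {extra_hardware_factor}x hardware: up to "
      f"{serial_speedup * extra_hardware_factor:,.0f}x")
```

Whatever the exact numbers turn out to be, the point is that the gap between biological and silicon timescales leaves room for a very sudden transition.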

The key question is: how fast could super-human AI arise?  That depends on the details of how brain emulation works.  But we can’t rule out the possibility of a period of super-fast evolution of super-human AIs.

3. An uncontrolled intelligence explosion would kill us, and destroy practically everything we care about

Smart agents can rearrange the environment to meet their goals.

Most rearrangements of the human environment would kill us.

Would super AIs want to keep humans around – e.g. as trading partners?  Alas, we are probably less useful as trading partners than whatever an AI could build by rearranging our parts!

Values can vary starkly across species.  Dung beetles think dung is yummy.

AIs are likely to want to scrap our “spaghetti code” of culture and create something much better in its place.

4. A controlled intelligence explosion could save us.  It’s difficult, but it’s worth the effort.

A well engineered AI could be permanently stable in its goals.

Be careful what you build an optimizer for – remember the parable of the sorcerer’s apprentice.
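As a toy illustration of that parable (the scenario, names and numbers here are my own, not from the talk), consider an optimizer that maximises exactly the objective it was given, with no notion of when enough is enough:

```python
# A toy "sorcerer's apprentice" optimizer: it maximises the literal objective
# it was handed (water fetched), with no term for stopping once the cauldron is full.
# Purely illustrative; not anyone's actual AI design.

def stated_objective(buckets_fetched: int) -> float:
    """The goal as literally specified: more water is always better."""
    return buckets_fetched

def what_we_actually_wanted(buckets_fetched: int, cauldron_capacity: int = 10) -> float:
    """What was really meant: fill the cauldron, then stop (flooding is bad)."""
    overflow = max(0, buckets_fetched - cauldron_capacity)
    return min(buckets_fetched, cauldron_capacity) - 5 * overflow

# A crude optimizer: consider candidate plans, keep the one that scores
# highest on the stated objective.
candidate_plans = range(0, 1001)
best_plan = max(candidate_plans, key=stated_objective)

print("Plan chosen by the optimizer:", best_plan, "buckets")            # 1000
print("Score on the stated objective:", stated_objective(best_plan))    # 1000
print("Score on what we wanted:", what_we_actually_wanted(best_plan))   # badly negative
```

The stronger the optimizer, the more thoroughly it exploits the gap between what we asked for and what we meant – which is why getting the goal specification right matters so much.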

If we get the design right, the intelligence explosion could aid human goals instead of destroying them.

It’s a tall order!
