dw2

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge into some kind of hybrid “post-human”.

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the case for the emergence of super-AI makes no assumption about the shape of the curve of progress.  It depends only upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s Last Theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”.

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
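
To get a rough feel for the scale Jaan is describing, here is a toy back-of-the-envelope calculation of my own (the 18-month doubling period is an illustrative assumption, not a figure from his slides): if hardware price-performance doubles every 18 months, then each year the algorithm remains unsolved multiplies the hardware waiting for it by roughly 1.6.

    # Toy illustration of how a hardware overhang compounds under a
    # Moore's-Law-style assumption of doubling every 18 months.
    DOUBLING_PERIOD_YEARS = 1.5  # assumed doubling time (illustrative only)

    def overhang_factor(years_of_delay: float) -> float:
        """Multiplier on available hardware after the given delay."""
        return 2 ** (years_of_delay / DOUBLING_PERIOD_YEARS)

    for delay in (5, 10, 15, 20):
        print(f"{delay:2d} years of delay -> "
              f"~{overhang_factor(delay):,.0f}x more hardware")
    # Roughly: 5 years ~10x, 10 years ~100x, 15 years ~1,000x, 20 years ~10,000x.

On that assumption, a couple of decades of algorithmic delay is exactly what turns “just enough hardware” into the several orders of magnitude of surplus that Jaan describes.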

Imagine.  The worst malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering – and how quickly it can spread around the Internet.  Now imagine that kind of malware written by an author who is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another comparison, one I’ve written about before.  It’s a further example of underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry-fuel hydrogen bomb, at Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 megatons.  But when the device was exploded, the yield was 15 megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
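
Written out in standard nuclear notation (my addition, not part of the Wikipedia text), the overlooked lithium-7 reaction and the extra deuterium–tritium fusion it feeds are:

    {}^{7}\mathrm{Li} + n \;\longrightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n'
    \qquad
    {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\longrightarrow\; {}^{4}\mathrm{He} + n\ (14.1\ \mathrm{MeV})

Each lithium-7 nucleus struck by a fast neutron therefore contributes both fresh tritium fuel for fusion and a further neutron, and those extra neutrons go on to fission the uranium tamper – exactly the surplus that the designers’ lithium-6-only calculation left out.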

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

3 Comments

  1. I appreciate this picture.

    Comment by yusuf ado — 16 October 2011 @ 3:45 pm

  2. Jaan’s hardware overhang is not a new factor in the estimate of the critical dates in our march towards the singularity. Kurzweil has already factored it into his estimates. There is actually another computer overhang that may delay our march towards the singularity. At best, we are only using 10 percent of the potential computing power of our desktops. This means that there will be less incentive for the PC makers to keep giving us more power for every $1,000 that we spend. Also, the trend towards smart phones and mobility has drained a significant number of programmers away from developing the software that would use the power of the PC.

    Comment by marcalpv — 28 January 2014 @ 3:44 am

  3. […] futurist David Wood points out, an example of a hardware overhang once occurred when thermonuclear bombs were being tested at the […]

    Pingback by Could a Hardware Overhang Lead To An Intelligence Explosion? - 33rd Square — 21 June 2018 @ 12:21 pm

