dw2

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than anyone expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge into some kind of hybrid “post-human”.

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.
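
To make the reasoning behind that last claim concrete, here is a deliberately crude toy model (my own sketch, not something from Jaan’s slides).  It contrasts a system that is improved at a fixed rate from outside – by human engineers – with a system whose rate of improvement is proportional to its own current capability, which is the situation once software is better than humans at writing software:

    # Toy model (my own illustration, not from Jaan's talk): compare progress when
    # improvements come from a fixed outside source (human engineers) with progress
    # when the system's own capability drives the rate of further improvement.

    def externally_improved(capability, rate, steps):
        """Capability grows by a fixed amount per step (human-driven R&D)."""
        history = [capability]
        for _ in range(steps):
            capability += rate
            history.append(capability)
        return history

    def self_improving(capability, gain, steps):
        """Each step's improvement is proportional to current capability."""
        history = [capability]
        for _ in range(steps):
            capability += gain * capability   # the better it gets, the faster it improves
            history.append(capability)
        return history

    if __name__ == "__main__":
        human_driven = externally_improved(capability=1.0, rate=0.1, steps=50)
        self_driven = self_improving(capability=1.0, gain=0.1, steps=50)
        print(f"After 50 steps: human-driven = {human_driven[-1]:.1f}, "
              f"self-improving = {self_driven[-1]:.1f}")
        # Linear growth reaches 6.0; compounding growth reaches roughly 117.4.

The absolute numbers mean nothing; the point is simply that once the improver itself is being improved, progress compounds, and the curve stops looking anything like the one we have been comfortably hopping onto.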

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.
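
To see how decisive an algorithm can be, independent of hardware, here is a trivial illustration of my own (nothing to do with Wiles or with AI research): two ways of computing the same Fibonacci number, where no plausible hardware upgrade rescues the first approach for large inputs, while the second is effectively instant:

    # A trivial illustration (mine, not from the talk) of how much the algorithm
    # matters: the same answer computed two ways, with wildly different costs.
    import time
    from functools import lru_cache

    def fib_naive(n):
        """Exponential-time recursion: the same subproblems are recomputed endlessly."""
        if n < 2:
            return n
        return fib_naive(n - 1) + fib_naive(n - 2)

    @lru_cache(maxsize=None)
    def fib_memo(n):
        """The same recursion, but each subproblem is solved only once."""
        if n < 2:
            return n
        return fib_memo(n - 1) + fib_memo(n - 2)

    if __name__ == "__main__":
        for fn in (fib_naive, fib_memo):
            start = time.perf_counter()
            result = fn(32)
            print(f"{fn.__name__}(32) = {result}  in {time.perf_counter() - start:.4f}s")

Waiting for faster chips buys a constant factor; a better algorithm changes the shape of the curve – which is exactly why an unforeseen algorithmic breakthrough in AI could matter so much.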

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”.

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
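
To put rough numbers on that overhang, here is a back-of-envelope calculation of my own, assuming hardware price/performance keeps doubling roughly every 18 months and that the first human-level algorithm arrives some years after hardware first becomes sufficient:

    # Back-of-envelope arithmetic (my assumptions: a doubling of hardware
    # price/performance roughly every 18 months, and an algorithm that arrives
    # some years after hardware first reaches the human-level requirement).
    DOUBLING_PERIOD_YEARS = 1.5

    def hardware_overhang(years_late):
        """Factor by which available hardware exceeds the human-level requirement."""
        return 2 ** (years_late / DOUBLING_PERIOD_YEARS)

    if __name__ == "__main__":
        for years in (5, 10, 15, 20):
            print(f"Algorithm {years} years late: roughly {hardware_overhang(years):,.0f}x overhang")
        # 15 years of doublings already gives a factor of about 1,000: the
        # "several orders of magnitude" surplus that Jaan describes.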

Imagine the worst malware created so far – exploiting a combination of security vulnerabilities, other software defects, and social engineering – and how quickly it can spread around the Internet.  Now imagine an author of such malware who is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
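
For readers who like to see the reactions written out, the chain described in that excerpt looks like this (standard nuclear physics notation; these equations are my addition, not part of the Wikipedia text):

    % Lithium-7, assumed inert, actually breeds tritium under fast-neutron bombardment:
    \[ {}^{7}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n \]
    % The extra tritium then fuses with deuterium, releasing a further fast neutron:
    \[ {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + n \]
    % and those fast neutrons cause additional fission in the uranium tamper,
    % multiplying the yield.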

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – out by a factor of thousands rather than 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

2 February 2010

Cutting edge computing science research for the real world

Filed under: architecture, computer science, Google, universities — David Wood @ 11:54 pm

It is an amazing time in computer science.

This is a field that, while it is about 50 years old, has more opportunity today than it has ever had, by a large factor.

These were among the opening statements made by Google’s VP of Research & Special Initiatives, Alfred Z. Spector, in a colloquium a couple of weeks ago to the Computer Science & Engineering faculty of the University of Washington.  A video of the presentation is available from the University of Washington CSE website.

I mentioned this video at the tail end of a previous blogpost, “In praise of hybrid AI”.  The video is full of interesting comments about the direction of computer science.

As context, Spector mentioned “four application areas in flux today”:

  • publishing, education, healthcare, and government.

He also mentioned three “systems areas evolving”:

  • ubiquitous high performance networking, distributed computing, and new end-user devices.

This provided a prelude to “three truly big results brewing”:

  1. Totally transparent processing
  2. Ideal distributed computing
  3. Hybrid, not Artificial Intelligence

It’s worth highlighting some points about each of these “big results”.  In all cases, Google seek to follow a quantitative approach, looking at large sets of data, and checking results as systems are incrementally changed.  As Spector said, “more data is better…”

1. Totally transparent processing

Spector spelt out a vision in which processing should be “effectively transparent” across each of four dimensions:

  • Across all types of end-user access devices,
  • across all human languages (both formal and informal),
  • across all the modes of information (eg text, images, audio, video, sensor data, maps, timelines),
  • and across every body of knowledge (both online and offline).

In this vision:

  • There should be “no dependence or occlusions because something has got in the way” or is in the wrong format;
  • There should be “fluidity across all these forms”.

Some subsets of this grand vision include solving “voice to text”, “image recognition”, “find similar images”, and “language translation”.  Spector claimed that progress was being made across many of these sub-problems.

2. Ideal distributed computing

Spector pointed out that

Distributed computing is 30 years old, but not very deeply understood until recently;

There was a limited understanding of (truly) large-scale, open, integrated distributed systems.

Particular aspects of distributed systems that had not been deeply understood included:

  • Requirements for systems in which the application needs (and APIs) are not known in advance;
  • Systems with 10^6 or even 10^7 processes, with consequent enormous complexity.

Spector claimed that – as in the case of transparent processing – “there has been lots of incremental progress done with distributed systems, picking away at problem areas”.

Improvements that can be expected for huge distributed systems of computers, arising from computer science research, include:

  • Online system optimisation;
  • Data checking – verifying consistency and validating data/config files (a sketch follows this list);
  • Dynamic repair – eg find the closest feasible solution after an incident (computer broke down);
  • Better efficiency in energy usage of these systems;
  • Improvements in managing security and privacy.
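
To make the “data checking” item a little more concrete, here is a toy sketch of my own (not anything Spector described): a node in a large fleet validating a configuration file against a published checksum and a minimal schema before accepting it:

    # Toy sketch (my own, not anything Spector described) of "data checking":
    # validate a config file against a known-good checksum and a minimal schema
    # before a node in a large fleet will accept it.
    import hashlib
    import json

    REQUIRED_FIELDS = {"service_name": str, "replicas": int, "timeout_ms": int}

    def sha256(blob: bytes) -> str:
        return hashlib.sha256(blob).hexdigest()

    def validate_config(blob: bytes, expected_digest: str) -> dict:
        """Reject the config if it is corrupted, malformed, or missing fields."""
        if sha256(blob) != expected_digest:
            raise ValueError("checksum mismatch: config corrupted or tampered with")
        config = json.loads(blob)
        for field, field_type in REQUIRED_FIELDS.items():
            if not isinstance(config.get(field), field_type):
                raise ValueError(f"invalid or missing field: {field}")
        return config

    if __name__ == "__main__":
        good = json.dumps({"service_name": "indexer", "replicas": 200, "timeout_ms": 50}).encode()
        digest = sha256(good)                 # published alongside the config by the release process
        print(validate_config(good, digest))  # accepted
        tampered = good.replace(b"200", b"2")
        try:
            validate_config(tampered, digest)
        except ValueError as err:
            print("rejected:", err)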

3. Hybrid, not Artificial Intelligence

Hybrid intelligence is like an extension of distributed computing: people become part of the system that works out the answers.

Spector said that Google’s approach was:

To see if some problem can be solved by people and computers working together.

As a familiar example, Search doesn’t try to offer the user only the one best result.  It provides a set of results, and relies on the user picking answers from the list generated by the computer.

Hybrid intelligence can be contrasted with AI (artificial intelligence):

  • AI aims at creating computers as capable as people, often in very broad problem domains.  While progress has been made, this has turned out to be very challenging;
  • Instead, it has proven more useful for computers to extend the capabilities of people, working with them rather than in isolation, and to focus on more specific problem areas.

Computer systems can learn from feedback from users, with powerful virtuous circles.  Spector said that aggregation of user responses has proven extremely valuable in learning, such as:

  • feedback in ranking of results, or in prioritising spelling correction options (see the sketch after this list);
  • semi-supervised image content analysis / speech recognition / etc.
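
As a rough sketch of the first bullet above (my own toy example, and nothing like Google’s actual ranking machinery), here is a ranker that blends a base relevance score with aggregated click feedback, so that results users keep choosing rise up the list:

    # Toy sketch (mine, not Google's system): blend a base relevance score with
    # aggregated click feedback so that results users actually choose rise up
    # the list over time.
    from collections import defaultdict

    class FeedbackRanker:
        def __init__(self, click_weight=0.1):
            self.click_weight = click_weight
            self.clicks = defaultdict(int)    # result id -> accumulated clicks

        def record_click(self, result_id):
            """Aggregate user feedback: every click is a weak relevance vote."""
            self.clicks[result_id] += 1

        def rank(self, results):
            """results: list of (result_id, base_score) from the core algorithm."""
            return sorted(
                results,
                key=lambda r: r[1] + self.click_weight * self.clicks[r[0]],
                reverse=True,
            )

    if __name__ == "__main__":
        ranker = FeedbackRanker()
        candidates = [("page-a", 0.90), ("page-b", 0.85), ("page-c", 0.80)]
        print("before feedback:", [rid for rid, _ in ranker.rank(candidates)])
        for _ in range(3):                    # users keep choosing page-b
            ranker.record_click("page-b")
        print("after feedback: ", [rid for rid, _ in ranker.rank(candidates)])

The virtuous circle is the key point: every interaction is a small training signal, and aggregating millions of them lets the computer-plus-people system improve in a way neither could alone.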

(This idea has evolved over time, and was previously known as “The combination hypothesis”: computers would become smarter if different methods of learning can be combined.  See for example the 2003 article “IBM aims to get smart about AI” from a time when Spector worked at IBM.  It’s good to see this idea bearing more and more fruit.)

Selected questions and answers

A couple of the questions raised by the audience at the end of the lecture were particularly interesting.

One questioner asked if Google’s guidelines for research projects specified any “no-go areas” that should be avoided.  Spector answered:

No one wants a creepy computer.  So the rule is … “don’t be creepy”.

(Which is an unusual twist on “don’t be evil”!)

Spelling this out in more detail:

  • Google aim to apply extremely neutral algorithms to ranking and crawling;
  • They want systems that are very responsive to users’ needs, without being in any way creepy;
  • Views on “what is creepy” may change over time (and may be different in different parts of the world).

A second audience member asked if there are risks to pursuing a quantitative, evolutionary approach to computer science problems.  Spector answered:

  • Yes, the research might get stuck in a local maximum;
  • So you can’t do everything “strictly by the numbers”.  But having the numbers available is a great guide.

Ongoing research

As I viewed this video, part of my brain told me that perhaps I should return to an academic life, in the midst of a computer science faculty somewhere in the world.

I share Spector’s conclusion:

It’s a time of unprecedented diversity and fertility in computer science – and amazing challenges abound;

The results from computer science should continue to make the world a better place.

Spector pointed out that key research challenges are published on the Google Research Blog.  Examples he listed included:

  • increasingly fluid partnership between people and computation;
  • fundamental changes in the methods of science;
  • rethinking the database;
  • CS+X, for all X (how Computer Science, CS, can assist and even transform other fields of study, X);
  • computing with ultra-low power (eg just ambient light as a power source).

Stop press: Google’s Focused Research Awards

Coincidentally, I see that Google have today made a new announcement about their support for research in specific areas of computer science, at a small number of universities worldwide.  The four areas of research are:

  • Machine Learning
  • Use of mobile phones as data collection devices for public health and environment monitoring
  • Energy efficiency in computing
  • Privacy.

It looks like “energy efficiency in computing” is receiving the largest amount of funding.  I think that’s probably the right choice!
