dw2

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than you expected, a new breed of artificial intelligence is bearing down on you, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid “post-human”.

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s Last Theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”.

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
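Jaan’s “orders of magnitude” claim is easy to sanity-check.  Here’s a minimal back-of-envelope sketch – using my own illustrative assumption of an 18-month doubling period for hardware capability (one common reading of Moore’s Law), not a figure from the talk – of how the overhang compounds for every year the algorithm stays unsolved:

```python
# Illustrative back-of-envelope for the "hardware overhang" argument.
# Assumption (mine, not from the talk): hardware capability doubles
# every 18 months, a common reading of Moore's Law.

def overhang_factor(years_unsolved, doubling_period_years=1.5):
    """How much faster hardware becomes while the AI algorithm stays unsolved."""
    return 2 ** (years_unsolved / doubling_period_years)

for years in (5, 10, 15, 20):
    print(f"{years:2d} years unsolved -> ~{overhang_factor(years):,.0f}x today's hardware")
```

On these assumptions, a 15-year delay alone yields roughly a thousand-fold overhang – before counting any Internet-wide pool of machines the first AI might co-opt.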

Imagine the worst piece of malware created so far – exploiting a combination of security vulnerabilities, other software defects, and social engineering – and how quickly it can spread around the Internet.  Now imagine that the author of that malware is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.
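The speed of that take-over is worth quantifying.  A minimal sketch of worm propagation – a discrete-time logistic (“susceptible-infected”) model, with every parameter invented purely for illustration – shows why exponential spread saturates even a very large network within a few dozen infection cycles:

```python
# Minimal sketch of worm propagation (all parameters invented for illustration):
# a discrete-time SI ("susceptible-infected") model, where each infected host
# compromises new hosts in proportion to the still-uninfected fraction.

def spread(hosts=1_000_000, seed=1, rate=1.0, steps=30):
    """Return infected counts per step for a simple logistic worm-spread model."""
    infected = seed
    history = [infected]
    for _ in range(steps):
        newly = rate * infected * (1 - infected / hosts)  # logistic growth term
        infected = min(hosts, infected + newly)
        history.append(infected)
    return history

curve = spread()
# Early steps roughly double each cycle; growth slows only when targets run out.
print(f"step 10: {curve[10]:,.0f} infected; step 30: {curve[30]:,.0f} infected")
```

With these made-up numbers, infections double each cycle until around half the network is compromised, then saturate almost immediately – which is the shape real worm outbreaks have repeatedly shown.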

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed its first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 megatons.  But when the device was exploded, the yield was 15 megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
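The size of the discrepancy fits some crude arithmetic – my own back-of-envelope reading, not part of the Wikipedia excerpt: the designers counted only the lithium-6 (40% of the lithium) as fuel, so having the remaining 60% react too multiplies the effective fuel by roughly two and a half times:

```python
# Crude back-of-envelope (my arithmetic, not from the Wikipedia excerpt):
# the designers treated only lithium-6 -- 40% of the lithium -- as fuel.

li6_fraction = 0.40                    # fraction assumed reactive by the designers
fuel_multiplier = 1 / li6_fraction     # factor gained if all the lithium reacts
print(fuel_multiplier)                 # -> 2.5

expected_max_megatons = 6
print(expected_max_megatons * fuel_multiplier)  # -> 15.0, the yield observed
```

This ignores the extra tamper fission the excerpt describes, so it is only suggestive – but it shows how a single “inert” assumption can scale an outcome by multiples.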

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than by a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

7 May 2011

Workers beware: the robots are coming

Filed under: books, challenge, disruption, Economics, futurist, robots — David Wood @ 9:07 pm

What’s your reaction to the suggestion that, at some stage in the next 10-30 years, you will lose your job to a robot?

Here, by the word “robot”, I’m using shorthand for “automation” – a mixture of improvements in hardware and software. The suggestion is that automation will continue to improve until it reaches the stage when it is cheaper for your employer to use computers and/or robots to do your job, than it is to continue employing you. This change has happened in the past with all manner of manual and/or repetitive work. Could it happen to you?

People typically have one of three reactions to this suggestion:

  1. “My job is too complex, too difficult, too human-intense, etc, for a robot to be able to do it in the foreseeable future. I don’t need to worry.”
  2. “My present job may indeed be outsourced to robots, but over the same time period, new kinds of job will be created, and I’ll be able to do one of these instead. I don’t need to worry.”
  3. “When the time comes that robots can do all the kinds of work that I can do, better than me, we’ll be living in an economy of plenty. I won’t actually need to work – I’ll be happy to enjoy lots more leisure time. I don’t need to worry.”

Don’t need to worry? Think again. That’s effectively the message in Martin Ford’s 2009 book “The Lights in the Tunnel”. (If you haven’t heard of that book, perhaps it’s because the title is a touch obscure. After all, who wants to read about “lights in a tunnel”?)

The subtitle gives a better flavour of the content: “Automation, accelerating technology, and the economy of the future”. And right at the top of the front cover, there’s yet another subtitle: “A journey to the economic landscape of the coming decades”. But neither of these subtitles conveys the challenge which the book actually addresses. This is a book that points out real problems with increasing automation:

  • Automation will cause increasing numbers of people to lose their current jobs
  • Accelerating automation will mean that robots can quickly become able to do more jobs – their ability to improve and learn will far outpace that of human workers – so the proportion of people who are unemployed will grow and grow
  • Without proper employment, a large proportion of consumers will be deprived of income, and will therefore lack the spending power which is necessary for the continuing vibrancy of the economy
  • Even as technology improves, the economy will stagnate, with disastrous consequences
  • This is likely to happen long before technologies such as nanotech have reached their full potential – so that any ideas of us existing at that time in an economy of plenty are flawed.

Although the author could have chosen a better title for his book, the contents are well argued, and easy to read. They deserve a much wider hearing.  They underscore the important theme that the process of ongoing technological improvement is far from being an inevitable positive.

There are essentially two core threads to the book:

  • A statement of the problem – this effectively highlights issues with each of the reactions 1-3 listed earlier;
  • Some tentative ideas for a possible solution.

The book looks backwards in history, as well as forwards to the future. For example, it includes interesting short commentaries on both Marx and Keynes. One of the most significant backward glances considers the case of the Luddites – the early 19th century manufacturing workers in the UK who feared that their livelihoods would be displaced by factory automation. Doesn’t history show us that such fears are groundless? Didn’t the Luddites (and their descendants) in due course find new kinds of employment? Didn’t automation create new kinds of work, at the same time as it destroyed some existing kinds of work? And won’t that continue to happen?

Well, it’s a matter of pace.  One of the most striking pictures in the book is a rough sketch of the variation over time of the comparative ability of computers and humans to perform routine jobs.

As Martin Ford explains:

I’ve chosen an arbitrary point on the graph to indicate the year 1812. After that year, we can reasonably assume that human capability continued to rise quite steeply until we reach modern times. The steep part of the graph reflects dramatic improvements to our overall living conditions in the world’s more advanced countries:

  • Vastly improved nutrition, public health, and environmental regulations have allowed us to remain relatively free from disease and reach our full biological potential
  • Investment in literacy and in primary and secondary education, as well as access to college and advanced education for some workers, has greatly increased overall capability
  • A generally richer and more varied existence, including easy access to books, media, new technologies and the ability to travel long distances, has probably had a positive impact on our ability to comprehend and deal with complex issues.

A free download of the entire book is available from the author’s website.  I’ll leave it to you to evaluate the author’s arguments for why the two curves in this sketch have the shape that they do.  To my mind, these arguments have a lot of merit.

The point where these two curves cross – potentially a few decades into the future – will represent a new kind of transition point for the economy – perhaps the mother of all economic disruptions.  Yes, there will still be some new jobs created.  Indeed, in a blogpost last year, “Accelerating automation and the future of work“, I listed 20 new occupations that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

However, the lifetimes of these jobs (before they too can be handled by improved robots) will shrink and shrink.  For a less esoteric example, consider the likely fate of a relatively new profession, radiology.  As Martin Ford explains:

A radiologist is a medical doctor who specializes in interpreting images generated by various medical scanning technologies. Before the advent of modern computer technology, radiologists focused exclusively on X-rays. This has now been expanded to include all types of medical imaging, including CT scans, PET scans, mammograms, etc.

To become a radiologist you need to attend college for four years, and then medical school for another four. That is followed by another five years of internship and residency, and often even more specialized training after that. Radiology is one of the most popular specialties for newly minted doctors because it offers relatively high pay and regular work hours; radiologists generally don’t need to work weekends or handle emergencies.

In spite of the radiologist’s training requirement of at least thirteen additional years beyond high school, it is conceptually quite easy to envision this job being automated. The primary focus of the job is to analyze and evaluate visual images. Furthermore, the parameters of each image are highly defined since they are often coming directly from a computerized scanning device. Visual pattern recognition software is a rapidly developing field that has already produced significant results…

Radiology is already subject to significant offshoring to India and other places. It is a simple matter to transmit digital scans to an overseas location for analysis. Indian doctors earn as little as 10 percent of what American radiologists are paid… Automation will often come rapidly on the heels of offshoring, especially if the job focuses purely on technical analysis with little need for human interaction. Currently, U.S. demand for radiologists continues to expand because of the increase in use of diagnostic scans such as mammograms. However, this seems likely to slow as automation and offshoring advance and become bigger players in the future. The graduating medical students who are now rushing into radiology for its high pay and relative freedom from the annoyances of dealing with actual patients may eventually come to question the wisdom of their decision.

Radiologists are far from being the only “high-skill” occupation that is under risk from this trend.  Jobs which involve a high degree of “expert system” knowledge will come under threat from increasingly expert AI systems.  Jobs which involve listening to human speech will come under threat from increasingly accurate voice recognition systems.  And so on.

This leaves two questions:

  1. Can we look forward, as some singularitarians and radical futurists assert, to incorporating increasing technological smarts within our own human nature, allowing us in a sense to merge with the robots of the future?  In that case, a scenario of “the robots will take all our jobs” might change to “substantially enhanced humans will undertake new types of work”
  2. Alternatively, if robots do much more of the work needed within society, how will the transition be handled, to a society in which humans have much more leisure time?

I’ll return to the first of these questions in a subsequent blogpost.  Martin Ford’s book has a lot to say about the second of these questions.  And he recommends a series of ideas for consideration:

  • Without large numbers of well-paid consumers able to purchase goods, the global economy risks going into decline, at the same time as technology has radically improved
  • With fewer people working, there will be much less income tax available to governments.  Taxation will need to switch towards corporation tax and consumption taxes
  • With more people receiving handouts from the state, there’s a risk of losing many aspects of economic structure which have previously been thought essential
  • We need to give more thought, now, to ideas for differential state subsidy of different kinds of non-work activity – to incentivise certain kinds of activity.  That way, we’ll be ready for the increasing disturbances placed on our economy by the rise of the robots.

For further coverage of these and related ideas, see Martin Ford’s blog on the subject, http://econfuture.wordpress.com/.
