10 October 2015

Technological unemployment – Why it’s different this time

On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants became embroiled in questions such as:

  • To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
  • To what extent should humans be anxious about this process?

The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:


“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed as their roles are usurped by these new-fangled entities.

Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.

I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general-purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms developed with one set of skills in mind turn out to transfer, reasonably straightforwardly, to other sets of skills as well.

The master algorithm

That argument is spelt out in the recent book “The master algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.


The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:

  • The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
  • The universal learning machine accepts a set of corresponding input and output data, and makes the best possible job of inferring the algorithm that would obtain the outputs from the inputs.

For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
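The contrast can be made concrete with a toy sketch. The hidden rule (y = 3x + 2) and the gradient-descent learner below are my own illustration, not taken from Domingos’ book; the point is simply that the program is never told the rule, only shown matching inputs and outputs:

```python
# A toy "learning machine": instead of being given the algorithm, it is
# given matching inputs and outputs, and infers the rule connecting them.
# The hidden rule here (y = 3x + 2) and the learner (gradient descent on
# a linear model) are deliberately simple, for illustration only.

def learn_linear_rule(examples, steps=5000, lr=0.01):
    """Infer parameters (a, b) of y = a*x + b from (input, output) pairs."""
    a, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        # Gradients of the mean squared error with respect to a and b
        grad_a = sum(2 * (a * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in examples) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# Training data produced by the hidden rule y = 3x + 2
data = [(x, 3 * x + 2) for x in range(-5, 6)]
a, b = learn_linear_rule(data)
print(round(a, 2), round(b, 2))  # inferred rule: approximately 3.0 and 2.0
```

Translation and cancer treatment are, of course, vastly harder versions of the same pattern: infer the mapping from examples, then apply it to unseen inputs.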

As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:

  • “Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
  • “Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
  • “Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
  • “Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
  • “Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.

(See slide 6 of this Slideshare presentation. Indeed, take the time to view the full presentation. Better still, read Domingos’ entire book.)
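To give a flavour of one of these tribes, here is a minimal sketch of the Bayesians’ core idea, probabilistic inference: a prior belief is updated by evidence via Bayes’ rule. The diagnostic-test numbers are invented for the example:

```python
# A minimal sketch of probabilistic inference (the Bayesians' core idea):
# Bayes' rule updates a prior belief in a hypothesis given new evidence.
# The test sensitivity, false-positive rate, and prevalence are invented.

def bayes_update(prior, likelihood, false_positive_rate):
    """P(hypothesis | evidence) via Bayes' rule."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Example: a 99%-sensitive test with a 5% false-positive rate,
# for a condition with 1% prevalence.
posterior = bayes_update(prior=0.01, likelihood=0.99, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.167
```

Even with a positive result from a seemingly accurate test, the inferred probability stays modest, because the prior was low; that interplay of prior and evidence is what the Bayesians propose as the heart of learning.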

What’s likely to happen over the next decade or two is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made in physics as the fundamental forces of nature have gradually been unified into a single theory.

And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.

Incidentally, Domingos has set out some ambitious goals for what his book will accomplish:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.

Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.

People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.

Rise of the robots

There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation.

That’s one of the arguments made by perhaps the best writer so far on technological unemployment, Martin Ford. Ford’s recent book “Rise of the Robots: Technology and the Threat of a Jobless Future” builds ably on what previous writers have said.


Here’s a sample of review comments about Ford’s book:

Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument.
Los Angeles Times

If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent.
Financial Times, Summer books 2015, Business, Andrew Hill

[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right.
Daily Beast

Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines.
Fast Company

Well-researched and disturbingly persuasive.
Financial Times

Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future.
—Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think

Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read.
—Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes

If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.

I summarise the takeaway message in this picture, taken from one of my Delta Wisdom workshop presentations:

Tech unemployment curves

  • Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
  • However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
  • Coupled with the general purpose nature of these capabilities, it means that, conceivably, from some time around 2040, very few humans will be able to find paid work.

A worked example: a site carpenter

During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?

One suggestion that came back was “site carpenter”. In this thinking, unfinished buildings are too complex and too difficult for robots to navigate. Robots that try to make their way through these buildings to tackle carpentry tasks will likely fall down. Or, assuming they don’t fall down, how will they cope with discovering that the reality in the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be. Or have different tolerances. Or alternatives have been used. And so on. Such environments, the thinking goes, are too messy for robots to handle.

My answer is as follows. Yes, present-day robots often do fall down. Critics seem to find this hilarious. But this is much like the fact that young children often fall down while learning to walk, or that novice skateboarders often fall down while still unfamiliar with that mode of transport. Robots, however, will learn fast. One example is shown in this video, of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):

As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically, the whole time. When software encounters information at variance from what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses, and, when needed, checking back with management for confirmation.
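As a toy illustration of that point, software can compare a specification with measured reality and decide whether to proceed, re-plan around the measured value, or check back for confirmation. The task names, dimensions, and tolerances below are hypothetical, not drawn from any real robotics system:

```python
# Hypothetical sketch: planning software that adjusts when measured
# reality diverges from the specification. All names, dimensions, and
# tolerances are invented for illustration.

SPEC = {"door_frame_width_mm": 838, "stud_spacing_mm": 400}
TOLERANCE_MM = 5

def plan_task(measured):
    """Decide, per item, whether to proceed, re-plan, or escalate."""
    actions = []
    for item, expected in SPEC.items():
        deviation = abs(measured[item] - expected)
        if deviation <= TOLERANCE_MM:
            actions.append((item, "proceed as specified"))
        elif deviation <= 4 * TOLERANCE_MM:
            # Form a new hypothesis: work to the measured value instead
            actions.append((item, "re-plan using measured value"))
        else:
            # Too far from spec: check back with management for confirmation
            actions.append((item, "escalate for human confirmation"))
    return actions

measurements = {"door_frame_width_mm": 845, "stud_spacing_mm": 480}
for item, decision in plan_task(measurements):
    print(item, "->", decision)
```

Real systems would reason probabilistically rather than with fixed thresholds, but the structure is the same: expectation, observation, and a policy for responding to the gap between them.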

The question is a reminder to me that the software and AI community needs to do a much better job of communicating the current capabilities of the field, and the likely improvements ahead.

What does it mean to be human?

For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:

  • Should these changes be welcomed, rather than feared?
  • What will these forthcoming changes imply for our conception of what it means to be human?

To my mind, technological unemployment will force us to rethink some of the fundamentals of the “protestant work ethic” that permeates society. That ethic has played a decisive positive role for the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.

If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.

Some of these topics were aired at a conference in New York City on 29th September: “The World Summit on Technological Unemployment”, that was run by Jim Clark’s World Technology Network.

Robotic Steel Workers

One of the many speakers at that conference, Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:

We are not facing a future without work. We are facing a future without jobs.

There is a huge difference between the two, and we must start seeing the difference, and making the difference more clear to each other.

In his blogpost “Jobs, Work, and Universal Basic Income”, Santens continues the argument as follows:

When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.

Put another way, extrinsically motivated work is work done before or after an expected payment. It’s an exchange. Intrinsically motivated work is work only made possible by sufficient access to money. It’s a gift.

The difference between these two forms of work cannot be overstated…

Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…

What enables this transformation would be some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.

What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy who chaired the event:

The RSA will be advocating support for Basic Income… in response to Technological Unemployment.

(This comment comes about 2/3 of the way through the podcast.)

To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But these problems of transition are a far better set of problems to have than dealing with the consequences of vastly increased unemployment and social alienation.

Life is being redefined

Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.

Janna Anderson keynote

You can find Anderson’s article here. It starts as follows:

The Robot Takeover is Already Here

The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.

It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots —’black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.

The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend a lot longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.

By way of summary, I’ll pull out a few sentences from the middle of the article:

One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…

It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…

[Another] thing is certain: Life is being redefined.

Who controls the robots?

Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action.

That’s a theme I’ll be addressing next Sunday, 18th October, at a lunchtime session of the 2015 Battle of Ideas. The session is entitled “Man vs machine: Who controls the robots”.


Here’s how the session is described:

From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.

On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…

Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?

As you can imagine, I’ll be taking serious issue with the above claim, from the session description, that progress with robots will “remain relatively modest”. However, I’ll also be arguing for a strong focus on questions of control.

It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?

In line with my recent essay “Four political futures: which will you choose?”, I’ll be arguing for a technoprogressive approach to control, rather than a technolibertarian one.

Four futures

I wait with interest to find out how much this viewpoint will be shared by the other speakers at the session.

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” tracks down and shoots the AGI researcher played by Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines between 1978 and 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000-word essay, Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic license may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential of stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.


Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack expertise in AGI themselves. They may be experts in black hole physics (Hawking), in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insight into the likely course of development of AGI. Therefore, these critics say, we shouldn’t pay particular attention to their warnings.

A second criticism is that it’s premature to worry about the advent of AGI. AGI is still situated far into the future. In this view, as stated by Demis Hassabis, founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter. They’re no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all three of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people read that book. It manages to bring a great many serious arguments to the table, while entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. The same is true of Stephen Hawking and Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider as just one example Stuart Russell, a computer-science professor at the University of California, Berkeley, and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry:

Steve Wozniak, co-founder of Apple, put his worries as follows, in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the criticism about timescales. The first is to point out that Demis Hassabis himself sees no reason for complacency, despite his view that AGI may be “many decades” away from becoming a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

  • N <= 5: No way
  • 5 < N <= 10: Small possibility
  • 10 < N <= 20: > 50%

In other words, in his view, there’s a greater than 50% chance that human-level artificial general intelligence will be achieved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
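To get a feel for what those three data points imply for intermediate years, here’s a minimal sketch (my own illustration, not anything from Bostrom’s surveys): treat the survey averages as points on a cumulative distribution, and linearly interpolate between them.

```python
# Illustrative only: linear interpolation between the survey's three
# percentile estimates (2022: 10%, 2040: 50%, 2075: 90%) to sketch a
# rough cumulative probability of human-level AGI by a given year.
def agi_probability(year):
    """Rough interpolated probability that human-level AGI has arrived by `year`."""
    points = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]  # survey percentiles
    if year <= points[0][0]:
        return points[0][1]
    if year >= points[-1][0]:
        return points[-1][1]
    for (y0, p0), (y1, p1) in zip(points, points[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(round(agi_probability(2025), 2))  # → 0.17, i.e. roughly 17% within ten years of 2015
```

Linear interpolation is of course a crude assumption; the point is only that even a modest reading of the survey places a non-trivial probability on AGI arriving within a couple of decades.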

If we take this survey at face value, there’s at least a 10% chance of breakthrough developments within the next ten years. So it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” that galvanise society into supporting AGI research much more fully (especially once AI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

Physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down, believing instead the larger, more accurate figure for the circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to support his venture. Fortunately for Columbus, a large continent lay en route to Asia, allowing him to make landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe, with the transmission of European diseases compounding the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) has four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

3 July 2013

Preparing for driverless vehicles

Filed under: driverless vehicles, futurist, Humanity Plus, robots, safety, sensors, vision, Volvo — David Wood @ 10:56 am

It’s not just Google that is working on autonomous, self-driving cars. Take a look at this recent Autoblog video showing technology under development by Swedish manufacturer Volvo:

This represents another key step in the incorporation of smart wireless technology into motor vehicles.

Smart wireless technology already has the potential to reduce the number of lives lost in road accidents. A memo last month from the EU commission describes the potential effect of full adoption of the 112 eCall system inside cars:

The 112 eCall automatically dials Europe’s single emergency number 112 in the event of a serious accident and communicates the vehicle’s location to the emergency services. This call to 112, made either automatically by means of the activation of in-vehicle sensors or manually, carries a standardised set of data (containing notably the type and the location of the vehicle) and establishes an audio channel between the vehicle and the most appropriate emergency call centre via public mobile networks.

Using a built-in acceleration sensor, the system detects when a crash has occurred, and how serious it is likely to be. For example, it can detect whether the car has rolled over onto its roof. Then it transmits the information via a built-in wireless SIM. As the EU commission memo explains:

  • In 2012 around 28,000 people were killed and more than 1.5 million injured in 1.1 million traffic accidents on EU roads.
  • Only around 0.7% of vehicles are currently equipped with private eCall systems in the EU, with numbers barely rising. These proprietary systems do not offer EU-wide interoperability or continuity.
  • In addition to the tragedy of loss of life and injury, this also carries an economic burden of around EUR 130 billion in costs to society every year.
  • 112 eCall can speed up emergency response times by 40% in urban areas and 50% in the countryside. Fully deployed, it can save up to 2500 lives a year and alleviate severity of road injuries. In addition, thanks to improved accident management, it is expected to reduce congestion costs caused by traffic accidents.

That’s 9% fewer fatalities, as a result of emergency assistance being contacted more quickly.
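That 9% figure is easy to check against the Commission’s own numbers quoted above, as this quick calculation shows:

```python
# Sanity check of the "9% fewer fatalities" claim, using the EU memo's figures.
lives_saved_per_year = 2500   # upper estimate for fully deployed 112 eCall
annual_road_deaths = 28000    # EU road fatalities in 2012

reduction = lives_saved_per_year / annual_road_deaths
print(f"{reduction:.1%}")  # → 8.9%, i.e. roughly 9%
```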

But what if the number of accidents could themselves be significantly reduced? Here it’s important to know the predominant factors behind road accidents. A landmark investigation of 700,000 road accidents in the UK over 2005-2009 produced some surprising statistics. As reported by David Williams in the Daily Telegraph,

Vehicle defects are a factor in only 2.8 per cent of fatals, with tyres mostly to blame (1.5 per cent) followed by dodgy brakes (0.7 per cent).

The overriding message? It’s not your car or the “road conditions” that are most likely to kill you. It’s your own driving.

In more detail:

The biggest cause of road accidents in the UK today? The statistics are quite clear on this and it’s “driver error or reaction”. It’s listed by police as a factor in more than 65 per cent of fatal crashes and the heading covers a multitude of driving sins, many of which you’re probably on first-name terms with. Topping the charge sheet is failing to look properly (the Smidsy factor – “Sorry mate, I didn’t see you”, relevant in 20.5 per cent of fatals involving driver error), followed by “loss of control” (34 per cent) which, says Greig, often means leaving yourself with “nowhere to go” after entering a bend or other situation too quickly. Other errors include “poor turn or manoeuvre” (12 per cent) and “failed to judge other person’s path or speed” (11.6 per cent).

Second biggest cause of fatal accidents, to blame for 31 per cent, is the “injudicious action”, an umbrella term for “travelled too fast for the conditions” (15.9 per cent of those labelled injudicious), “exceeded speed limit” (13.9 per cent) or “disobeyed give-way or stop sign” (2.1 per cent).

Third culprit in the daily gamble on who lives and who dies is “behaviour or inexperience” (28 per cent), which covers faults such as “careless, reckless or in a hurry” (17 per cent), “aggressive driving” (8.3 per cent) and “learner/inexperienced” (5.3 per cent).

The fourth main category is “impairment or distraction” (to blame for 19.6 per cent of fatal accidents) covering “alcohol” (a factor in 9.6 per cent of fatal accidents) and “distraction in vehicle” (2.6 per cent).

(The numbers add up to more than 100% because accidents are often attributed to more than one factor.)

These statistics give strength to the remark by Eric Schmidt, Executive Chairman of Google:

Your car should drive itself. It’s amazing to me that we let humans drive cars. It’s a bug that cars were invented before computers.

This suggestion commonly gives rise to three objections:

  1. The technology will never become good enough
  2. Even if the raw technology inside cars becomes better and better, there will need to be lots of changes in roadways, which will take a very long time to achieve
  3. Even if the technology did become good enough, legal systems will never catch up. Who’s going to accept liability for crashes caused by bugs in software?

The first objection is heard less often these days. As noted in 2011 in the New York Times by Erik Brynjolfsson and Andrew P. McAfee of the M.I.T. Center for Digital Business, authors of the book Race Against the Machine,

In 2004, two leading economists, Frank Levy and Richard J. Murnane, published “The New Division of Labor,” which analyzed the capabilities of computers and human workers. Truck driving was cited as an example of the kind of work computers could not handle, recognizing and reacting to moving objects in real time.

But last fall, Google announced that its robot-driven cars had logged thousands of miles on American roads with only an occasional assist from human back-seat drivers. The Google cars are but one sign of the times.

The third objection will surely fall away soon too. There are already mechanisms whereby some degree of liability can be accepted by car manufacturers, in cases where software defects (for example, in braking and accelerating systems) contribute to accidents. Some examples are covered in the CNN Money review “Toyota to pay $1.1 billion in recall case”.

Another reason the third objection will fall away is that the costs of not changing – that is, of sticking with human drivers – may be much larger than the costs of adopting driverless vehicles. So long as we continue to allow humans to drive cars, there will continue to be driver-induced accidents, with all the physical and social trauma that ensues.

That still leaves the second objection: the other changes in the environment that will need to take place, before driverless vehicles can be adopted more widely. And what other changes will take place, possibly unexpectedly, once driverless cars are indeed adopted?

That’s one of the topics that will be covered in this Saturday’s London Futurists event: The future of transport: Preparing for driverless vehicles? With Nathan Koren.

As explained by the speaker at the event, Nathan Koren,

The robots have arrived. Driverless transport pods are now in operation at Heathrow Terminal 5 and several other locations around the world. Driver-assist technologies are becoming commonplace. Many believe that fully driverless cars will be commercially available before the decade is out. But what will the broader impact of driverless transport be?

Automobiles were once called “horseless carriages,” as though the lack of a horse was their most important feature. In reality, they changed the way we work, live, and play; changed the way we design cities; and altered the global economy, political landscape, and climate.

It will be the same with driverless vehicles: we can expect their impact to go far beyond simply being able to take our hands off the wheel.

This presentation and discussion goes into depth about how automated transport will affect our lives and reshape the world’s cities.

Nathan is a London-based, American-born architect, transport planner, and entrepreneur. He is widely recognised as a leading authority on Automated Transit Networks, and designed what is scheduled to become the world’s first urban-scale system, in Amritsar, India. He works as a Transport Technology & Planning Consultant for Capita Symonds, and recently founded Podaris, a cloud-based platform for the collaborative design of Automated Transit Networks. Nathan holds an Architecture degree from Arizona State University, and an MBA from the University of Oxford.

I hope to see some readers of this blog, who are based in or near London, at the meeting this Saturday. It’s an important topic!

For additional background inspiration, I recommend the three short videos in the article “The future of travel: Transportation confronts its ‘Kodak moment'”. (Thanks to Nathan for drawing this article to my attention.)

Speakers in these videos talk about the industries that are liable to radical disruption (and perhaps irrelevance) due to the rise of collision-proof driverless vehicles. The airbag industry is one; car collision insurance might be another. I’m sure you can think of more.

13 June 2013

Previewing Global Future 2045

Filed under: futurist, GF2045, robots — David Wood @ 4:32 am

The website for this weekend’s Global Future 2045 international congress has the following bold headline:

Towards a new strategy for human evolution


By many measures, the event is poised to be a breakthrough gathering: check the list of eminent speakers and the provocative list of topics to be addressed.

The congress is scheduled to start at 9am on Saturday morning. However, I’ve been chatting with some of the attendees, and we’ve agreed we’ll meet the previous evening, to help kick-start the conversation.

The venue we’ve agreed is Connolly’s Pub and Restaurant. Note that there are several different buildings: we’ll be in the one at 121 W 45th St, from 6.30pm onwards.

Anyone who is in New York to attend the congress is welcome to join us. To find us inside the building:

  • Look for a table with a futurist book on it (“Abundance” by Peter Diamandis)
  • Alternatively, ring my temporary US mobile number, 1 347-562-3920, or that of Chris Smedley, 1 773-432-5712.

There’s no fixed agenda. However, here are several topics that people might want to discuss:

  1. GF2045 foresees the potential future merger of humans and robots (“avatars”). How credible is this vision?
  2. What hard questions are people inclined to ask of the speakers at the event?
  3. Some speakers at the conference believe that mind is deeply linked to quantum effects or other irreducible processes. Will progress with technology and/or philosophy ever resolve these questions?
  4. Speakers at GF2045 include religious and spiritual leaders. Was that a good decision?
  5. What should we and can we do, as interested global citizens, to help support the positive goals of the GF2045 project?
  6. GF2045 took place in Moscow in 2012 and in New York in 2013. Where should it be held in 2014?

I’m open to other suggestions!



I’ll also be involved in a couple of post-GF2045 review meetings:

If you’d like to attend either of these reviews, please click on the corresponding link above and register.

16 June 2012

Beyond future shock

Filed under: alienation, books, change, chaos, futurist, Humanity Plus, rejuveneering, robots, Singularity, UKH+ — David Wood @ 3:10 pm

They predicted the “electronic frontier” of the Internet, Prozac, YouTube, cloning, home-schooling, the self-induced paralysis of too many choices, instant celebrities, and the end of blue-collar manufacturing. Not bad for 1970.

That’s the summary, with the benefit of four decades of hindsight, given by Fast Company writer Greg Lindsay, of the forecasts made in the 1970 bestseller “Future Shock” by husband-and-wife authors Alvin and Heidi Toffler.

As Lindsay comments,

Published in 1970, Future Shock made its author Alvin Toffler – a former student radical, welder, newspaper reporter and Fortune editor – a household name. Written with his wife (and uncredited co-author), Heidi Toffler, the book was The World Is Flat of its day, selling 6 million copies and single-handedly inventing futurism…

“Future shock is the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time”, the pair wrote.

And quoting Deborah Westphal, the managing partner of Toffler Associates, in an interview at an event marking the 40th anniversary of the publication of Future Shock, Lindsay notes the following:

In Future Shock, the Tofflers hammered home the point that technology, culture, and even life itself was evolving too fast for governments, policy-makers and regulators to keep up. Forty years on, that message hasn’t changed. “The government needs to understand the dependencies and the convergence of networks through information,” says Westphal. “And there still needs to be some studies done around rates of change and the synchronization of these systems. Business, government, and organizational structures need to be looked at and redone. We’ve built much of the world economy on an industrial model, and that model doesn’t work in an information-centric society. That’s probably the greatest challenge we still face – understanding the old rules don’t apply for the future.”

Earlier this week, another book was published that also draws on Future Shock for inspiration. Again, the authors are a husband-and-wife team, Parag and Ayesha Khanna. And again, the book looks set to redefine key aspects of the futurist endeavour.

This new book is entitled “Hybrid Reality: Thriving in the Emerging Human-Technology Civilization”. The Khannas refer early on to the insights expressed by the Tofflers in Future Shock:

The Tofflers’ most fundamental insight was that the pace of change has become as important as the content of change… The term Future Shock was thus meant to capture our intense anxiety in the face of technology’s seeming ability to accelerate time. In this sense, technology’s true impact isn’t just physical or economic, but social and psychological as well.

One simple but important example follows:

Technologies such as mobile phones can make us feel empowered, but also make us vulnerable to new pathologies like nomophobia – the fear of being away from one’s mobile phone. Fifty-eight percent of millennials would rather give up their sense of smell than their mobile phone.

As befits the theme of speed, the book is a fast read. I downloaded it onto my Kindle on the day of its publication, and have already read it all the way through twice. It’s short, but condensed. The text contains many striking turns of phrase, loaded with several layers of meaning, which repay rereading. That’s the best kind of sound-bite.

Despite its short length, there are too many big themes in the book for me to properly summarise them here. The book portrays an optimistic vision, alongside a series of challenges and risks. As illustrations, let me pick out a selection of phrases, to convey some of the flavour:

The cross-pollination of leading-edge sectors such as information technology, biotechnology, pervasive computing, robotics, neuroscience, and nanotechnology spells the end of certain turf wars over nomenclature. It is neither the “Bio Age” nor the “Nano Age” nor the “Neuro Age”, but the hybrid of all of these at the same time…

Our own relationship to technology is moving beyond the instrumental to the existential. There is an accelerating centripetal dance between what technologies are doing outside us and inside us. Externally, technology no longer simply processes our instructions on a one-way street. Instead, it increasingly provides intelligent feedback. Internally, we are moving beyond using technology only to dominate nature towards making ourselves the template for technology, integrating technologies within ourselves physically. We don’t just use technology; we absorb it

The Hybrid Age is the transition period between the Information Age and the moment of Singularity (when machines surpass human intelligence) that inventor Ray Kurzweil estimates we may reach by 2040 (perhaps sooner). The Hybrid Age is a liminal phase in which we cross the threshold toward a new mode of arranging global society…

You may continue to live your life without understanding the implications of the still-distant Singularity, but you should not underestimate how quickly we are accelerating into the Hybrid Age – nor delay in managing this transition yourself

The dominant paradigm to explain global change in the Hybrid Age will be geotechnology. Technology’s role in shaping and reshaping the prevailing order, and accelerating change between orders, forces us to rethink the intellectual hegemony of geopolitics and geoeconomics…

It is geotechnology that is the underlying driver of both: Mastery in the leading technology sectors of any era determines who leads in geoeconomics and dominates in geopolitics…

The shift towards a geotechnology paradigm forces us to jettison centuries of foundational assumptions of geopolitics. The first is our view on scale: “Bigger is better” is no longer necessarily true. Size can be as much a liability as an asset…

We live and die by our Technik, the capacity to harness emerging technologies to improve our circumstances…

We will increasingly differentiate societies on the basis not of their regime type or income, but of their capacity to harness technology. Societies that continuously upgrade their Technik will thrive…

Meeting the grand challenge of improving equity on a crowded planet requires spreading Technik more than it requires spreading democracy

And there’s lots more, applying the above themes to education, healthcare, “better than new” prosthetics, longevity and rejuvenation, 3D printing, digital currencies, personal entrepreneurship and workforce transformation, the diffusion of authority, the rise of smart cities and their empowered “city-zens”, augmented reality and enhanced personal avatars, robots and “avoiding robopocalypse”, and the prospect for a forthcoming “Pax Technologica”.

It makes me breathless just remembering all these themes – and how they time and again circle back on each other.

Footnote: Readers who are in the vicinity of London next Saturday (23rd June) are encouraged to attend the London Futurist / Humanity+ UK event “Hybrid Reality, with Ayesha Khanna”. Click on the links for more information.

29 July 2011

Towards a mind-stretching weekend in New York

Filed under: AGI, futurist, leadership, nanotechnology, robots, Singularity — David Wood @ 9:19 pm

I’ve attended the annual Singularity Summit twice before – in 2008 and in 2009.  I’ve just registered to attend the 2011 event, which is taking place in New York on 15th-16th October.  Here’s why.

On both previous occasions, the summits featured presentations that gave me a great deal to think about, on arguably some of the most significant topics in human history.  These topics include the potential emergence, within the lifetimes of many people alive today, of:

  • Artificial intelligence which far exceeds the capabilities of even the smartest group of humans
  • Robots which far exceed the dexterity, balance, speed, strength, and sensory powers of even the best human athletes, sportspeople, or soldiers
  • Super-small nanobots which can enter the human body and effect far more thorough repairs and enhancements – to both body and mind – than even the best current medical techniques.

True, at the previous events, there were some poor presentations too – which is probably inevitable given the risky cutting-edge nature of the topics being covered.  But the better presentations far outweighed the worse ones.

And as well as the presentations, I greatly enjoyed the networking with the unusual mix of attendees – people who had taken the time to explore many of the fascinating hinterlands of modern technology trends.  If someone is open-minded enough to give serious thought to the ideas listed above, they’re often open-minded enough to entertain lots of other unconventional ideas too.  I frequently found myself in disagreement with these attendees, but the debate was deeply refreshing.

Take a look at the list of confirmed speakers so far: which of these people would you most like to bounce ideas off?

The summit registration page is now open.  As I type these words, that page states that the cost of tickets is going to increase after 31 July.  That’s an argument for registering sooner rather than later.

To provide more information, here’s a copy of the press release for the event:

Singularity Summit 2011 in New York City to Explore Watson Victory in Jeopardy

New York, NY – This October 15-16th in New York City, a TED-style conference gathering innovators from science, industry, and the public will discuss IBM’s ‘Watson’ computer and other exciting developments in emerging technologies. Keynote speakers at Singularity Summit 2011 include Jeopardy! champion Ken Jennings and famed futurist and inventor Ray Kurzweil. After losing to an IBM computer in Jeopardy!, Jennings wrote, “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

In February, Watson defeated two human champions in Jeopardy!, the game show famous for its mind-bending trivia questions. Surprising millions of TV viewers, Watson took down champions Ken Jennings and Brad Rutter for the $1 million first prize. Facing defeat on the final show, competitor Ken Jennings jokingly wrote in parentheses on his last answer: “I for one welcome our new computer overlords.” Besides Watson, the Singularity Summit 2011 will feature speakers on robotics, nanotechnology, biotechnology, futurism, and other cutting-edge technologies, and is the only conference to focus on the technological Singularity.

Responding to Watson’s victory, leading computer scientist Ray Kurzweil said, “Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.” In Kurzweil’s view, the combination of language understanding and pattern recognition that Watson displays would make its descendants “far superior to a human”. Kurzweil is known for predicting computers whose conversations will be indistinguishable from people by 2029.

Beyond artificial intelligence, the Singularity Summit will also focus on high-tech and where it is going. Economist Tyler Cowen will examine the economic impacts of emerging technologies. Cowen argued in his recent book The Great Stagnation that modern society is on a technological plateau where “a lot of our major innovations are springing up in sectors where a lot of work is done by machines, not by human beings.” Tech entrepreneur and investor Peter Thiel, who sits on the board of directors of Facebook, will share his thoughts on innovation and jumpstarting the economy.

Other speakers include MIT cosmologist Max Tegmark, Allen Brain Institute chief scientist Christof Koch, co-founder of Skype Jaan Tallinn, robotics professors James McLurkin and Robin Murphy, Bionic Builders host Casey Pieretti, the MIT Media Lab’s Riley Crane, MIT polymath Alexander Wissner-Gross, filmmaker and television personality Jason Silva, and Singularity Institute artificial intelligence researcher Eliezer Yudkowsky.

7 May 2011

Workers beware: the robots are coming

Filed under: books, challenge, disruption, Economics, futurist, robots — David Wood @ 9:07 pm

What’s your reaction to the suggestion that, at some stage in the next 10-30 years, you will lose your job to a robot?

Here, by the word “robot”, I’m using shorthand for “automation” – a mixture of improvements in hardware and software. The suggestion is that automation will continue to improve until it reaches the stage when it is cheaper for your employer to use computers and/or robots to do your job, than it is to continue employing you. This change has happened in the past with all manner of manual and/or repetitive work. Could it happen to you?

People typically have one of three reactions to this suggestion:

  1. “My job is too complex, too difficult, too human-intense, etc, for a robot to be able to do it in the foreseeable future. I don’t need to worry.”
  2. “My present job may indeed be outsourced to robots, but over the same time period, new kinds of job will be created, and I’ll be able to do one of these instead. I don’t need to worry.”
  3. “When the time comes that robots can do all the kinds of work that I can do, better than me, we’ll be living in an economy of plenty. I won’t actually need to work – I’ll be happy to enjoy lots more leisure time. I don’t need to worry.”

Don’t need to worry? Think again. That’s effectively the message in Martin Ford’s 2009 book “The lights in the tunnel“. (If you haven’t heard of that book, perhaps it’s because the title is a touch obscure. After all, who wants to read about “lights in a tunnel”?)

The subtitle gives a better flavour of the content: “Automation, accelerating technology, and the economy of the future“. And right at the top of the front cover, there’s yet another subtitle: “A journey to the economic landscape of the coming decades“. But neither of these subtitles conveys the challenge which the book actually addresses. This is a book that points out real problems with increasing automation:

  • Automation will cause increasing numbers of people to lose their current jobs
  • Accelerating automation will mean that robots can quickly become able to do more jobs – their ability to improve and learn will far outpace that of human workers – so the proportion of people who are unemployed will grow and grow
  • Without proper employment, a large proportion of consumers will be deprived of income, and will therefore lack the spending power which is necessary for the continuing vibrancy of the economy
  • Even as technology improves, the economy will stagnate, with disastrous consequences
  • This is likely to happen long before technologies such as nanotech have reached their full potential – so that any ideas of us existing at that time in an economy of plenty are flawed.

Although the author could have chosen a better title for his book, the contents are well argued, and easy to read. They deserve a much wider hearing.  They underscore the important theme that the process of ongoing technological improvement is far from being an inevitable positive.

There are essentially two core threads to the book:

  • A statement of the problem – this effectively highlights issues with each of the reactions 1-3 listed earlier;
  • Some tentative ideas for a possible solution.

The book looks backwards in history, as well as forwards to the future. For example, it includes interesting short commentaries on both Marx and Keynes. One of the most significant backward glances considers the case of the Luddites – the early 19th century manufacturing workers in the UK who feared that their livelihoods would be displaced by factory automation. Doesn’t history show us that such fears are groundless? Didn’t the Luddites (and their descendants) in due course find new kinds of employment? Didn’t automation create new kinds of work, at the same time as it destroyed some existing kinds of work? And won’t that continue to happen?

Well, it’s a matter of pace.  One of the most striking pictures in the book is a rough sketch of the variation over time of the comparative ability of computers and humans to perform routine jobs:

As Martin Ford explains:

I’ve chosen an arbitrary point on the graph to indicate the year 1812. After that year, we can reasonably assume that human capability continued to rise quite steeply until we reach modern times. The steep part of the graph reflects dramatic improvements to our overall living conditions in the world’s more advanced countries:

  • Vastly improved nutrition, public health, and environmental regulations have allowed us to remain relatively free from disease and reach our full biological potential
  • Investment in literacy and in primary and secondary education, as well as access to college and advanced education for some workers, has greatly increased overall capability
  • A generally richer and more varied existence, including easy access to books, media, new technologies and the ability to travel long distances, has probably had a positive impact on our ability to comprehend and deal with complex issues.

A free download of the entire book is available from the author’s website.  I’ll leave it to you to evaluate the author’s arguments for why the two curves in this sketch have the shape that they do.  To my mind, these arguments have a lot of merit.
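To make the shape of those two curves concrete, here’s a toy numerical model of the crossover.  Every parameter below is invented for illustration – none of the figures come from Ford’s book – but the qualitative behaviour is the one his sketch depicts: a slowly saturating curve for human capability, and a compounding curve for machine capability.

```python
import math

# Toy model of the two curves in Ford's sketch.  All parameters are
# invented for illustration -- they are not taken from the book.

def human_capability(year):
    # Logistic curve: steep gains after the industrial revolution,
    # flattening out as biological and educational limits are approached
    return 100 / (1 + math.exp(-(year - 1900) / 40))

def machine_capability(year):
    # Exponential curve: a tiny base in 1950, doubling every 2.5 years
    return 1e-9 * 2 ** ((year - 1950) / 2.5)

def crossover_year(start=1950, end=2100):
    # First year in which the machine curve overtakes the human curve
    for year in range(start, end):
        if machine_capability(year) >= human_capability(year):
            return year
    return None

print(crossover_year())  # with these made-up parameters: 2042
```

The exact crossing year is an artefact of the made-up constants; the point that survives any choice of constants is that a compounding curve eventually overtakes a saturating one.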

The point where these two curves cross – potentially a few decades into the future – will represent a new kind of transition point for the economy – perhaps the mother of all economic disruptions.  Yes, there will still be some new jobs created.  Indeed, in a blogpost last year, “Accelerating automation and the future of work“, I listed 20 new occupations that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

However, the lifetimes of these jobs (before they too can be handled by improved robots) will shrink and shrink.  For a less esoteric example, consider the likely fate of a relatively new profession, radiology.  As Martin Ford explains:

A radiologist is a medical doctor who specializes in interpreting images generated by various medical scanning technologies. Before the advent of modern computer technology, radiologists focused exclusively on X-rays. This has now been expanded to include all types of medical imaging, including CT scans, PET scans, mammograms, etc.

To become a radiologist you need to attend college for four years, and then medical school for another four. That is followed by another five years of internship and residency, and often even more specialized training after that. Radiology is one of the most popular specialties for newly minted doctors because it offers relatively high pay and regular work hours; radiologists generally don’t need to work weekends or handle emergencies.

In spite of the radiologist’s training requirement of at least thirteen additional years beyond high school, it is conceptually quite easy to envision this job being automated. The primary focus of the job is to analyze and evaluate visual images. Furthermore, the parameters of each image are highly defined since they are often coming directly from a computerized scanning device. Visual pattern recognition software is a rapidly developing field that has already produced significant results…

Radiology is already subject to significant offshoring to India and other places. It is a simple matter to transmit digital scans to an overseas location for analysis. Indian doctors earn as little as 10 percent of what American radiologists are paid… Automation will often come rapidly on the heels of offshoring, especially if the job focuses purely on technical analysis with little need for human interaction. Currently, U.S. demand for radiologists continues to expand because of the increase in use of diagnostic scans such as mammograms. However, this seems likely to slow as automation and offshoring advance and become bigger players in the future. The graduating medical students who are now rushing into radiology for its high pay and relative freedom from the annoyances of dealing with actual patients may eventually come to question the wisdom of their decision

Radiologists are far from being the only “high-skill” occupation that is under risk from this trend.  Jobs which involve a high degree of “expert system” knowledge will come under threat from increasingly expert AI systems.  Jobs which involve listening to human speech will come under threat from increasingly accurate voice recognition systems.  And so on.

This leaves two questions:

  1. Can we look forward, as some singularitarians and radical futurists assert, to incorporating increasing technological smarts within our own human nature, allowing us in a sense to merge with the robots of the future?  In that case, a scenario of “the robots will take all our jobs” might change to “substantially enhanced humans will undertake new types of work”
  2. Alternatively, if robots do much more of the work needed within society, how will the transition be handled, to a society in which humans have much more leisure time?

I’ll return to the first of these questions in a subsequent blogpost.  Martin Ford’s book has a lot to say about the second of these questions.  And he recommends a series of ideas for consideration:

  • Without large numbers of well-paid consumers able to purchase goods, the global economy risks going into decline, at the same time as technology has radically improved
  • With fewer people working, there will be much less income tax available to governments.  Taxation will need to switch towards corporation tax and consumption taxes
  • With more people receiving handouts from the state, there’s a risk of losing many aspects of economic structure which have previously been thought essential
  • We need to give more thought, now, to ideas for differential state subsidy of different kinds of non-work activity – to incentivise certain kinds of activity.  That way, we’ll be ready for the increasing disturbances placed on our economy by the rise of the robots.

For further coverage of these and related ideas, see Martin Ford’s blog on the subject, http://econfuture.wordpress.com/.

15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  “What’s your name?” asked the person at the door.  I gave my name, and in return received a stick-on badge saying

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How do they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human“.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry?  Yes, if we assume that we need to work long hours to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions.  Improved technology, wisely managed, should result not just in less labour being left for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – sounded lots of notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines“);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence“;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably state very convincing arguments to us, for why such and such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it’s likely to prove very hard to “switch off the Internet” (or “switch off Google”).  We’ll be so dependent on the Internet that we’ll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happens in slow motion, we would be OK.  We’d be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place almost too quickly for human governments to intervene.  That’s why we need to ensure, ahead of time, that we have a good understanding of what’s happening.  And that’s why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forwards with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than was previously widely thought to be the case;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembling, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on existing means of oil production will diminish, being replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms that have been crafted to address human, social, and environmental need;
  • Medicine will provide more and more new forms of treatment, that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or will be superseded and surpassed by newer, more informal, more robust and effective institutions, that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude, towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previous sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

7 March 2009

The China Brain project and the future of industry

Filed under: AGI, China, robots — David Wood @ 8:15 pm

An intriguing note popped up on my Twitter feed a couple of hours ago. It was from James Clement, owner and manager at Betterhumans LLC:

with U.S. economy hurting, AI programs may move to China to work with Hugo de Garis. He sees house robots as biggest industry in 20 – 30 yrs

And slightly earlier:

de Garis has already received 10.5 million RMB for the China Brain Project. Basically 10k’s of neural nets for Minsky style “society of mind”

James is attending AGI-09, the conference on Artificial General Intelligence, which is taking place in Arlington, Virginia.

Casting my eye over the schedule for this conference, I admit to a big pang of envy that I’m not attending!

As James says, one of the most significant talks there could be the one by Hugo de Garis. The schedule has a link to a PDF authored in October last year. Here are a couple of extracts from the paper:

The “China Brain Project”, based at Xiamen University, is a 4 year (2008-2011), 10.5 million RMB, 20 person, research project to design and build China’s first artificial brain (AB). An artificial brain is defined here to be a “network of (evolved neural) networks”, where each neural net(work) module performs some simple task (e.g. recognizes someone’s face, lifts an arm of a robot, etc), somewhat similar to Minsky’s idea of a “society of mind”, i.e. where large numbers of unintelligent “agents” link up to create an intelligent “society of agents”. 10,000s of these neural net modules are evolved rapidly, one at a time, in special (FPGA based) hardware and then downloaded into a PC (or more probably, a supercomputer PC cluster). Human “BAs” (brain architects) then connect these evolved modules according to their human designs to architect artificial brains…

The first author [de Garis] thinks that the artificial brain industry will be the world’s biggest by about 2030, because artificial brains will be needed to control the home robots that everyone will be prepared to spend big money on, if they become genuinely intelligent and hence useful (e.g. baby sitting the kids, taking the dog for a walk, cleaning the house, washing the dishes, reading stories, educating its owners etc). China has been catching up fast with the western countries for decades. The first author thinks that China should now aim to start leading the world (given its huge population, and its 3 times greater average economic growth rate compared to the US) by aiming to dominate the artificial brain industry.
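The “network of evolved networks” recipe – evolve each small module in isolation, then hand-wire the evolved modules together – can be illustrated in miniature.  The sketch below is my own illustrative code, not anything from the project (which evolves far larger neural net modules in FPGA hardware): it evolves a single threshold-neuron “module” to compute AND, by mutation hill-climbing.

```python
import random

# Evolve one tiny "module" (a single threshold neuron) by mutation
# hill-climbing, in the spirit of evolving modules one at a time.
# Illustrative only: the China Brain Project evolves far larger
# neural net modules in special FPGA-based hardware.

CASES = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table

def module_output(weights, inputs):
    # A single threshold neuron: fire if the weighted sum exceeds the bias
    w1, w2, bias = weights
    return 1 if w1 * inputs[0] + w2 * inputs[1] > bias else 0

def fitness(weights):
    # Number of truth-table rows the module gets right (0..4)
    return sum(module_output(weights, x) == y for x, y in CASES)

def evolve_module(generations=300, restarts=20):
    # Mutation hill-climbing with random restarts: keep any mutation
    # that does not reduce fitness, stop once the module is perfect
    best = None
    for seed in range(restarts):
        rng = random.Random(seed)
        best = [rng.uniform(-1, 1) for _ in range(3)]
        for _ in range(generations):
            child = [w + rng.gauss(0, 0.4) for w in best]
            if fitness(child) >= fitness(best):
                best = child
            if fitness(best) == len(CASES):
                return best
    return best

and_module = evolve_module()
```

In the project’s scheme, a human “brain architect” would then connect many such separately evolved modules by hand – feeding the outputs of recognition modules into control modules, say – rather than attempting to evolve the whole brain at once.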

If it’s true that the downturn in the economy will cause a relocation of AGI research personnel from other countries to China, this could turn out to be one of the most significant unforeseen consequences of the downturn.
