dw2

29 September 2018

Preview: Assessing the risks from super intelligent AI

Filed under: AGI, presentation — David Wood @ 1:14 am

The following video gives a short preview of the Funzing talk on “Assessing the risks from super-intelligent AI” that I’ll be giving shortly:

Note: the music in this video is “Berlin Approval” from Jukedeck, a company that is “building tools that use cutting-edge musical artificial intelligence to assist creativity”. Create your own at http://jukedeck.com.

Transcript of the video:

Welcome. My name is David Wood, and I’d like to tell you about a talk I give for Funzing.

This talk looks at the potential rapid increase in the ability of Artificial Intelligence, also known as AI.

AI is everywhere nowadays, and it is, rightly, getting a lot of attention. But the AI of a few short years in the future could be MUCH more powerful than today’s AI. Is that going to be a good thing, or a bad thing?

Some people, like the entrepreneur Elon Musk, or the physicist Stephen Hawking, say we should be very worried about the growth of super artificial intelligence. It could be the worst thing that ever happened to humanity, they say. Without anyone intending it, we could all become the victims of some horrible bugs or design flaws in super artificial intelligence. You may have heard of the “blue screen of death”, when Windows crashes. Well, we could all be headed to some kind of “blue screen of megadeath”.

Other people, like the Facebook founder Mark Zuckerberg, say that it’s “irresponsible” to worry about the growth of super AI. Let’s hurry up and build better AI, they say, so we can use that super AI to solve major outstanding human problems like cancer, climate change, and economic inequality.

A third group of people say that discussing the rise of super AI is a distraction and it’s premature to do so now. It’s nothing we need to think about any time soon, they say. Instead, there are more pressing short-term issues that deserve our attention, like hidden biases in today’s AI algorithms, or the need to retrain people to change their jobs more quickly in the wake of the rise of automation.

In my talk, I’ll be helping you to understand the strengths and weaknesses of all three of these points of view. I’ll give reasons why, in as little as ten years, we could, perhaps, reach a super AI that goes way beyond human capability in every aspect. I’ll describe five ways in which that super AI could go disastrously wrong, due to lack of sufficient forethought and coordination about safety. And I’ll be reviewing some practical initiatives for how we can increase the chance of the growth of super AI being a very positive development for humanity, rather than a very negative one.

People who have seen my talk before have said that it’s easy to understand, it’s engaging, it’s fascinating, and it provides “much to think about”.

What makes my approach different from that of others who speak on this subject is the wide perspective I can apply. This comes from the twenty-five years in which I was at the heart of the mobile computing and smartphone industries, during which time I saw at close hand the issues with developing and controlling very complicated system software. I also bring ten years of more recent experience, as chair of London Futurists, in running meetings at which the growth of AI has often been discussed by world-leading thinkers.

I consider myself a real-world futurist: I take the human and political dimensions of technology very seriously. I also consider myself to be a radical futurist, since I believe that the not-so-distant future could be very different from the present. And we need to think hard about it beforehand, to decide if we like that outcome or not.

The topic of super AI is too big and important to leave to technologists, or to business people. There are a lot of misunderstandings around, and my talk will help you see the key issues and opportunities more clearly than before. I look forward to seeing you there! Thanks for listening.

20 July 2018

Christopher Columbus and the surprising future of AI

Filed under: AGI, predictability, Singularity — David Wood @ 5:49 pm

There are plenty of critics who are sceptical about the future of AI. The topic has been over-hyped, they say. According to these critics, we don’t need to be worried about the longer-term repercussions of AI with superhuman capabilities. We’re many decades – perhaps centuries – from anything approaching AGI (artificial general intelligence) with skills in common-sense reasoning matching (or surpassing) those of humans. As for AI destroying jobs, that, too, is a false alarm – or so the critics insist. AI will create at least as many jobs as it destroys.

In my previous blog post, Serious question over PwC’s report on the impact of AI on jobs, I offered some counters to these critics. To my mind, this is no time for complacency: AI could accelerate in its capabilities, and take us by surprise. The kinds of breakthroughs that, in a previous era, might have been expected to take many decades, could actually take place in just a few short years. Rather than burying our heads in the sand, denying the possibility of any such acceleration, we need to pay more attention to the trends of technological change and the potential for disruptive new innovations.

The Christopher Columbus angle

Overnight, I’ve been reminded of an argument that I’ve used previously – towards the end of a rather long blogpost. It’s the argument that critics of the future of AI are similar to the critics of Christopher Columbus – the people who said, before his 1492 voyage across the Atlantic in search of a westerly route to Asia, that the effort was bound to be a bad investment.

Bear with me while I retell this analogy.

For years, Columbus tried to drum up support for what most people considered to be a hare-brained scheme. Most observers concluded that Columbus had fallen victim to a significant mistake – he estimated that the distance from the Canary Islands (off the coast of Morocco) to Japan was around 3,700 km, whereas the generally accepted figure was closer to 20,000 km. Indeed, the true size of the sphere of the Earth had been known since the 3rd century BC, due to a calculation by Eratosthenes, based on observations of shadows at different locations.
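As an aside, the Eratosthenes calculation is simple enough to reproduce. Here is a minimal sketch in Python, using the figures most commonly attributed to him rather than anything from this post (the exact length of the ancient stadion is itself uncertain, so the result is approximate):

```python
# Illustrative reconstruction of Eratosthenes' method (commonly cited figures,
# not taken from this post). At noon on the summer solstice the sun was directly
# overhead at Syene, while a vertical rod at Alexandria cast a shadow at ~7.2 degrees.
shadow_angle_deg = 7.2                      # about 1/50 of a full circle
stadion_km = 0.1575                         # assumed length of one stadion (disputed)
syene_to_alexandria_km = 5000 * stadion_km  # the reported distance between the two cities

circumference_km = (360.0 / shadow_angle_deg) * syene_to_alexandria_km
print(f"Estimated circumference: {circumference_km:,.0f} km")  # ~39,400 km vs ~40,000 km actual
```

Even allowing for the uncertainty over ancient units, the method lands close to the modern figure, which is why the learned observers of Columbus’s day had good grounds for trusting the larger estimate.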

Accordingly, when Columbus presented his bold proposal to courts around Europe, the learned members of the courts time and again rejected the idea. The effort would be hugely larger than Columbus supposed, they said. It would be a fruitless endeavour.

Columbus, an autodidact, wasn’t completely crazy. He had done a lot of his own research. However, he was misled by a number of factors:

  • Confusion between various ancient units of distance (the “Arabic mile” and the “Roman mile”)
  • How many degrees of longitude the Eurasian landmass occupied (225 degrees versus 150 degrees)
  • A speculative 1474 map, by the Florentine astronomer Toscanelli, which showed a mythical island “Antilla” located to the east of Japan (labelled “Cippangu” on the map).
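To get a feel for how the first two of these factors compound, here is a rough back-of-the-envelope sketch. The figures are my own illustrative assumptions, not Columbus’s actual workings: it contrasts a degree measured in Roman rather than Arabic miles, and an ocean gap left over after a 225-degree rather than 150-degree Eurasian landmass, while deliberately ignoring Columbus’s further adjustments (such as where the Toscanelli map placed Japan), which shrank his estimate further still:

```python
# Back-of-the-envelope illustration of how Columbus's two main errors compound.
# The figures are assumptions for illustration, not his actual calculation:
#   - Alfraganus' value of 56 2/3 miles per degree, read in the wrong kind of mile
#   - a 225-degree Eurasian landmass instead of the accepted ~150 degrees
MILES_PER_DEGREE = 56.67
ROMAN_MILE_KM = 1.48      # the (mistaken) reading Columbus is thought to have used
ARABIC_MILE_KM = 1.83     # closer to the unit Alfraganus actually meant

def ocean_gap_km(landmass_degrees, mile_km):
    """Equatorial width of the ocean left over once the landmass is subtracted."""
    degree_km = MILES_PER_DEGREE * mile_km
    return (360 - landmass_degrees) * degree_km

print(f"Columbus-style estimate:   {ocean_gap_km(225, ROMAN_MILE_KM):,.0f} km")   # ~11,300 km
print(f"Accepted-figures estimate: {ocean_gap_km(150, ARABIC_MILE_KM):,.0f} km")  # ~21,800 km
# Columbus's further assumptions (Japan's position on the Toscanelli map, sailing
# along roughly 28 degrees north) brought his working figure down towards the
# ~3,700 km mentioned above.
```

Even this simplified version shows the apparent crossing roughly halving before any of the more speculative map-based assumptions are applied.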

You can read the details in the Wikipedia article on Columbus, which provides numerous additional reference points. The article also contains a copy of Toscanelli’s map, with the true location of the continents of North and South America superimposed for reference.

No wonder Columbus thought his plan might work after all. Nevertheless, the 1490s equivalents of today’s VCs kept saying “No” to his pitches. Finally, spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to take the risk of supporting his adventure. After stopping in the Canaries to restock, the Nina, the Pinta, and the Santa Maria set off westward. Five weeks later, the crew spotted land, in what we now call the Bahamas. And the rest is history.

But it wasn’t the history expected by Columbus, or by his backers, or by his critics. No-one had foreseen that a huge continent existed in the ocean between Europe and Japan. None of the ancient writers – either secular or religious – had spoken of such a continent. Nevertheless, once Columbus had found it, the history of the world proceeded in a very different direction – including mass deaths from infectious diseases transmitted from the European sailors, genocide and cultural apocalypse, and enormous trade in both goods and slaves. In due course, it would be the ingenuity and initiative of people subsequently resident in the Americas that propelled humans beyond the Earth’s atmosphere all the way to the moon.

What does this have to do with the future of AI?

Rational critics may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

Just as the contemporaries of Columbus erred in presuming they already knew all the large features of the earth’s continents (after all: if America really existed, surely God would have written about it in the Bible…), modern-day critics of AI can err in presuming they already know all the large features of the landscape of possible artificial minds.

When contemplating the space of all possible minds, some humility is in order. We cannot foretell in advance what configurations of intelligence are possible. We don’t know what may happen if separate modules of reasoning are combined in innovative ways. After all, there are many aspects of the human mind which are still poorly understood.

When critics say that it is unlikely that present-day AI mechanisms will take us all the way to AGI, they are very likely correct. But it would be a horrendous error to draw the conclusion that meaningful new continents of AI capability are inevitably still the equivalent of 20,000 km into the distance. The fact is, we simply don’t know. And for that reason, we should keep an open mind.

One day soon, indeed, we might read news of some new “AUI” having been discovered – some Artificial Unexpected Intelligence, which changes history. It won’t be AGI, but it could have all kinds of unexpected consequences.

Beyond the Columbus analogy

Every analogy has its drawbacks. Here are three ways in which the discovery of an AUI could be different from the discovery by Columbus of America:

  1. In the 1490s, there was only one Christopher Columbus. Nowadays, there are scores (perhaps hundreds) of schemes underway to try to devise new models of AI. Many of these are proceeding with significant financial backing.
  2. Whereas the journey across the Atlantic (and, eventually, the Pacific) could be measured by a single variable (longitude), the journey across the vast multidimensional landscape of artificial minds is much less predictable. That’s another reason to keep an open mind.
  3. Discovering an AUI could drastically transform the future of exploration in the landscape of artificial minds. Assisted by AUI, we might get to AGI much quicker than without it. Indeed, in some scenarios, it might take only a few months after we reach AUI for us (now going much faster than before) to reach AGI. Or days. Or hours.

Footnote

If you’re in or near Birmingham on 11th September, I’ll be giving a Funzing talk on how to assess the nature of the risks and opportunities from superhuman AI. For more details, see here.

 

7 December 2017

The super-opportunities and super-risks of super-AI

Filed under: AGI, Events, risks, Uncategorized — David Wood @ 7:29 pm

2017 has seen more discussion of AI than any preceding year.

There have even been a number of meetings – 15, to be precise – of the APPG AI, an “All-Party Parliamentary Group on Artificial Intelligence”, in the UK Houses of Parliament.

According to its website, the APPG AI “was set up in January 2017 with the aim to explore the impact and implications of Artificial Intelligence”.

In the intervening 11 months, the group has held 7 evidence meetings, 4 advisory group meetings, 2 dinners, and 2 receptions. 45 different MPs, along with 7 members of the House of Lords and 5 parliamentary researchers, have been engaged in APPG AI discussions at various times.


Yesterday evening, at a reception in Parliament’s Cholmondeley Room & Terrace, the APPG AI issued a 12 page report with recommendations in six different policy areas:

  1. Data
  2. Infrastructure
  3. Skills
  4. Innovation & entrepreneurship
  5. Trade
  6. Accountability

The headline “key recommendation” is as follows:

The APPG AI recommends the appointment of a Minister for AI in the Cabinet Office

The Minister would have a number of different responsibilities:

  1. To bring forward the roadmap which will turn AI from a Grand Challenge to a tool for untapping UK’s economic and social potential across the country.
  2. To lead the steering and coordination of: a new Government Office for AI, a new industry-led AI Council, a new Centre for Data Ethics and Innovation, a new GovTech Catalyst, a new Future Sectors Team, and a new Tech Nation (an expansion of Tech City UK).
  3. To oversee and champion the implementation and deployment of AI across government and the UK.
  4. To keep public faith high in these emerging technologies.
  5. To ensure UK’s global competitiveness as a leader in developing AI technologies and capitalising on their benefits.

Overall I welcome this report. It’s a definite step in the right direction. Via a programme of further evidence meetings and workshops planned throughout 2018, I expect real progress can be made.

Nevertheless, it’s my strong belief that most of the public discussion on AI – including the discussions at the APPG AI – fails to appreciate the magnitude of the potential changes that lie ahead. There’s insufficient awareness of:

  • The scale of the opportunities that AI is likely to bring – opportunities that might better be called “super-opportunities”
  • The scale of the risks that AI is likely to bring – “super-risks”
  • The speed at which it is possible (though by no means guaranteed) that AI could transform itself via AGI (Artificial General Intelligence) to ASI (Artificial Super Intelligence).

These are topics that I cover in some of my own presentations and workshops. The events organisation Funzing have asked me to run a number of seminars with the title “Assessing the risks from superintelligent AI: Elon Musk vs. Mark Zuckerberg…”


The reference to Elon Musk and Mark Zuckerberg reflects the fact that these two titans of the IT industry have spoken publicly about the advent of superintelligence, taking opposing views on the balance of opportunity vs. risk.

In my seminar, I take the time to explain their differing points of view. Other thinkers on the subject of AI that I cover include Alan Turing, IJ Good, Ray Kurzweil, Andrew Ng, Eliezer Yudkowsky, Stuart Russell, Nick Bostrom, Isaac Asimov, and Jaan Tallinn. The talk is structured into six sections:

  1. Introducing the contrasting ideas of Elon Musk and Mark Zuckerberg
  2. A deeper dive into the concepts of “superintelligence” and “singularity”
  3. From today’s AI to superintelligence
  4. Five ways that powerful AI could go wrong
  5. Another look at accelerating timescales
  6. Possible responses and next steps

At the time of writing, I’ve delivered this Funzing seminar twice. Here’s a sampling of the online reviews:

Really enjoyed the talk, David is a good presenter and the presentation was very well documented and entertaining.

Brilliant eye opening talk which I feel very effectively conveyed the gravity of these important issues. Felt completely engaged throughout and would highly recommend. David was an excellent speaker.

Very informative and versatile content. Also easy to follow if you didn’t know much about AI yet, and still very insightful. Excellent Q&A. And the PowerPoint presentation was of great quality and attention was spent on detail putting together visuals and explanations. I’d be interested in seeing this speaker do more of these and have the opportunity to go even more in depth on specific aspects of AI (e.g., specific impact on economy, health care, wellbeing, job market etc). 5 stars 🙂

Best Funzing talk I have been to so far. The lecture was very insightful. I was constantly tuned in.

Brilliant weighing up of the dangers and opportunities of AI – I’m buzzing.

If you’d like to attend one of these seminars, three more dates are in my Funzing diary:

Click on the links for more details, and to book a ticket while they are still available 🙂

30 November 2017

Technological Resurrection: An idea ripe for discussion

Like it or not, humans are becoming as gods. Where will this trend lead?

How about the ability to bring back to life people who died centuries ago, and whose bodies have long since disintegrated?

That’s the concept of “Technological Resurrection”, which is covered in the recent book of the same name by the Dallas, Texas-based researcher Jonathan A. Jones.

The book carries the subtitle “A thought experiment”. It’s a book that can, indeed, lead readers to experiment with new kinds of thoughts. If you are ready to leave your normal comfort zone behind, you may find a flurry of unexpected ideas emerging in your mind as you dip into its pages. You’re likely also to encounter considerable emotional turmoil en route.

The context

Here’s the context. Technology is putting within human reach more and more of the capabilities that were thought, in former times, to be the preserve of divine beings:

  • We’re not omniscient, but Google has taken us a long way in that direction
  • We’re not yet able to create life at will, but our skills with genomic engineering are proceeding apace
  • Evolution need no longer proceed blindly, via Darwinian Russian roulette, but can benefit from conscious intelligent design (by humans, for humans)
  • Our ability to remake nature is being extended by our ability to remake human nature.
  • We can enable the blind to see, the deaf to hear, and the lame to walk
  • Thanks to medical breakthroughs, we can even bring the dead back to life – that is, the cessation of heart and breath need no longer herald an early grave.

But that’s just the start. It’s plausible that, sooner or later, humanity will create artificial superintelligence with powers that are orders of magnitude greater than anything we currently possess. These enhanced powers would bring humanity even closer to the domain of the gods of bygone legends. These powers might even enable technological resurrection.

Some details

In more detail: Profound new engineering capabilities might become available that can bridge remote sections of space and time – perhaps utilising the entanglement features of quantum physics, perhaps creating and exploiting relativistic “wormholes”, or perhaps involving unimagined novel scientific principles. These bridges might allow selected “copying” of consciousness from just before the moment of death, into refined bodies constructed in the far future ready to receive such consciousness. As Jonathan Jones explores, this copying might take place in ways that circumvent the time travel paradoxes that often feature in science fiction.

That’s a lot of “mights” and “maybes”. However, when contemplating the range of ideas for what might happen to consciousness after physical death, it would be wise to include this option. Beyond our deathbed, we might awaken to find ourselves in a state akin to paradise – surrounded by resurrected family and friends. Born 1945, died 2020, resurrected 2085? Born 1895, died 1917, resurrected 2087?

The book contains a number of speculative short stories to whet readers’ appetites to continue this exploration. These stories add colour to what is already a colourful, imaginative book. The artistic license is grounded in a number of solid references to science, philosophy, psychology, and history. For example, there’s a particularly good section on Russian “cosmist” thinkers. There’s a review of how films and novels have dealt with similar ideas over the decades. And the book is brought up to date with a discussion of contemporary transhumanists, including Ray Kurzweil, Ben Goertzel, Jose Cordeiro, and Giulio Prisco.

Futurists like to ask three questions about forthcoming scenarios. Are they credible (as opposed to being mere flights of fantasy)? Are they actionable, in that individual human actions could alter their probability of occurring? And are they desirable?

All three questions get an airing in the pages of the book Jonathan Jones has written. To keep matters short, for now I’ll focus on the third question.

The third question

The idea of technological resurrection could provide much-needed solace, for people whose lives otherwise seem wretched. Perhaps death will cease to be viewed as a one-way ticket to eternal oblivion. What’s more, the world might benefit mightily from a new common quest to advance human capability, safely, beyond the existential perils of modern social angst, towards being able to make technological resurrection a reality. That’s a shared purpose which would help humanity transcend our present-day pettiness. It’s a route to make humanity truly great.

However, from other points of view, the idea of technological resurrection could be viewed as an unhelpful distraction. Similar to how religion was criticised by Karl Marx as being “the opium of the people” – an illusory “pie in the sky when you die” – the vague prospect of technological resurrection could dissuade people from taking important steps to secure or improve long-term health prospects. It might prevent them from:

  • Investigating and arranging cryonics support standby services
  • Channelling funds and resources to those researchers who may be on the point of abolishing aging
  • Encouraging the adoption of health-promoting lifestyles, economic policies, and beneficial diets and supplements
  • Accelerating the roll-out of technoprogressive measures that will raise people around the world out of relative poverty and into relative prosperity.

Finally, the idea of technological resurrection may also fill some minds with dread and foreboding – if they realise that devious, horrible actions from their past, which they believed were secret, might become more widely known by a future superintelligence. If that superintelligence has the inclination to inflict a punitive (hellish) resurrection, well, things gain a different complexion.

There’s a great deal more that deserves to be said about technological resurrection. I’m already thinking of organising some public meetings on this topic. In the meantime, I urge readers to explore the book Jonathan Jones has written. That book serves up its big ideas in ways that are playful, entertaining, and provocative. But the ideas conveyed by the light-hearted text may live in your mind long after you have closed the book.

PS I’ve addressed some of these questions from a different perspective in Chapter 12, “Radical alternatives”, of my own 2016 book “The Abolition of Aging”.

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” track down and shoot the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000-word essay, Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”.

In 2011 a new Mexican group called the Individualists Tending toward the Wild was founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. That same year, they detonated a bomb at a prominent nanotechnology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic license may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential of stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack their own expertise in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI. AGI is still situated far into the future. In this view, as stated by Demis Hassabis, co-founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter, and they’re no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all three of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people should read that book. It manages to bring a great many serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. It’s the same with Stephen Hawking and with Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider as just one example Stuart Russell, who is a computer-science professor at the University of California, Berkeley and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry, such as Apple co-founder Steve Wozniak and Microsoft co-founder Bill Gates:

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the comment about timescales. The first is to point out that Demis Hassabis himself sees no reason for any complacency, on account of the potential for AGI to require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

N <= 5: No way
5 < N <= 10: Small possibility
10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under Chatham House rules). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” causing society to buckle up its motivation to more fully support AGI research (especially when AGI starts producing big benefits in healthcare diagnosis).

Answering the critics: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down – believing instead the figures for the larger circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) have four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

5 January 2014

Convictions and actions, 2014 and beyond

In place of new year’s resolutions, I offer five convictions for the future:

First, a conviction of profoundly positive near-term technological possibility. Within a generation – within 20 to 40 years – we could all be living with greatly improved health, intelligence, longevity, vigour, experiences, general well-being, personal autonomy, and social cohesion. The primary driver for this possibility is the acceleration of technological improvement.

In more detail:

  • Over the next decade – by 2025 – there are strong possibilities for numerous breakthroughs in fields such as 3D printing, wearable computing (e.g. Google Glass), synthetic organs, stem cell therapies, brain scanning, smart drugs that enhance consciousness, quantum computing, solar energy, carbon capture and storage, nanomaterials with super-strength and resilience, artificial meat, improved nutrition, rejuvenation biotech, driverless cars, robot automation, AI and Big Data transforming healthcare, improved collaborative decision-making, improved cryonic suspension of people who are biologically dead, and virtual companions (AIs and robots).
  • And going beyond that date towards mid-century, I envision seven “super” trends enabled by technology: trends towards super-materials (the fulfilment of the vision of nanotechnology), super-energy (the vision of abundance), super-health and super-longevity (extension of rejuvenation biotech), super-AI, super-consciousness, and super-connectivity.

Second, however, that greatly improved future state of humanity will require the deep application of many other skills, beyond raw technology, in order to bring it into reality. It will require lots of attention to matters of design, psychology, sociology, economics, philosophy, and politics.

Indeed, without profound attention to human and social matters, over the next 10-20 years, there’s a very real possibility that global society may tear itself apart, under mounting pressures. In the process, this fracturing and conflict could, among lots of other tragic consequences, horribly damage the societal engines for technological progress that are needed to take us forward to the positive future described above. It would bring about new dark ages.

Third, society needs a better calibre of thinking about the future.

Influential figures in politics, the media, academia, and religious movements all too often seem to have a very blinkered view about future possibilities. Or they latch on to just one particular imagining of the future, and treat it as inevitable, losing sight of the wider picture of uncertainties and potentialities.

So that humanity can reach its true potential, in the midst of the likely chaos of the next few decades, politicians and other global leaders need to be focusing on the momentous potential forthcoming transformation of the human condition, rather than the parochial, divisive, and near-term issues that seem to occupy most of their thinking at present.

Fourth, there are plenty of grounds for hope for better thinking about the future. In the midst of the global cacophony of mediocrity and distractedness, there are many voices of insight, vision, and determination. Gradually, a serious study of disruptive future scenarios is emerging. We should all do what we can to accelerate this emergence.

In our study of these disruptive future scenarios, we need to collectively accelerate the process of separating out

  • reality from hype,
  • science fact from science fiction,
  • credible scenarios from wishful thinking,
  • beneficial positive evolution from Hollywood dystopia,
  • human needs from the needs of businesses, corporations, or governments.

Futurism – the serious analysis of future possibilities – isn’t a fixed field. Just as technology improves by a virtuous cycle of feedback involving many participants, who collectively find out which engineering solutions work best for particular product requirements, futurism can improve by a virtuous cycle of feedback involving many participants – both “amateur” and “professional” futurists.

The ongoing process of technological convergence actually makes predictions harder, rather than easier. Small perturbations in one field can have big consequences in adjacent fields. It’s the butterfly effect. What’s more important than specific, fixed predictions is to highlight scenarios that are plausible, explaining why they are plausible, and then to generate debate on the desirability of these scenarios, and on how to enable and accelerate the desirable outcomes.

To help in this, it’s important to be aware of past and present examples of how technology impacts human experience. We need to be able to appreciate the details, and then to try to step back to understand the underlying principles.

Fifth, this is no mere armchair discussion. It’s not an idle speculation. The stakes are really high – and include whether we and our loved ones can be alive, in a state of great health and vitality, in the middle of this century, or whether we will likely have succumbed to decay, disease, division, destruction – and perhaps death.

We can, and should, all make a difference to this outcome. You can make a difference. I can make a difference.

Actions

In line with the above five convictions, I’m working on three large projects over the next six months:

Let me briefly comment on each of these projects.


Forthcoming London Futurists event: The Burning Question

The first “real-world” London Futurists meetup in 2014, on Saturday 18th January, is an in-depth analysis of what some people have described as the most complex and threatening issue of the next 10-30 years: accelerated global warming.

Personally I believe, in line with the convictions I listed above, that technology can provide the means to dissolve the threats of accelerated global warming. Carbon capture and storage, along with solar energy, could provide the core of the solution. But these solutions will take time, and we need to take some interim action sooner.

As described by the speaker for the event, writer and consulting editor Duncan Clark,

Tackling global warming will mean persuading the world to abandon oil, coal and gas reserves worth many trillions of dollars – at least until we have the means to put carbon back in the ground. The burning question is whether that can be done. What mix of technology, politics, psychology, and economics might be required? Why aren’t clean energy sources slowing the rate of fossil fuel extraction? Are the energy companies massively overvalued, and how will carbon-cuts affect the global economy? Will we wake up to the threat in time? And who can do what to make it all happen?

For more details and to RSVP, click here.

Note that, due to constraints on the speaker’s time, this event is happening on Saturday evening, rather than in the afternoon.

RSVPs so far are on the light side for this event, but now that the year-end break is behind us, I expect them to ramp up – in view of the extreme importance of this debate.

Forthcoming London Futurists Hangout On Air, with Ramez Naam

One week from today, on the evening of Sunday 12th January, we have our “Hangout on Air” online panel discussion, “Ramez Naam discusses Nexus, Crux, and The Infinite Resource”.

For more details, click here.

Here’s an extract of the event description:

Ramez Naam is arguably one of today’s most interesting and important writers on futurist topics, including both non-fiction and fiction.

For example, praise for his Nexus – Mankind gets an upgrade includes:
  • “A superbly plotted high tension technothriller… full of delicious moral ambiguity… a hell of a read.” – Cory Doctorow, Boing Boing
  • “A sharp, chilling look at our likely future.” – Charles Stross
  • “A lightning bolt of a novel. A sense of awe missing from a lot of current fiction.” – Ars Technica.

This London Futurists Hangout on Air will feature a live discussion between Ramez Naam and an international panel of leading futurists: Randal Koene, Michell Zappa, and Giulio Prisco.

The discussion aims to cover:

  • The science behind the fiction: which elements are strongly grounded in current research, and which elements are more speculative?
  • The philosophy behind the fiction: how should people be responding to the deeply challenging questions that are raised by new technology?
  • Finding a clear path through what has been described as “the best of times and the worst of times” – is human innovation sufficient?
  • What lies next – new books in context.

I’ll add one comment to this description. Over the past week or so, I took the time to listen again to Ramez’s book “Nexus”, and I’m also well through the follow-up, “Crux”. I’m listening to them as audio books, obtained from Audible. Both books are truly engrossing, with a rich array of nuanced characters who undergo several changes in their personal philosophies as events unfold. It also helps that, in each case, the narrators of the audio books are first class.

Another reason I like these books so much is that they're not afraid to look hard at both the good outcomes and the bad outcomes of disruptive technological possibilities. I warmly recommend both books, with the proviso that they contain some racy, adult material, and therefore may not be suitable for everyone.

Forthcoming London Futurists Hangout On Air, AI and the end of the human era

I’ll squeeze in mention of one more forthcoming Hangout On Air, happening on Sunday 26th January.

The details are here. An extract follows:

The Hollywood cliché is that artificial intelligence will take over the world. Could this cliché soon become scientific reality, as AI matches then surpasses human intelligence?

Each year AI’s cognitive speed and power doubles; ours does not. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail — human-level intelligence. Scientists argue that AI that advanced will have survival drives much like our own. Can we share the planet with it and survive?

The recently published book Our Final Invention explores how the pursuit of Artificial Intelligence challenges our existence with machines that won’t love us or hate us, but whose indifference could spell our doom. Until now, intelligence has been constrained by the physical limits of its human hosts. What will happen when the brakes come off the most powerful force in the universe?

This London Futurists Hangout on Air will feature a live discussion between the author of Our Final Invention, James Barrat, and an international panel of leading futurists: Jaan Tallinn, William Hertling, Calum Chace, and Peter Rothman.

The main panellist on this occasion, James Barrat, isn't the only distinguished author on the panel. Calum Chace's book "Pandora's Brain", which I've had the pleasure to read ahead of publication, should go on sale some time later this year. William Hertling is the author of a trilogy of novels:

  • Avogadro Corp: The Singularity Is Closer Than It Appears,
  • A.I. Apocalypse,
  • The Last Firewall.

The company Avogadro Corp that features in this trilogy has, let’s say, some features in common with another company named after a large number, i.e. Google. I found all three novels to be easy to read, as well as thought-provoking. Without giving away plot secrets, I can say that the books feature more than one potential route for smarter-than-human general purpose AI to emerge. I recommend them. Start with the first, and see how you get on.

Anticipating 2025


The near future deserves more of our attention.

A good way to find out about the Anticipating 2025 event is to look at the growing set of “Speaker preview” videos that are available at http://anticipating2025.com/previews/.

You’ll notice that at least some of these videos have captions available, to help people to catch everything the speakers say.

These captions have been produced by a combination of AI and human intelligence:

  • Google provides automatically generated transcripts, from its speech recognition engine, for videos uploaded to YouTube
  • A team of human volunteers works through these transcripts, cleaning them up, before they are published (a toy sketch of this two-step flow appears below).
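To make that two-step flow concrete, here is a minimal Python sketch. The transcript snippet and the corrections are invented purely for illustration; this is not the actual tooling used by Google or by the volunteers.

    # Toy illustration of the caption workflow: start from a machine-generated
    # transcript, then apply corrections supplied by a human reviewer.

    raw_auto_transcript = "the speakers discuss acelerating technologies and there risks"

    # Hypothetical corrections collected by a human volunteer.
    human_corrections = {
        "acelerating": "accelerating",
        "there risks": "their risks",
        "the speakers": "The speakers",
    }

    def apply_corrections(text, corrections):
        # Apply each correction in turn; real cleanup is of course more nuanced.
        for wrong, right in corrections.items():
            text = text.replace(wrong, right)
        return text

    print(apply_corrections(raw_auto_transcript, human_corrections))
    # -> The speakers discuss accelerating technologies and their risks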

My thanks go to everyone involved so far in filming and transcribing the speakers.

Registration for this conference requires payment at the time of booking. There are currently nearly 50 people registered, which is a good start (with more than two months to go) towards filling the venue's capacity of 220.

Early bird registration, for both days, is pegged at £40. I’ll keep early bird registration open until the first 100 tickets have been sold. Afterwards, the price will increase to £50.

Smartphones and beyond


Here’s a brief introduction to this book:

The smartphone industry has seen both remarkable successes and remarkable failures over the last two decades. Developments have frequently confounded the predictions of apparent expert observers. What does this rich history have to teach analysts, researchers, technology enthusiasts, and activists for other forms of technology adoption and social improvement?

As most regular readers of this blog know, I’ve worked in mobile computing for 25 years. That includes PDAs (personal digital assistants) and smartphones. In these fields, I’ve seen numerous examples of mobile computing becoming more powerful, more useful, and more invisible – becoming a fundamental part of the fabric of society. Smartphone technology which was at one time expected to be used by only a small proportion of the population – the very geeky or the very rich – is now in regular use by over 50% of the population in many countries in the world.

As I saw more and more fields of human interest on the point of being radically transformed by mobile computing and smartphone technology, the question arose in my mind: what’s next? Which other fields of human experience will be transformed by smartphone technology, as it becomes still smaller, more reliable, more affordable, and more powerful? And what about impacts of other kinds of technology?

Taking this one step further: can the processes which have transformed ordinary phones into first smartphones and then superphones be applied, more generally, to transform “ordinary humans” (humans 1.0, if you like), via smart humans or trans humans, into super humans or post humans?

These are the questions which have motivated me to write this book. You can read a longer introduction here.

I’m currently circulating copies of the first twenty chapters for pre-publication review. The chapters available are listed here, with links to the opening paragraphs in each case, and there’s a detailed table of contents here.

As described in the “Downloads” page of the book’s website, please let me know if there are any chapters you’d particularly like to review.

2 November 2012

The future of human enhancement

Is it ethical to put money and resources into trying to develop technological enhancements for human capabilities, when there are so many alternative well-tested mechanisms available to address pressing problems such as social injustice, poverty, poor sanitation, and endemic disease? Is that a failure of priority? Why make a strenuous effort in the hope of allowing an elite few individuals to become “better than well”, courtesy of new technology, when so many people are currently so “less than well”?

These were questions raised by Professor Anne Kerr at a public debate earlier this week at the London School of Economics: The Ethics of Human Enhancement.

The event was described as follows on the LSE website:

This dialogue will consider how issues related to human enhancement fit into the bigger picture of humanity’s future, including the risks and opportunities that will be created by future technological advances. It will question the individualistic logic of human enhancement and consider the social conditions and consequences of enhancement technologies, both real and imagined.

From the stage, Professor Kerr made a number of criticisms of “individualistic logic” (to use the same phrase as in the description of the event). Any human enhancements provided by technology, she suggested, would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present.

She had a lot of worries about technology amplifying existing human flaws:

  • Imagine what might happen if various clever people could take a pill to make themselves even cleverer. It's well known that clever people often make poor decisions. Their cleverness allows them to construct beguiling sophistry to justify the actions they already want to take. More cleverness could mean even more beguiling sophistry.
  • Or imagine if rapacious bankers could take drugs to boost their workplace stamina and self-serving brainpower – how much more effective they would become at siphoning off public money into their own pockets!
  • Might these risks be addressed by public policy makers, in a way that would allow benefits of new technology, without falling foul of the potential downsides? Again, Professor Kerr was doubtful. In the real world, she said, policy makers cannot operate at that level. They are constrained by shorter-term thinking.

For such reasons, Professor Kerr was opposed to these kinds of technology-driven human enhancements.

When the time for audience Q&A arrived, I felt bound to ask from the floor:

Professor Kerr, would you be in favour of the following examples of human enhancement, assuming they worked?

  1. An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects?
  2. An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner?
  3. And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views?

In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?

The answer came quickly:

No. They would not work. And there are other means of achieving the same effects, including progress of democratisation and education.

I countered: These other methods don’t seem to be working well enough. If I had thought more quickly, I would have raised examples such as society’s collective failure to address the risk of runaway climate change.

Groundwork for this discussion had already been well laid by the other main speaker at the event, Professor Nick Bostrom. You can hear what Professor Bostrom had to say – as well as the full content of the debate – in an audio recording of the event that is available here.

(Small print: I’ve not yet taken the time to review the contents of this recording. My description in this blogpost of some of the verbal exchanges inevitably paraphrases and extrapolates what was actually said. I apologise in advance for any mis-representation, but I believe my summary to be faithful to the spirit of the discussion, if not to the actual words used.)

Professor Bostrom started the debate by mentioning that the question of human enhancement is a big subject. It can be approached from a shorter-term policy perspective: what rules should governments set, to constrain the development and application of technological enhancements, such as genetic engineering, neuro-engineering, smart drugs, synthetic biology, nanotechnology, and artificial general intelligence? It can also be approached from the angle of envisioning larger human potential, that would enable the best possible future for human civilisation. Sadly, much of the discussion at the LSE got bogged down in the shorter-term question, and lost sight of the grander accomplishments that human enhancements could bring.

Professor Bostrom had an explanation for this lack of sustained interest in these larger possibilities. The technologies for human enhancement that are currently available simply do not work that well:

  • Some drugs give cyclists or sprinters an incremental advantage over their competitors, but the people who take these drugs still need to train exceptionally hard, to reach the pinnacle of their performance
  • Other drugs seem to allow students to concentrate better over periods of time, but their effects aren’t particularly outstanding, and it’s possible that methods such as good diet, adequate rest, and meditation, have results that are at least as significant
  • Genetic selection can reduce the risk of implanted embryos developing various diseases that have strong genetic links, but so far, there is no clear evidence that genetic selection can result in babies with abilities higher than the general human range.

This lack of evidence of strong tangible results is one reason why Professor Kerr was able to reply so quickly to my suggestion about the three kinds of technological enhancements, saying these enhancements would not work.

However, I would still like to press the question: what if they did work? Would we want to encourage them in that case?

A recent article in the Philosophy Now journal takes the argument one step further. The article was co-authored by Professors Julian Savulescu and Ingmar Persson, and draws material from their book “Unfit for the Future: The Need for Moral Enhancement”.

To quote from the Philosophy Now article:

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.

We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…

In short, the argument of Professors Savulescu and Persson is not just that we should allow the development of technology that can enhance human reasoning and moral awareness, but that we must strongly encourage it. Failure to do so would be to commit a grave error of omission.

These arguments about moral imperative – what technologies should we allow to be developed, or indeed encourage to be developed – are in turn strongly influenced by our beliefs about what technologies are possible. It's clear to me that many people in positions of authority in society – including academics as well as politicians – are woefully unaware of realistic technology possibilities. People are familiar with various ideas as a result of science fiction novels and movies, but it's a different matter to know where the dividing line falls between "this is an interesting work of fiction" and "this is a credible future that might arise within the next generation".

What’s more, when it comes to people forecasting the likely progress of technological possibilities, I see a lot of evidence in favour of the observation made by Roy Amara, long-time president of the Institute for the Future:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

What about the technologies mentioned by Professors Savulescu and Persson? What impact will be possible from smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process? In the short term, probably less than many of us hope; in the longer term, probably more than most of us expect.
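As a toy numerical illustration of Amara's observation (the numbers are invented, purely to show the shape of the effect), compare a steady straight-line forecast of "impact" with actual impact that starts from a small base and doubles every year:

    # Toy illustration of Amara's law, with made-up numbers.
    # The forecast assumes a steady 10 units of impact per year;
    # the actual impact starts tiny but doubles every year.

    def forecast(years):
        return 10 * years

    def actual(years):
        return 0.5 * (2 ** years)

    for years in (1, 3, 10, 15):
        print(years, forecast(years), actual(years))

    # year 1:  forecast 10,  actual 1.0      (the short run gets overestimated)
    # year 3:  forecast 30,  actual 4.0
    # year 10: forecast 100, actual 512.0    (the long run gets underestimated)
    # year 15: forecast 150, actual 16384.0

Where the crossover happens depends entirely on the numbers chosen; for real technologies we don't know those numbers in advance.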

In this context, what is the “longer term”? That’s the harder question!

But the quest to address this kind of question, and then to share the answers widely, is the reason I have been keen to support the growth of the London Futurists meetup, by organising a series of discussion meetings with well-informed futurist speakers. Happily, membership has been on the up-and-up, reaching nearly 900 by the end of October.

The London Futurist event happening this weekend – on the afternoon of Saturday 3rd November – picks up the theme of enhancing our mental abilities. The title is “Hacking our wetware: smart drugs and beyond – with Andrew Vladimirov”:

What are the most promising methods to enhance human mental and intellectual abilities significantly beyond the so-called physiological norm? Which specific brain mechanisms should be targeted, and how?  Which aspects of wetware hacking are likely to grow in prominence in the not-too-distant future?

By reviewing a variety of fascinating experimental findings, this talk will explore:

  • various pharmacological methods, taking into account fundamental differences in Eastern and Western approaches to the development and use of nootropics
  • the potential of non-invasive neuro-stimulation using CES (Cranial Electrotherapy Stimulation) and TMS (Transcranial Magnetic Stimulation)
  • data suggesting the possibility to “awaken” savant-like skills in healthy humans without paying the price of autism
  • apparent means to stimulate seemingly paranormal abilities and transcendental experiences
  • potential genetic engineering perspectives, aiming towards human cognition enhancement.

The number of advance positive RSVPs for this talk, as recorded on the London Futurists meetup site, has reached 129 at the time of writing – which is already a record.

(From my observations, I have developed the rule of thumb that the number of people who actually turn up for a meeting is something like 60%-75% of the number of positive RSVPs – so 129 RSVPs would suggest roughly 77 to 97 people in the room.)

I’ll finish by returning to the question posed at the beginning of my posting:

  • Are these technological enhancements likely to increase human inequality (by benefiting only a small number of users),
  • Or are they instead likely to drop in price and grow in availability (the same as happened, for example, with smartphones, Internet access, and many other items of technology)?

My answer – which I believe is shared by Professor Bostrom – is that things could still go either way. That’s why we need to think hard about their development and application, ahead of time. That way, we’ll become better informed to help influence the outcome.

29 July 2011

Towards a mind-stretching weekend in New York

Filed under: AGI, futurist, leadership, nanotechnology, robots, Singularity — David Wood @ 9:19 pm

I’ve attended the annual Singularity Summit twice before – in 2008 and in 2009.  I’ve just registered to attend the 2011 event, which is taking place in New York on 15th-16th October.  Here’s why.

On both previous occasions, the summits featured presentations that gave me a great deal to think about, on arguably some of the most significant topics in human history.  These topics include the potential emergence, within the lifetimes of many people alive today, of:

  • Artificial intelligence which far exceeds the capabilities of even the smartest group of humans
  • Robots which far exceed the dexterity, balance, speed, strength, and sensory powers of even the best human athletes, sportspeople, or soldiers
  • Super-small nanobots which can enter the human body and effect far more thorough repairs and enhancements – to both body and mind – than even the best current medical techniques.

True, at the previous events, there were some poor presentations too – which is probably inevitable given the risky cutting-edge nature of the topics being covered.  But the better presentations far outweighed the worse ones.

And as well as the presentations, I greatly enjoyed the networking with the unusual mix of attendees – people who had taken the time to explore many of the fascinating hinterlands of modern technology trends.  If someone is open-minded enough to give serious thought to the ideas listed above, they’re often open-minded enough to entertain lots of other unconventional ideas too.  I frequently found myself in disagreement with these attendees, but the debate was deeply refreshing.

Take a look at the list of confirmed speakers so far: which of these people would you most like to bounce ideas off?

The summit registration page is now open.  As I type these words, that page states that the cost of tickets is going to increase after 31 July.  That’s an argument for registering sooner rather than later.

To provide more information, here’s a copy of the press release for the event:

Singularity Summit 2011 in New York City to Explore Watson Victory in Jeopardy

New York, NY This October 15-16th in New York City, a TED-style conference gathering innovators from science, industry, and the public will discuss IBM’s ‘Watson’ computer and other exciting developments in emerging technologies. Keynote speakers at Singularity Summit 2011 include Jeopardy! champion Ken Jennings and famed futurist and inventor Ray Kurzweil. After losing to an IBM computer in Jeopardy!, Jennings wrote, “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

In February, Watson defeated two human champions in Jeopardy!, the game show famous for its mind-bending trivia questions. Surprising millions of TV viewers, Watson took down champions Ken Jennings and Brad Rutter for the $1 million first prize. Facing defeat on the final show, competitor Ken Jennings jokingly wrote in parentheses on his last answer: “I for one welcome our new computer overlords.” Besides Watson, the Singularity Summit 2011 will feature speakers on robotics, nanotechnology, biotechnology, futurism, and other cutting-edge technologies, and is the only conference to focus on the technological Singularity.

Responding to Watson’s victory, leading computer scientist Ray Kurzweil said, “Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.” In Kurzweil’s view, the combination of language understanding and pattern recognition that Watson displays would make its descendants “far superior to a human”. Kurzweil is known for predicting computers whose conversations will be indistinguishable from people by 2029.

Beyond artificial intelligence, the Singularity Summit will also focus on high-tech and where it is going. Economist Tyler Cowen will examine the economic impacts of emerging technologies. Cowen argued in his recent book The Great Stagnation that modern society is on a technological plateau where “a lot of our major innovations are springing up in sectors where a lot of work is done by machines, not by human beings.” Tech entrepreneur and investor Peter Thiel, who sits on the board of directors of Facebook, will share his thoughts on innovation and jumpstarting the economy.

Other speakers include MIT cosmologist Max Tegmark, Allen Brain Institute chief scientist Christof Koch, co-founder of Skype Jaan Tallinn, robotics professors James McLurkin and Robin Murphy, Bionic Builders host Casey Pieretti, the MIT Media Lab’s Riley Crane, MIT polymath Alexander Wissner-Gross, filmmaker and television personality Jason Silva, and Singularity Institute artificial intelligence researcher Eliezer Yudkowsky.

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than you expected, a new breed of artificial intelligence is bearing down on you, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid "post-human".

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.
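To see why crossing that particular point matters so much, here is a toy model of my own (it is not taken from Jaan's slides): suppose that each "generation" of an AI system rewrites its own software, and that the improvement it achieves is proportional to how capable it already is.

    # Toy self-improvement model (my own illustration, not Jaan Tallinn's).
    # Capability grows each generation by an amount proportional to itself:
    # below some threshold the curve crawls; above it, growth runs away.

    def self_improvement(capability, gain_per_unit, generations):
        history = [capability]
        for _ in range(generations):
            capability = capability * (1 + gain_per_unit * capability)
            history.append(round(capability, 2))
        return history

    print(self_improvement(0.1, 0.1, 10))   # weak starting point: barely moves in ten steps
    print(self_improvement(2.0, 0.1, 10))   # past the threshold: runaway growth within ten steps

In a model like this, a modest difference in starting capability makes the difference between decades of crawl and a runaway within a handful of generations. That is the intuition behind "all bets are off".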

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge "hardware overhang".

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
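To put a rough number on that overhang, here is a back-of-envelope calculation of my own, assuming a Moore's Law doubling period of around 18 months and a hypothetical 15-year wait for the missing algorithm:

    # Back-of-envelope estimate of the "hardware overhang": hardware capability
    # doubles every 18 months while the AI algorithm remains unsolved.

    doubling_period_years = 1.5
    years_until_algorithm_found = 15      # hypothetical delay

    overhang_factor = 2 ** (years_until_algorithm_found / doubling_period_years)
    print(overhang_factor)                # 2**10 = 1024, i.e. roughly three orders of magnitude

On these invented figures, the first human-level AI would find itself running on hardware roughly a thousand times more capable than it strictly needs – the "several orders of magnitude" that Jaan describes.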

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  How quickly that can spread around the Internet.  Now imagine that the author of that malware is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I'll end with another potential comparison, which I've written about before.  It's another example of underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
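For readers who like to see the mechanism written out, the lithium-7 chain described in that extract amounts to two reactions (the 17.6 MeV figure is the standard energy release of deuterium-tritium fusion):

    {}^{7}\mathrm{Li} + n \rightarrow {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n

    {}^{2}\mathrm{H} + {}^{3}\mathrm{H} \rightarrow {}^{4}\mathrm{He} + n \quad (+\ 17.6\ \mathrm{MeV})

The unexpected first reaction supplied the extra tritium that fed the second reaction, along with extra neutrons that caused additional fissioning of the uranium tamper.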

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew of a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill as a result of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands, rather than a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  "What's your name?" asked the person at the door.  I gave my name, and in return received a stick-on badge saying:

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How did they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human“.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry? Yes, if we assume that we need to work long hours, to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions. Improved technology, wisely managed, should be able to result, not just in less labour left over for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – sounded lots of notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines“);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence“;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably state very convincing arguments to us, for why such and such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it's likely to prove very hard to "switch off the Internet" (or "switch off Google").  We'll be so dependent on the Internet that we'll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happened in slow motion, we would be OK.  We'd be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place almost too quickly for human governments to intervene.  That's why we need to ensure, ahead of time, that we have a good understanding of what's happening.  And that's why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

