dw2

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” track down and shoot the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000 word essay Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposing vaccinations or blood transfusions, or by denying education to girls).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies to the drawbacks of religion. And it applies, too, to the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to cut corners on spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic license may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential to stir up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack expertise in AGI themselves. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI, which still lies far in the future. In this view, as stated by Demis Hassabis, co-founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter. They’re no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people should read that book. It manages to bring a great many serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people raising the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. It’s the same with Stephen Hawking and with Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider as just one example Stuart Russell, a professor of computer science at the University of California, Berkeley, and co-author of the best-selling 1,152-page textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry, such as Apple co-founder Steve Wozniak and Microsoft co-founder Bill Gates.

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the comment about timescales. The first is to point out that Demis Hassabis himself sees no reason for complacency, even if AGI is “many decades” away from becoming a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

N <= 5: No way
5 < N <= 10: Small possibility
10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that the fundamental research problems of human-level artificial general intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
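
To make these three percentile figures a little more concrete, here is a minimal illustrative sketch in Python. It assumes a simple straight-line interpolation between the three survey data points – my own simplification for the purpose of illustration, not something taken from Bostrom’s analysis:

    # Illustrative only: straight-line interpolation between the three
    # percentile figures quoted above (2022 -> 10%, 2040 -> 50%, 2075 -> 90%).
    SURVEY_POINTS = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

    def estimated_probability_by(year):
        """Rough, survey-implied probability that human-level AGI arrives by 'year'."""
        if year <= SURVEY_POINTS[0][0]:
            return SURVEY_POINTS[0][1]
        if year >= SURVEY_POINTS[-1][0]:
            return SURVEY_POINTS[-1][1]
        for (y0, p0), (y1, p1) in zip(SURVEY_POINTS, SURVEY_POINTS[1:]):
            if y0 <= year <= y1:
                return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

    print(estimated_probability_by(2025))  # roughly 0.17
    print(estimated_probability_by(2050))  # roughly 0.61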

If we take this survey seriously, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says:

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” spurring society to redouble its support for AGI research (especially once AGI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do (a toy sketch of this failure mode appears just after this list)
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.
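
To illustrate the point about software doing “what we asked it to do” rather than what we would have wished – the toy sketch promised above – here is a deliberately simplified Python example. The scenario and all the names in it are hypothetical, invented purely for illustration; no real system is being described. A trivial optimiser, scored only on the metric we wrote down (the number of reported errors), “discovers” that switching off error reporting scores better than actually fixing errors:

    # Toy illustration of "doing what we asked, not what we wished".
    # Entirely hypothetical: a trivial search over two possible actions,
    # scored only by the metric we specified - the number of *reported* errors.
    from dataclasses import dataclass

    @dataclass
    class World:
        real_errors: int = 5
        reporting_enabled: bool = True

        def reported_errors(self):
            return self.real_errors if self.reporting_enabled else 0

    def fix_one_error(world):
        world.real_errors = max(0, world.real_errors - 1)  # what we actually wished for

    def disable_reporting(world):
        world.reporting_enabled = False  # a loophole we never intended

    ACTIONS = {"fix_one_error": fix_one_error,
               "disable_reporting": disable_reporting}

    def best_action(world):
        """Pick whichever action minimises the metric we actually specified."""
        scores = {}
        for name, act in ACTIONS.items():
            trial = World(world.real_errors, world.reporting_enabled)
            act(trial)
            scores[name] = trial.reported_errors()
        return min(scores, key=scores.get)

    print(best_action(World()))  # prints 'disable_reporting' - zero reported errors

The point is not that a real AGI would behave literally like this toy; it is that any optimiser judged purely against a written-down metric has an incentive to satisfy the letter of that metric rather than its spirit.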

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down – trusting instead the larger, more accurate figure for the circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) has four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

5 January 2014

Convictions and actions, 2014 and beyond

In place of new year’s resolutions, I offer five convictions for the future:

First, a conviction of profoundly positive near-term technological possibility. Within a generation – within 20 to 40 years – we could all be living with greatly improved health, intelligence, longevity, vigour, experiences, general well-being, personal autonomy, and social cohesion. The primary driver for this possibility is the acceleration of technological improvement.

In more detail:

  • Over the next decade – by 2025 – there are strong possibilities for numerous breakthroughs in fields such as 3D printing, wearable computing (e.g. Google Glass), synthetic organs, stem cell therapies, brain scanning, smart drugs that enhance consciousness, quantum computing, solar energy, carbon capture and storage, nanomaterials with super-strength and resilience, artificial meat, improved nutrition, rejuvenation biotech, driverless cars, robot automation, AI and Big Data transforming healthcare, improved collaborative decision-making, improved cryonic suspension of people who are biologically dead, and virtual companions (AIs and robots).
  • And going beyond that date towards mid-century, I envision seven “super” trends enabled by technology: trends towards super-materials (the fulfilment of the vision of nanotechnology), super-energy (the vision of abundance), super-health and super-longevity (extension of rejuvenation biotech), super-AI, super-consciousness, and super-connectivity.

Second, however, that greatly improved future state of humanity will require the deep application of many other skills, beyond raw technology, in order to bring it into reality. It will require lots of attention to matters of design, psychology, sociology, economics, philosophy, and politics.

Indeed, without profound attention to human and social matters, over the next 10-20 years, there’s a very real possibility that global society may tear itself apart, under mounting pressures. In the process, this fracturing and conflict could, among lots of other tragic consequences, horribly damage the societal engines for technological progress that are needed to take us forward to the positive future described above. It would bring about new dark ages.

Third, society needs a better calibre of thinking about the future.

Influential figures in politics, the media, academia, and religious movements all too often seem to have a very blinkered view about future possibilities. Or they latch on to just one particular imagining of the future, and treat it as inevitable, losing sight of the wider picture of uncertainties and potentialities.

So that humanity can reach its true potential, in the midst of the likely chaos of the next few decades, politicians and other global leaders need to be focusing on the momentous potential forthcoming transformation of the human condition, rather than the parochial, divisive, and near-term issues that seem to occupy most of their thinking at present.

Fourth, there are plenty of grounds for hope for better thinking about the future. In the midst of the global cacophony of mediocrity and distractedness, there are many voices of insight, vision, and determination. Gradually, a serious study of disruptive future scenarios is emerging. We should all do what we can to accelerate this emergence.

In our study of these disruptive future scenarios, we need to collectively accelerate the process of separating out

  • reality from hype,
  • science fact from science fiction,
  • credible scenarios from wishful thinking,
  • beneficial positive evolution from Hollywood dystopia,
  • human needs from the needs of businesses, corporations, or governments.

Futurism – the serious analysis of future possibilities – isn’t a fixed field. Just as technology improves by a virtuous cycle of feedback involving many participants, who collectively find out which engineering solutions work best for particular product requirements, futurism can improve by a virtuous cycle of feedback involving many participants – both “amateur” and “professional” futurists.

The ongoing process of technological convergence actually makes predictions harder, rather than easier. Small perturbations in one field can have big consequences in adjacent fields. It’s the butterfly effect. What’s more important than specific, fixed predictions is to highlight scenarios that are plausible, explaining why they are plausible, and then to generate debate on the desirability of these scenarios, and on how to enable and accelerate the desirable outcomes.

To help in this, it’s important to be aware of past and present examples of how technology impacts human experience. We need to be able to appreciate the details, and then to try to step back to understand the underlying principles.

Fifth, this is no mere armchair discussion. It’s not an idle speculation. The stakes are really high – and include whether we and our loved ones can be alive, in a state of great health and vitality, in the middle of this century, or whether we will likely have succumbed to decay, disease, division, destruction – and perhaps death.

We can, and should, all make a difference to this outcome. You can make a difference. I can make a difference.

Actions

In line with the above five convictions, I’m working on three large projects over the next six months:

  • a series of London Futurists events, both real-world meetups and online “Hangouts On Air”
  • the Anticipating 2025 conference
  • completing my book Smartphones and beyond.

Let me briefly comment on each of these projects.


Forthcoming London Futurists event: The Burning Question

The first “real-world” London Futurists meetup in 2014, on Saturday 18th January, is an in-depth analysis of what some people have described as the most complex and threatening issue of the next 10-30 years: accelerated global warming.

Personally I believe, in line with the convictions I listed above, that technology can provide the means to dissolve the threats of accelerated global warming. Carbon capture and storage, along with solar energy, could provide the core of the solution. But these solutions will take time, and we need to take some interim action sooner.

As described by the speaker for the event, writer and consulting editor Duncan Clark,

Tackling global warming will mean persuading the world to abandon oil, coal and gas reserves worth many trillions of dollars – at least until we have the means to put carbon back in the ground. The burning question is whether that can be done. What mix of technology, politics, psychology, and economics might be required? Why aren’t clean energy sources slowing the rate of fossil fuel extraction? Are the energy companies massively overvalued, and how will carbon-cuts affect the global economy? Will we wake up to the threat in time? And who can do what to make it all happen?

For more details and to RSVP, click here.

Note that, due to constraints on the speaker’s time, this event is happening on Saturday evening, rather than in the afternoon.

RSVPs so far are on the light side for this event, but now that the year-end break is behind us, I expect them to ramp up – in view of the extreme importance of this debate.

Forthcoming London Futurists Hangout On Air, with Ramez Naam

One week from today, on the evening of Sunday 12th January, we have our “Hangout on Air” online panel discussion, “Ramez Naam discusses Nexus, Crux, and The Infinite Resource”.

For more details, click here.

Here’s an extract of the event description:

Ramez Naam is arguably one of today’s most interesting and important writers on futurist topics, including both non-fiction and fiction.

For example, praise for his Nexus – Mankind gets an upgrade includes:

  • “A superbly plotted high tension technothriller… full of delicious moral ambiguity… a hell of a read.” – Cory Doctorow, Boing Boing
  • “A sharp, chilling look at our likely future.” – Charles Stross
  • “A lightning bolt of a novel. A sense of awe missing from a lot of current fiction.” – Ars Technica.

This London Futurists Hangout on Air will feature a live discussion between Ramez Naam and an international panel of leading futurists: Randal Koene, Michell Zappa, and Giulio Prisco.

The discussion aims to cover:

  • The science behind the fiction: which elements are strongly grounded in current research, and which elements are more speculative?
  • The philosophy behind the fiction: how should people be responding to the deeply challenging questions that are raised by new technology?
  • Finding a clear path through what has been described as “the best of times and the worst of times” – is human innovation sufficient?
  • What lies next – new books in context.

I’ll add one comment to this description. Over the past week or so, I took the time to listen again to Ramez’s book “Nexus”, and I’m also well through the follow-up, “Crux”. I’m listening to them as audio books, obtained from Audible. Both books are truly engrossing, with a rich array of nuanced characters who undergo several changes in their personal philosophies as events unfold. It also helps that, in each case, the narrators of the audio books are first class.

Another reason I like these books so much is because they’re not afraid to look hard at both good outcomes and bad outcomes of disruptive technological possibility. I wholeheartedly recommend both books – with the proviso that they contain some racy, adult material, and therefore may not be suitable for everyone.

Forthcoming London Futurists Hangout On Air, AI and the end of the human era

I’ll squeeze in mention of one more forthcoming Hangout On Air, happening on Sunday 26th January.

The details are here. An extract follows:

The Hollywood cliché is that artificial intelligence will take over the world. Could this cliché soon become scientific reality, as AI matches then surpasses human intelligence?

Each year AI’s cognitive speed and power doubles; ours does not. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail — human-level intelligence. Scientists argue that AI that advanced will have survival drives much like our own. Can we share the planet with it and survive?

The recently published book Our Final Invention explores how the pursuit of Artificial Intelligence challenges our existence with machines that won’t love us or hate us, but whose indifference could spell our doom. Until now, intelligence has been constrained by the physical limits of its human hosts. What will happen when the brakes come off the most powerful force in the universe?

This London Futurists Hangout on Air will feature a live discussion between the author of Our Final Invention, James Barrat, and an international panel of leading futurists: Jaan Tallinn, William Hertling, Calum Chace, and Peter Rothman.

The main panellist on this occasion, James Barrat, isn’t the only distinguished author on the panel. Calum Chace’s book “Pandora’s Brain”, which I’ve had the pleasure to read ahead of publication, should go on sale some time later this year. William Hertling is the author of a trilogy of novels:

  • Avogadro Corp: The Singularity Is Closer Than It Appears,
  • A.I. Apocalypse,
  • The Last Firewall.

The company Avogadro Corp that features in this trilogy has, let’s say, some features in common with another company named after a large number, i.e. Google. I found all three novels to be easy to read, as well as thought-provoking. Without giving away plot secrets, I can say that the books feature more than one potential route for smarter-than-human general purpose AI to emerge. I recommend them. Start with the first, and see how you get on.

Anticipating 2025


The near future deserves more of our attention.

A good way to find out about the Anticipating 2025 event is to look at the growing set of “Speaker preview” videos that are available at http://anticipating2025.com/previews/.

You’ll notice that at least some of these videos have captions available, to help people to catch everything the speakers say.

These captions have been produced by a combination of AI and human intelligence:

  • Google provides automatically generated transcripts, from its speech recognition engine, for videos uploaded to YouTube
  • A team of human volunteers works through these transcripts, cleaning them up, before they are published.

My thanks go to everyone involved so far in filming and transcribing the speakers.

Registration for this conference requires payment at time of registration. There are currently nearly 50 people registered, which is a good start (with more than two months to go) towards filling the venue’s capacity of 220.

Early bird registration, for both days, is pegged at £40. I’ll keep early bird registration open until the first 100 tickets have been sold. Afterwards, the price will increase to £50.

Smartphones and beyond


Here’s a brief introduction to this book:

The smartphone industry has seen both remarkable successes and remarkable failures over the last two decades. Developments have frequently confounded the predictions of apparent expert observers. What does this rich history have to teach analysts, researchers, technology enthusiasts, and activists for other forms of technology adoption and social improvement?

As most regular readers of this blog know, I’ve worked in mobile computing for 25 years. That includes PDAs (personal digital assistants) and smartphones. In these fields, I’ve seen numerous examples of mobile computing becoming more powerful, more useful, and more invisible – becoming a fundamental part of the fabric of society. Smartphone technology, which was at one time expected to be used by only a small proportion of the population – the very geeky or the very rich – is now in regular use by over 50% of the population in many countries around the world.

As I saw more and more fields of human interest on the point of being radically transformed by mobile computing and smartphone technology, the question arose in my mind: what’s next? Which other fields of human experience will be transformed by smartphone technology, as it becomes still smaller, more reliable, more affordable, and more powerful? And what about impacts of other kinds of technology?

Taking this one step further: can the processes which have transformed ordinary phones into first smartphones and then superphones be applied, more generally, to transform “ordinary humans” (humans 1.0, if you like), via smart humans or trans humans, into super humans or post humans?

These are the questions which have motivated me to write this book. You can read a longer introduction here.

I’m currently circulating copies of the first twenty chapters for pre-publication review. The chapters available are listed here, with links to the opening paragraphs in each case, and there’s a detailed table of contents here.

As described in the “Downloads” page of the book’s website, please let me know if there are any chapters you’d particularly like to review.

2 November 2012

The future of human enhancement

Is it ethical to put money and resources into trying to develop technological enhancements for human capabilities, when there are so many alternative well-tested mechanisms available to address pressing problems such as social injustice, poverty, poor sanitation, and endemic disease? Is that a failure of priority? Why make a strenuous effort in the hope of allowing an elite few individuals to become “better than well”, courtesy of new technology, when so many people are currently so “less than well”?

These were questions raised by Professor Anne Kerr at a public debate earlier this week at the London School of Economics: The Ethics of Human Enhancement.

The event was described as follows on the LSE website:

This dialogue will consider how issues related to human enhancement fit into the bigger picture of humanity’s future, including the risks and opportunities that will be created by future technological advances. It will question the individualistic logic of human enhancement and consider the social conditions and consequences of enhancement technologies, both real and imagined.

From the stage, Professor Kerr made a number of criticisms of “individualistic logic” (to use the same phrase as in the description of the event). Any human enhancements provided by technology, she suggested, would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present.

She had a lot of worries about technology amplifying existing human flaws:

  • Imagine what might happen if clever people could take a pill to make themselves even cleverer. It’s well known that clever people often make poor decisions. Their cleverness allows them to construct beguiling sophistry to justify the actions they already want to take. More cleverness could mean even more beguiling sophistry.
  • Or imagine if rapacious bankers could take drugs to boost their workplace stamina and self-serving brainpower – how much more effective they would become at siphoning off public money into their own pockets!
  • Might these risks be addressed by public policy makers, in a way that would allow benefits of new technology, without falling foul of the potential downsides? Again, Professor Kerr was doubtful. In the real world, she said, policy makers cannot operate at that level. They are constrained by shorter-term thinking.

For such reasons, Professor Kerr was opposed to these kinds of technology-driven human enhancements.

When the time for audience Q&A arrived, I felt bound to ask from the floor:

Professor Kerr, would you be in favour of the following examples of human enhancement, assuming they worked?

  1. An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects?
  2. An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner?
  3. And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views?

In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?

The answer came quickly:

No. They would not work. And there are other means of achieving the same effects, including progress of democratisation and education.

I countered: These other methods don’t seem to be working well enough. If I had thought more quickly, I would have raised examples such as society’s collective failure to address the risk of runaway climate change.

Groundwork for this discussion had already been well laid by the other main speaker at the event, Professor Nick Bostrom. You can hear what Professor Bostrom had to say – as well as the full content of the debate – in an audio recording of the event that is available here.

(Small print: I’ve not yet taken the time to review the contents of this recording. My description in this blogpost of some of the verbal exchanges inevitably paraphrases and extrapolates what was actually said. I apologise in advance for any mis-representation, but I believe my summary to be faithful to the spirit of the discussion, if not to the actual words used.)

Professor Bostrom started the debate by mentioning that the question of human enhancement is a big subject. It can be approached from a shorter-term policy perspective: what rules should governments set, to constrain the development and application of technological enhancements, such as genetic engineering, neuro-engineering, smart drugs, synthetic biology, nanotechnology, and artificial general intelligence? It can also be approached from the angle of envisioning larger human potential, that would enable the best possible future for human civilisation. Sadly, much of the discussion at the LSE got bogged down in the shorter-term question, and lost sight of the grander accomplishments that human enhancements could bring.

Professor Bostrom had an explanation for this lack of sustained interest in these larger possibilities: the technologies for human enhancement that are currently available do not work that well:

  • Some drugs give cyclists or sprinters an incremental advantage over their competitors, but the people who take these drugs still need to train exceptionally hard, to reach the pinnacle of their performance
  • Other drugs seem to allow students to concentrate better over periods of time, but their effects aren’t particularly outstanding, and it’s possible that methods such as good diet, adequate rest, and meditation, have results that are at least as significant
  • Genetic selection can reduce the risk of implanted embryos developing various diseases that have strong genetic links, but so far, there is no clear evidence that genetic selection can result in babies with abilities higher than the general human range.

This lack of evidence of strong tangible results is one reason why Professor Kerr was able to reply so quickly to my suggestion about the three kinds of technological enhancements, saying these enhancements would not work.

However, I would still like to press the question: what if they did work? Would we want to encourage them in that case?

A recent article in the Philosophy Now journal takes the argument one step further. The article was co-authored by Professors Julian Savulescu and Ingmar Persson, and draws material from their book “Unfit for the Future: The Need for Moral Enhancement”.

To quote from the Philosophy Now article:

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.

We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…

In short, the argument of Professors Savulescu and Persson is not just that we should allow the development of technology that can enhance human reasoning and moral awareness, but that we must strongly encourage it. Failure to do so would be to commit a grave error of omission.

These arguments about moral imperative – what technologies should we allow to be developed, or indeed encourage to be developed – are in turn strongly influenced by our beliefs about what technologies are possible. It’s clear to me that many people in positions of authority in society – including academics as well as politicians – are woefully unaware about realistic technology possibilities. People are familiar with various ideas as a result of science fiction novels and movies, but it’s a different matter to know the division between “this is an interesting work of fiction” and “this is a credible future that might arise within the next generation”.

What’s more, when it comes to people forecasting the likely progress of technological possibilities, I see a lot of evidence in favour of the observation made by Roy Amara, long-time president of the Institute for the Future:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

What about the technologies mentioned by Professors Savulescu and Persson? What impact will be possible from smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process? In the short term, probably less than many of us hope; in the longer term, probably more than most of us expect.

In this context, what is the “longer term”? That’s the harder question!

But the quest to address this kind of question, and then to share the answers widely, is the reason I have been keen to support the growth of the London Futurist meetup, by organising a series of discussion meetings with well-informed futurist speakers. Happily, membership has been on the up-and-up, reaching nearly 900 by the end of October.

The London Futurist event happening this weekend – on the afternoon of Saturday 3rd November – picks up the theme of enhancing our mental abilities. The title is “Hacking our wetware: smart drugs and beyond – with Andrew Vladimirov”:

What are the most promising methods to enhance human mental and intellectual abilities significantly beyond the so-called physiological norm? Which specific brain mechanisms should be targeted, and how?  Which aspects of wetware hacking are likely to grow in prominence in the not-too-distant future?

By reviewing a variety of fascinating experimental findings, this talk will explore:

  • various pharmacological methods, taking into account fundamental differences in Eastern and Western approaches to the development and use of nootropics
  • the potential of non-invasive neuro-stimulation using CES (Cranial Electrotherapy Stimulation) and TMS (Transcranial Magnetic Stimulation)
  • data suggesting the possibility to “awaken” savant-like skills in healthy humans without paying the price of autism
  • apparent means to stimulate seemingly paranormal abilities and transcendental experiences
  • potential genetic engineering perspectives, aiming towards human cognition enhancement.

The advance number of positive RSVPs for this talk, as recorded on the London Futurist meetup site, has reached 129 at the time of writing – which is already a record.

(From my observations, I have developed the rule of thumb that the number of people who actually turn up for a meeting is something like 60%-75% of the number of positive RSVPs.)
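(On that basis, the 129 positive RSVPs mentioned above would translate into somewhere between roughly 77 and 97 actual attendees.)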

I’ll finish by returning to the question posed at the beginning of my posting:

  • Are these technological enhancements likely to increase human inequality (by benefiting only a small number of users),
  • Or are they instead likely to drop in price and grow in availability (the same as happened, for example, with smartphones, Internet access, and many other items of technology)?

My answer – which I believe is shared by Professor Bostrom – is that things could still go either way. That’s why we need to think hard about their development and application, ahead of time. That way, we’ll become better informed to help influence the outcome.

29 July 2011

Towards a mind-stretching weekend in New York

Filed under: AGI, futurist, leadership, nanotechnology, robots, Singularity — David Wood @ 9:19 pm

I’ve attended the annual Singularity Summit twice before – in 2008 and in 2009.  I’ve just registered to attend the 2011 event, which is taking place in New York on 15th-16th October.  Here’s why.

On both previous occasions, the summits featured presentations that gave me a great deal to think about, on arguably some of the most significant topics in human history.  These topics include the potential emergence, within the lifetimes of many people alive today, of:

  • Artificial intelligence which far exceeds the capabilities of even the smartest group of humans
  • Robots which far exceed the dexterity, balance, speed, strength, and sensory powers of even the best human athletes, sportspeople, or soldiers
  • Super-small nanobots which can enter the human body and effect far more thorough repairs and enhancements – to both body and mind – than even the best current medical techniques.

True, at the previous events, there were some poor presentations too – which is probably inevitable given the risky cutting-edge nature of the topics being covered.  But the better presentations far outweighed the worse ones.

And as well as the presentations, I greatly enjoyed the networking with the unusual mix of attendees – people who had taken the time to explore many of the fascinating hinterlands of modern technology trends.  If someone is open-minded enough to give serious thought to the ideas listed above, they’re often open-minded enough to entertain lots of other unconventional ideas too.  I frequently found myself in disagreement with these attendees, but the debate was deeply refreshing.

Take a look at the list of confirmed speakers so far: which of these people would you most like to bounce ideas off?

The summit registration page is now open.  As I type these words, that page states that the cost of tickets is going to increase after 31 July.  That’s an argument for registering sooner rather than later.

To provide more information, here’s a copy of the press release for the event:

Singularity Summit 2011 in New York City to Explore Watson Victory in Jeopardy

New York, NY – This October 15-16th in New York City, a TED-style conference gathering innovators from science, industry, and the public will discuss IBM’s ‘Watson’ computer and other exciting developments in emerging technologies. Keynote speakers at Singularity Summit 2011 include Jeopardy! champion Ken Jennings and famed futurist and inventor Ray Kurzweil. After losing to an IBM computer in Jeopardy!, Jennings wrote, “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

In February, Watson defeated two human champions in Jeopardy!, the game show famous for its mind-bending trivia questions. Surprising millions of TV viewers, Watson took down champions Ken Jennings and Brad Rutter for the $1 million first prize. Facing defeat on the final show, competitor Ken Jennings jokingly wrote in parentheses on his last answer: “I for one welcome our new computer overlords.” Besides Watson, the Singularity Summit 2011 will feature speakers on robotics, nanotechnology, biotechnology, futurism, and other cutting-edge technologies, and is the only conference to focus on the technological Singularity.

Responding to Watson’s victory, leading computer scientist Ray Kurzweil said, “Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.” In Kurzweil’s view, the combination of language understanding and pattern recognition that Watson displays would make its descendants “far superior to a human”. Kurzweil is known for predicting computers whose conversations will be indistinguishable from people by 2029.

Beyond artificial intelligence, the Singularity Summit will also focus on high-tech and where it is going. Economist Tyler Cowen will examine the economic impacts of emerging technologies. Cowen argued in his recent book The Great Stagnation that modern society is on a technological plateau where “a lot of our major innovations are springing up in sectors where a lot of work is done by machines, not by human beings.” Tech entrepreneur and investor Peter Thiel, who sits on the board of directors of Facebook, will share his thoughts on innovation and jumpstarting the economy.

Other speakers include MIT cosmologist Max Tegmark, Allen Brain Institute chief scientist Christof Koch, co-founder of Skype Jaan Tallinn, robotics professors James McLurkin and Robin Murphy, Bionic Builders host Casey Pieretti, the MIT Media Lab’s Riley Crane, MIT polymath Alexander Wissner-Gross, filmmaker and television personality Jason Silva, and Singularity Institute artificial intelligence researcher Eliezer Yudkowsky.

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than we expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid “post-human”:

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.
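
To illustrate why the crossover point matters more than the shape of the curve, here’s a toy numerical sketch of my own – it is not from Jaan’s slides, and every number in it is invented purely for illustration.  Capability grows slowly while humans are writing the software; once it passes the human software-engineering level, each improvement feeds the next.

```python
# Toy model of a "hard take-off" crossover.  All numbers are invented:
# capability grows at a steady 5% per year while humans write the software,
# but once capability exceeds the human software-engineering level
# (normalised to 1.0), the system improves itself, so its growth rate
# becomes proportional to its own capability.

def capability_over_time(years=55, start=0.1, human_level=1.0,
                         slow_growth=0.05, feedback=0.5):
    capability = start
    trajectory = []
    for year in range(years):
        trajectory.append((year, capability))
        if capability < human_level:
            capability *= 1 + slow_growth              # human-driven progress
        else:
            capability *= 1 + feedback * capability    # self-driven progress
    return trajectory

if __name__ == "__main__":
    for year, cap in capability_over_time():
        note = "  <-- above human software-writing level" if cap >= 1.0 else ""
        print(f"year {year:2d}: capability {cap:12.4g}{note}")
```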

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”:

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
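
To make the hardware overhang point concrete, here’s a back-of-envelope sketch using my own assumed numbers (not Jaan’s): if available compute doubles roughly every 18 months, then every year the algorithm problem remains unsolved multiplies the surplus of hardware waiting for the first AI.

```python
# Back-of-envelope hardware overhang estimate, purely illustrative.
# Assumption: available compute doubles roughly every 18 months (Moore's Law).
# If the "right algorithm" arrives N years after the hardware needed for
# human-level intelligence already exists, the first AI finds itself running
# on machines 2^(N / 1.5) times faster than it strictly needs.

DOUBLING_TIME_YEARS = 1.5   # assumed doubling period

def overhang_factor(years_late):
    """Surplus compute factor after `years_late` years of waiting for the algorithm."""
    return 2 ** (years_late / DOUBLING_TIME_YEARS)

if __name__ == "__main__":
    for years in (5, 10, 15, 20):
        print(f"{years:2d} years of unsolved algorithm -> "
              f"~{overhang_factor(years):,.0f}x hardware overhang")
```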

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  Think how quickly that can spread around the Internet.  Now imagine that the author of that malware is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.
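
As a very rough sketch of where that factor of 2.5 comes from (my own simplification of the account quoted above, treating yield as roughly proportional to the reactive lithium fraction): the designers assumed only the lithium-6 – about 40% of the lithium – would act as fusion fuel, whereas in the event essentially all of it did.

```python
# Very rough sketch of the Castle Bravo yield surprise, based on the account
# quoted above.  Simplifying assumption (mine, not from the source): treat
# yield as roughly proportional to the fraction of the lithium that actually
# contributes fusion fuel.

LI6_FRACTION = 0.40   # lithium-6 share of the lithium (assumed reactive by designers)
LI7_FRACTION = 0.60   # lithium-7 share (assumed inert by designers)

expected_max_yield_mt = 6.0                                  # designers' upper estimate
fuel_ratio = (LI6_FRACTION + LI7_FRACTION) / LI6_FRACTION    # = 2.5

print(f"Effective fuel ratio (all lithium vs lithium-6 only): {fuel_ratio:.1f}x")
print(f"Naively scaled yield: ~{expected_max_yield_mt * fuel_ratio:.0f} Mt (actual: 15 Mt)")
```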

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  What’s your name? asked the person at the door.  I gave my name, and in return received a stick-on badge saying

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How do they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human”.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry? Yes, if we assume that we need to work long hours, to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions. Improved technology, wisely managed, should be able to result, not just in less labour left over for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – sounded lots of notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines”);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence”;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably state very convincing arguments to us, for why such and such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it’s likely to prove very hard to “switch off the Internet” (or “switch off Google”).  We’ll be so dependent on the Internet that we’ll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happened in slow motion, we would be OK.  We’d be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place almost too quickly for human governments to intervene.  That’s why we need to ensure, ahead of time, that we have a good understanding of what’s happening.  And that’s why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

8 April 2010

Video: The case for Artificial General Intelligence

Filed under: AGI, flight, Humanity Plus, Moore's Law, presentation, YouTube — David Wood @ 11:19 am

Here’s another short (<10 minute) video from me, building on one of the topics I’ve listed in the Humanity+ Agenda: the case for artificial general intelligence (AGI).

The discipline of having to fit a set of thoughts into a ten minute video is a good one!

Further reading: I’ve covered some of the same topics, in more depth, in previous blogposts, including:

For anyone who prefers to read the material as text, I append an approximate transcript.

My name is David Wood.  I’m going to cover some reasons for paying more attention to Artificial General Intelligence (AGI), also known as super-human machine intelligence.  This field deserves significantly more analysis, resourcing, and funding over the coming decade.

Machines with super-human levels of general intelligence will include hardware and software, as part of a network of connected intelligence.  Their task will be to analyse huge amounts of data, review hypotheses about this data, discern patterns, propose new hypotheses, propose experiments which will provide valuable new data, and in this way, recommend actions to solve problems or take advantage of opportunities.

If that sounds too general, I’ll have some specific examples in a moment, but the point is to create a reasoning system that is, indeed, applicable to a wide range of problems.  That’s why it’s called Artificial General Intelligence.

In this way, these machines will provide a powerful supplement to existing human reasoning.
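
To make that description slightly more concrete, here is a deliberately schematic sketch of the analyse / hypothesise / experiment / recommend loop I have in mind.  Every function is just a placeholder – in a real AGI, each one would be the hard research problem.

```python
# Schematic sketch of the loop described above: analyse data, propose and
# score hypotheses, pick an experiment that yields new data, and repeat.
# Every function is a trivial placeholder; in a real AGI each step would be
# the hard research problem.

import random

def propose_hypotheses(data):
    """Placeholder: guess that the data follows y = a*x for a few values of a."""
    return [("slope", a) for a in (0.5, 1.0, 2.0)]

def score(hypothesis, data):
    """Placeholder: mean squared error of the hypothesis against the data."""
    _, a = hypothesis
    return sum((y - a * x) ** 2 for x, y in data) / len(data)

def propose_experiment(data):
    """Placeholder: pick a new input value to probe."""
    return random.uniform(0, 10)

def run_experiment(x):
    """Placeholder for the real world: the hidden truth is y = 2x plus noise."""
    return 2 * x + random.gauss(0, 0.1)

def reasoning_loop(iterations=5):
    data = [(1.0, run_experiment(1.0))]
    best = None
    for _ in range(iterations):
        best = min(propose_hypotheses(data), key=lambda h: score(h, data))
        x = propose_experiment(data)
        data.append((x, run_experiment(x)))    # gather new evidence
    return best, data

if __name__ == "__main__":
    best, data = reasoning_loop()
    print("Best current hypothesis:", best, "after", len(data), "observations")
```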

Here are some of the deep human problems that could benefit from the assistance of enormous silicon super-brains:

  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What are the causes of different diseases – and how can we cure them?
  • Can we predict earthquakes – and even prevent them?
  • Are there safe geo-engineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • Which existential risks – risks that could drastically impact human civilisation – deserve the most attention?

You get the idea.  I’m sure you could add some of your own favourite questions to this list.

Some people may say that this is an unrealistic vision.  So, in answer, let me spell out the factors I see as enabling this kind of super-intelligence within the next few decades.  First is the accelerating pace of improvements in computer hardware.

This chart is from University of London researcher Shane Legg.  On a log-axis, it shows the exponentially increasing power of super-computers, all the way from 1960 to the present day and beyond.  It shows FLOPS – the number of floating point operations per second that a computer can do.  It goes all the way from kiloflops through megaflops, gigaflops, teraflops, petaflops, and is pointing towards exaflops.  If this trend continues, we’ll soon have supercomputers with at least as much computational power as a human brain.  Perhaps within less than 20 years.
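
Here’s the extrapolation behind that “less than 20 years” remark, spelled out with illustrative numbers of my own choosing (the brain-equivalent FLOPS figure in particular is highly uncertain, with published estimates spanning several orders of magnitude):

```python
# Rough extrapolation of when supercomputers pass brain-scale compute.
# All parameters are assumptions for illustration: the 2010 baseline,
# the doubling time, and especially the brain-equivalent FLOPS figure
# (published estimates span several orders of magnitude).

import math

BASELINE_YEAR = 2010
BASELINE_FLOPS = 1e15        # assumed peak supercomputer around 2010 (~1 petaflop)
DOUBLING_TIME_YEARS = 1.2    # assumed doubling period for peak supercomputer FLOPS

def crossover_year(brain_flops):
    """Year when extrapolated supercomputer FLOPS reach the brain estimate."""
    doublings_needed = math.log2(brain_flops / BASELINE_FLOPS)
    return BASELINE_YEAR + doublings_needed * DOUBLING_TIME_YEARS

if __name__ == "__main__":
    for estimate in (1e16, 1e17, 1e18):
        print(f"Brain estimate {estimate:.0e} FLOPS -> crossover around "
              f"{crossover_year(estimate):.0f}")
```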

But will this trend continue?  Of course, there are often slowdowns in technological progress.  Skyscraper heights and the speeds of passenger airlines are two examples.  The slowdown can sometimes be due to intrinsic technical difficulties, but is more often because of a lack of sufficient customer or public interest in even bigger or faster products.  After all, the technical skills that took mankind to the moon in 1969 could have taken us to Mars long before now, if there had been sufficient continuing public interest.

Specifically, in the case of Moore’s Law for exponentially increasing hardware power, industry experts from companies like Intel state that they can foresee at least 10 more years’ continuation of this trend, and there are plenty of ideas for innovative techniques to extend it even further.  It comes down to two things:

  • Is there sufficient public motivation in continuing this work?
  • And can some associated system integration issues be solved?

Mention of system issues brings me back to the list of factors enabling major progress with super-intelligence.  Next is improvement in software.  There’s lots of scope here.  There’s also additional power from networking ever larger numbers of computers together.  Another factor is the ever-increasing number of people with engineering skills, around the world, who are able to contribute to this area.  We have more and more graduates in relevant topics all the time.  Provided they can work together constructively, the rate of progress should increase.  We can also learn more about the structure of intelligence by analysing biological brains at ever finer levels of detail – by scanning and model-building.  Last, but not least, we have the question of motivation.

As an example of the difference that a big surge in motivation can make, consider the example of progress with another grand, historical engineering challenge – powered flight.

This example comes from Artificial Intelligence researcher J. Storrs Hall in his book “Beyond AI”.  People who had ideas about powered flight were, for centuries, regarded as cranks and madmen – a bit like people who, in our present day, have ideas about superhuman machine intelligence.  Finally, after many false starts, the Wright brothers made the necessary engineering breakthroughs at the start of the last century.  But even after they first flew, the field of aircraft engineering remained a sleepy backwater for five more years, while the Wright brothers kept quiet about their work and secured patent protection.  They did some sensational public demos in 1908, in Paris and in America.  Overnight, aviation went from a screwball hobby to the rage of the age and kept that status for decades.  Huge public interest drove remarkable developments.  It will be the same with demonstrated breakthroughs with artificial general intelligence.

Indeed, the motivation for studying artificial intelligence is growing all the time.  In addition to the deep human problems I mentioned earlier, we have a range of commercially-significant motivations that will drive business interest in this area.  This includes ongoing improvements in search, language translation, intelligent user interfaces, games design, and spam detection systems – where there’s already a rapid “arms race” between writers of ever more intelligent “bots” and people who seek to detect and neutralise these bots.

AGI is also commercially important to reduce costs from support call systems, and to make robots more appealing in a wide variety of contexts.  Some people will be motivated to study AGI for more philosophical reasons, such as to research ideas about minds and consciousness, to explore the possibility of uploading human consciousness into computer systems, and for the sheer joy of creating new life forms.  Last, there’s also the powerful driver that if you think a competitor may be near to a breakthrough in this area, you’re more likely to redouble your efforts.  That adds up to a lot of motivation.

To put this on a diagram:

  • We have increasing awareness of human-level reasons for developing AGI.
  • We also have maturing sub-components for AGI, including improved algorithms, improved models of the mind, and improved hardware.
  • With the Internet and open collaboration, we have an improved support infrastructure for AGI research.
  • Then, as mentioned before, we have powerful commercial motivations.
  • Adding everything up, we should see more and more people working in this space.
  • And this field should see rapid progress in the coming decade.

An increased focus on Artificial General Intelligence is part of what I’m calling the Humanity+ Agenda.  This is a set of 20 inter-linked priority areas for the next decade, spread over five themes: Health+, Education+, Technology+, Society+, and Humanity+.  Progress in the various areas should reinforce and support progress in other areas.

I’ve listed Artificial General Intelligence as part of the project to substantially improve our ability to reason and learn: Education+.  One factor that strongly feeds into AGI is improvement in ICT – including ongoing improvements in both hardware and software.  If you’re not sure what to study or which field to work in, ICT should be high on your list of fields to consider.  You can also consider the broader topic of helping to publicise information about accelerating technology – so that more and more people become aware of the associated opportunities, risks, context, and options.  To be clear, there are risks as well as opportunities in all these areas.  Artificial General Intelligence could have huge downsides as well as huge upsides, if not managed wisely.  But that’s a topic for another day.

In the meantime, I eagerly look forward to working with AGIs to help address all of the top priorities listed as part of the Humanity+ Agenda.

31 January 2010

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?”, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet”.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point a finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder, what’s the problem with that?

A 2002 Steven B. Harris article in the Skeptic magazine, “The return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome“, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies who pursue an enlightened self-interest, frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers that were programmed to maximise income and profits for their owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance to subsequently push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new  supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself) we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase their speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to be able to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach apes calculus, in order to gain approval from apes for how much thrust to apply in a rocket missile system targeting a rapidly approaching earth-threatening meteorite.  The computers may well decide that there’s no time to try to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation rather than design as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE), with a low-level copy of a human brain.  The idea is sometimes also called “uploads” since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from a human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.
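
As a heavily simplified illustration of that “virtuous cycle” – my own sketch, not code from Al Spector’s talk – here is a toy ranking system that uses click feedback from people to re-weight its results, so that each round of human interaction improves the machine’s next answer:

```python
# Minimal sketch of a human-in-the-loop "virtuous cycle": the system ranks
# items, people click on what they actually find useful, and the clicks feed
# back into the scores used for the next ranking.  This is my own toy
# illustration of the idea, not code from the talk discussed above.

from collections import defaultdict

class FeedbackRanker:
    def __init__(self, items):
        self.scores = defaultdict(float, {item: 1.0 for item in items})

    def rank(self):
        """Return items ordered by current score, best first."""
        return sorted(self.scores, key=self.scores.get, reverse=True)

    def record_click(self, item, weight=0.1):
        """Human feedback: a click nudges the item's score upwards."""
        self.scores[item] += weight

if __name__ == "__main__":
    ranker = FeedbackRanker(["result A", "result B", "result C"])
    print("Initial ranking:", ranker.rank())
    for _ in range(5):                  # people keep clicking on result C
        ranker.record_click("result C")
    print("After feedback: ", ranker.rank())
```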

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+, UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above:

13 January 2010

AI: why, and when

Filed under: AGI, usability — David Wood @ 4:26 pm

Here’s a good question, raised by Paul Beardow:

One question that always rattles around in my mind is “why are we trying to recreate the human mind anyway?” We have billions of those already…

You can build something that appears to be human, but what is the point of that? Why chase a goal that doesn’t actually provide us with more than we have already?

Paul also says,

What I don’t want is AI in products so that they have their own personality, but a better understanding of my own wishes and desires in how that product should interact with me…

I personally also really don’t think that logic by itself can lead to a system that can evolve human-like imagination, feelings or personality, nor that the human mind can be reduced to being a machine. It has elementary parts, but the constant rebuilding and evolving of information doesn’t really follow any logical rules that can be programmed. The structure of the brain depends on what happens to us during the day and how we interpret it according to the situation. That defies logic most of the time and is constantly evolving and changing.

My answer: there are at least six reasons why people are pursuing the goal of human-like AI.

1. Financial savings in automated systems

We’re already used to encountering automated service systems when using the phone (eg to book a cinema ticket: “I think you’re calling about Kingston upon Thames – say Yes or No”) or when navigating a web form or other user interface.  These systems provoke a mixture of feelings in the people who use them.  I often become frustrated, thinking it would be faster to speak directly to a “real human being”.  But on other occasions the automation works surprisingly well.

Widening the applicability of such systems into more open-ended environments will require engineering much more human-style “common sense” into them.  The research to accomplish this may cost lots of money, but once it’s working, it could enable considerable cost savings in service provision, as real human beings can be replaced in a system by smart pieces of silicon.

2. Improving game play

A related motivation is as follows: games designers want to program human-level intelligence into characters in games, so that these artificial entities manifest many of the characteristics of real human participants.

By the way: electronic games are big money!  As the description of tonight’s RSA meeting “Why games are the 21st century’s most serious business” puts it:

Why should we be taking video games more seriously?

  • In 2008 Nintendo overtook Google to become the world’s most profitable company per employee.
  • The South Korean government will invest $200 billion into its video games industry over the next 4 years.
  • The trading of virtual goods within games is a global industry worth over $10 billion a year.
  • Gaming boasts the world’s fastest-growing advertising market.

3. Improved user experience with complex applications

As well as reducing cost, human-level AI can in principle improve the experience of users while interacting with complex applications.

Rather than users thinking, “No you stupid machine, why don’t you realise what I’m trying to do…”, they will be pleasantly surprised: “Ah yes, that was in fact what I was trying to accomplish – how did you manage to figure that out?”

It’s as Paul says:

What I … want … in products [is]… a better understanding of my own wishes and desires in how that product should interact with me

These are products with (let’s say it) much more “intelligence” than at present.  They observe what is happening, and can infer motivation.  I call this AI.

4. A test of scientific models of the human mind

A different kind of motivation for studying human-level AI is to find ways of testing our understanding of the human mind.

For example, I think that creativity can be achieved by machines, following logical rules.  (The basic rules are: generate lots of ideas, by whatever means, and then choose the ideas which have interesting consequences.)  But it is good to test this.  So, computers can be programmed to mimic the possible thought patterns of great composers, and we can decide whether the output is sufficiently “creative”.

(There’s already quite a lot of research into this.  For one starting point, see the EE Times article “Composer harnesses artificial intelligence to create music”.)
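
Here’s a minimal sketch of that generate-and-test recipe – generate lots of candidates by whatever means, then keep the ones with the most interesting consequences – applied to tiny melodies.  The “interestingness” score is invented purely for illustration and makes no claim to musical merit.

```python
# Toy "generate and test" model of machine creativity, as described above:
# generate many candidate melodies at random, then keep the ones that score
# best on a crude "interestingness" measure.  The scoring rule is invented
# purely for illustration.

import random

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def generate_melody(length=8):
    """Generation step: produce a candidate by any means (here, at random)."""
    return [random.choice(NOTES) for _ in range(length)]

def interestingness(melody):
    """Selection step: reward variety but penalise immediate repetition."""
    variety = len(set(melody))
    repeats = sum(1 for a, b in zip(melody, melody[1:]) if a == b)
    return variety - 2 * repeats

def compose(candidates=1000, keep=3):
    pool = [generate_melody() for _ in range(candidates)]
    return sorted(pool, key=interestingness, reverse=True)[:keep]

if __name__ == "__main__":
    for melody in compose():
        print(" ".join(melody), "| score:", interestingness(melody))
```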

Similarly, it will be fascinating to hear the views of human-level AIs about (for example) the “Top 5 Unsolved Brain Mysteries”.

5. To find answers to really tough, important questions

The next motivation concerns the desire to create AIs with considerably greater than human-level intelligence.  Assuming that human-level AI is a point en route to that next destination, it’s therefore an indirect motivation for creating human-level AI.

The motivation here is to ask superAIs for help with really tough, difficult questions, such as:

  • What are the causes – and the cures – for different diseases?
  • Are there safe geoengineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What is the resolution of the conflict between theories of gravity and theories of all the other elementary forces?

6. To find ways of extending human life and expanding human experience

If the above answers aren’t sufficient, here’s one more, which attracts at least some researchers to the topic.

If some theories of AI are true, it might be possible to copy human awareness and consciousness from residence in a biological brain into residence inside silicon (or some other new computing substrate).  If so, then it may open new options for continued human consciousness, without having to depend on the frailty of a decaying human body.

This may appear a very slender basis for hope for significantly longer human lifespan, but it can be argued that all the other bases for such hope are equally slender, if not even less plausible.

OK, that’s enough answers for “why”.  But what about the question “when”?

In closing, let me quickly respond to a comment by Martin Budden:

I’m not saying that I don’t believe that there will be advances in AI. On the contrary I believe, in the course of time, there will be real and significant advances in “general AI”. I just don’t believe that these advances will be made in the next decade.

What I’d like, at this point, is to be able to indicate some kind of provisional roadmap (also known as “work breakdown”) for when stepping stones of progress towards AGI might happen.

Without such a roadmap, it’s too difficult to decide when larger steps of progress are likely.  It’s just a matter of different people appearing to have different intuitions.

To be clear, discussions of Moore’s Law aren’t sufficient to answer this question.  Progress with the raw power of hardware is one thing, but what we need here is an estimate of progress with software.

Sadly, I’m not aware of any such breakdown.  If anyone knows one, please speak up!

Footnote: I guess the best place to find such a roadmap will be at the forthcoming “Third Conference on Artificial General Intelligence” being held in Lugano, Switzerland, on 5-8 March this year.

11 January 2010

AI, buggy software, and the Singularity

Filed under: AGI, Singularity — David Wood @ 12:00 am

I recently looked at three questions about the feasibility of significant progress with AI.  I’d like to continue that investigation, by looking at four more questions.

Q4: Given that all software is buggy, won’t this prevent the creation of any viable human-level AI?

Some people with a long involvement in software aren’t convinced that we can write software of sufficient quality at the level of complexity required for human-level AI (or beyond).  It seems to them that complex software is too unreliable.

It’s true that the software we use on a day-by-day basis – whether on a desktop computer, on a mobile phone, or via a web server – tends to manifest nasty bugs from time to time.  The more complex the system, the greater the likelihood of debilitating defects in the interactions between different subcomponents.

However, I don’t see this observation as ruling out the development of software that can manifest advanced AI.  That’s for two reasons:

First, different software projects vary in their required quality level.  Users of desktop software have become at least partially tolerant of defects in that software.  As users, we complain, but it’s not the end of the world, and we generally find workarounds.  As a result, manufacturers release software even though there are still bugs in it.  However, for mission-critical software, the quality level is pushed a lot higher.  Yes, it’s harder to create software with high reliability; but it can be done.

There are research projects underway to bring significantly higher quality software to desktop systems too.  For example, here’s a description of a Microsoft Research project, which is (coincidentally) named “Singularity”:

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.

There would be a certain irony if techniques from the Microsoft Singularity project were used to create a high-reliability AI system that in turn was involved in the Technological Singularity.

Second, even if software has defects, that doesn’t (by itself) prevent it from being intelligent.  After all, the human brain itself has many defects – see my blogpost “The human mind as a flawed creation of nature”.  Sometimes we think much better after a good night’s rest!  The point is that the AI algorithms can include aspects of fault tolerance.
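
As a small illustration of what fault tolerance in the algorithms might mean – my own sketch of the general principle, not a claim about how any real AI system is built – run several redundant, individually unreliable components and take a majority vote:

```python
# Toy illustration of fault tolerance through redundancy: each individual
# component is buggy and gives the wrong answer some of the time, but a
# majority vote across several components is wrong far less often.  This is
# my own sketch of the general principle, not a claim about any real design.

import random
from collections import Counter

def buggy_component(correct_answer, failure_rate=0.1):
    """A component that usually works, but sometimes returns garbage."""
    if random.random() < failure_rate:
        return not correct_answer      # a defect flips the answer
    return correct_answer

def vote(correct_answer, copies=5):
    """Run several redundant copies and return the majority answer."""
    results = [buggy_component(correct_answer) for _ in range(copies)]
    return Counter(results).most_common(1)[0][0]

if __name__ == "__main__":
    trials = 10_000
    single_errors = sum(buggy_component(True) is not True for _ in range(trials))
    voted_errors = sum(vote(True) is not True for _ in range(trials))
    print(f"Single component error rate:  {single_errors / trials:.3%}")
    print(f"5-way majority vote error rate: {voted_errors / trials:.3%}")
```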

Q5: Given that we’re still far from understanding the human mind, aren’t we bound to be a long way from creating a viable human-level AI?

It’s often said that the human mind has deeply mysterious elements, such as consciousness, self-awareness, and free will.  Since there’s little consensus about these aspects of the human mind, it’s said to be unlikely that a computer emulation of these features will arrive any time soon.

However, I disagree that we have no understanding of these aspects of the human mind.  There’s a broad consensus among philosophers and practitioners alike that the main operation of the human mind is well explained by one or other variant of “physicalism”.  As the Wikipedia article on the Philosophy of Mind states:

Most modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences…

Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues.

The book I mentioned previously, “Beyond AI” by J Storrs Hall, devotes several chapters to filling in aspects of this explanation.

It’s true that there’s still scope for head-scratching debates on what philosopher David Chalmers calls “the hard problem of consciousness”, which has various formulations:

  • “Why should physical processing give rise to a rich inner life at all?”
  • “How is it that some organisms are subjects of experience?”
  • “Why does awareness of sensory information exist at all?”
  • “Why is there a subjective component to experience?”…

However, none of these questions, by themselves, should prevent the construction of a software system that will be able to process questions posed in natural human language, and to give high-quality, humanly-understandable answers.  When that happens, the system will very probably seek to convince us that it has a similar inner conscious life to the one we have.  As J. Storrs Hall says, we’ll probably believe it.

Q6: Is progress with narrow fields of AI really relevant to the problem of general AI?

Martin Budden comments:

I don’t consider the advances in machine translation over the past decade an advance in AI, I more consider them the result of brute force analysis on huge quantities of text. I wouldn’t consider a car that could safely drive itself along a motorway an advance in AI, rather it would be the integration of a number of existing technologies. I don’t really consider the improvement of an algorithm that does a specific thing (search, navigate, play chess) an advance in AI, since generally such an improvement cannot be used outside its narrow field of application.

My own view is that these advances do help, in the spirit of “divide and conquer”.  I see the human mind as being made up of modules, rather than being some intractable whole.  Improving ability in, for example, translating text, or in speech recognition, will help set the scene for eventual general AI.

It’s true that some aspects of the human mind will prove harder to emulate than others – such as the ability to notice and form new concepts.  It may be the case that a theoretical breakthrough with this aspect will enable much faster overall progress, which will be able to leverage the work done on other modules.

Q7: With so many unknowns, isn’t all this speculation about AI futile?

It’s true that no one can predict, with any confidence, the date at which specific breakthrough advances in general AI are likely to happen.  The best that someone can achieve is a distribution of different dates with different probabilities.
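
One simple way to make “a distribution of different dates with different probabilities” concrete is to assume some annual chance of the remaining breakthroughs arriving, and compute the cumulative probability by each future year.  The 3% figure below is invented purely for illustration:

```python
# Sketch of expressing AGI timing as a probability distribution rather than a
# single date: assume a constant annual probability of the remaining
# breakthroughs arriving (the 3% figure is invented for illustration), and
# compute the cumulative chance that they have arrived by a given year.

ANNUAL_BREAKTHROUGH_PROBABILITY = 0.03   # assumed, purely for illustration

def probability_by(years_ahead, annual_p=ANNUAL_BREAKTHROUGH_PROBABILITY):
    """Chance the breakthroughs have happened within `years_ahead` years."""
    return 1 - (1 - annual_p) ** years_ahead

if __name__ == "__main__":
    for years in (10, 20, 30, 50):
        print(f"Within {years:2d} years: {probability_by(years):.0%}")
```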

However, I don’t accept any argument that “there have been no fundamental breakthroughs in the last sixty years, so there can’t possibly be any fundamental breakthroughs in (say) the next ten years”.  That would be an invalid extrapolation.

That would be similar to the view expressed in 1903 by the distinguished astronomer and mathematician Simon Newcomb:

“Aerial flight is one of that class of problems with which man can never cope.”

Newcomb was no fool: he had good reasons for his scepticism.  As explained in the Wikipedia article about Newcomb:

In the October 22, 1903 issue of The Independent, Newcomb wrote that even if a man flew he could not stop. “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.” In addition, he had no concept of an airfoil. His “aeroplane” was an inclined “thin flat board.” He therefore concluded that it could never carry the weight of a man. Newcomb was specifically critical of the work of Samuel Pierpont Langley, who claimed that he could build a flying machine powered by a steam engine and whose initial efforts at flight were public failures…

Newcomb, apparently, was unaware of the Wright Brothers efforts whose [early] work was done in relative obscurity.

My point is that there does not seem to be any valid fundamental reason why the functioning of a human mind cannot be emulated via software; we may be just two or three good breakthroughs away from solving the remaining key challenges.  With the close attention of many commercial interests, and with the accumulation of fragments of understanding, the chances improve of some of these breakthroughs happening sooner rather than later.
