dw2

7 December 2017

The super-opportunities and super-risks of super-AI

Filed under: AGI, Events, risks, Uncategorized — Tags: , , — David Wood @ 7:29 pm

2017 has seen more discussion of AI than any preceding year.

There have even been a number of meetings – 15, to be precise – in the UK Houses of Parliament of the APPG AI, an “All-Party Parliamentary Group on Artificial Intelligence”.

According to its website, the APPG AI “was set up in January 2017 with the aim to explore the impact and implications of Artificial Intelligence”.

In the intervening 11 months, the group has held 7 evidence meetings, 4 advisory group meetings, 2 dinners, and 2 receptions. 45 different MPs, along with 7 members of the House of Lords and 5 parliamentary researchers, have been engaged in APPG AI discussions at various times.

Yesterday evening, at a reception in Parliament’s Cholmondeley Room & Terrace, the APPG AI issued a 12-page report with recommendations in six different policy areas:

  1. Data
  2. Infrastructure
  3. Skills
  4. Innovation & entrepreneurship
  5. Trade
  6. Accountability

The headline “key recommendation” is as follows:

The APPG AI recommends the appointment of a Minister for AI in the Cabinet Office

The Minister would have a number of different responsibilities:

  1. To bring forward the roadmap which will turn AI from a Grand Challenge to a tool for untapping UK’s economic and social potential across the country.
  2. To lead the steering and coordination of: a new Government Office for AI, a new industry-led AI Council, a new Centre for Data Ethics and Innovation, a new GovTech Catalyst, a new Future Sectors Team, and a new Tech Nation (an expansion of Tech City UK).
  3. To oversee and champion the implementation and deployment of AI across government and the UK.
  4. To keep public faith high in these emerging technologies.
  5. To ensure UK’s global competitiveness as a leader in developing AI technologies and capitalising on their benefits.

Overall I welcome this report. It’s a definite step in the right direction. Via a programme of further evidence meetings and workshops planned throughout 2018, I expect real progress can be made.

Nevertheless, it’s my strong belief that most of the public discussion on AI – including the discussions at the APPG AI – fails to appreciate the magnitude of the potential changes that lie ahead. There’s insufficient awareness of:

  • The scale of the opportunities that AI is likely to bring – opportunities that might better be called “super-opportunities”
  • The scale of the risks that AI is likely to bring – “super-risks”
  • The speed at which it is possible (though by no means guaranteed) that AI could transform itself via AGI (Artificial General Intelligence) to ASI (Artificial Super Intelligence).

These are topics that I cover in some of my own presentations and workshops. The events organisation Funzing have asked me to run a number of seminars with the title “Assessing the risks from superintelligent AI: Elon Musk vs. Mark Zuckerberg…”

The reference to Elon Musk and Mark Zuckerberg reflects the fact that these two titans of the IT industry have spoken publicly about the advent of superintelligence, taking opposing views on the balance of opportunity vs. risk.

In my seminar, I take the time to explain their differing points of view. Other thinkers on the subject of AI that I cover include Alan Turing, IJ Good, Ray Kurzweil, Andrew Ng, Eliezer Yudkowsky, Stuart Russell, Nick Bostrom, Isaac Asimov, and Jaan Tallinn. The talk is structured into six sections:

  1. Introducing the contrasting ideas of Elon Musk and Mark Zuckerberg
  2. A deeper dive into the concepts of “superintelligence” and “singularity”
  3. From today’s AI to superintelligence
  4. Five ways that powerful AI could go wrong
  5. Another look at accelerating timescales
  6. Possible responses and next steps

At the time of writing, I’ve delivered this Funzing seminar twice. Here’s a sampling of the online reviews:

Really enjoyed the talk, David is a good presenter and the presentation was very well documented and entertaining.

Brilliant eye opening talk which I feel very effectively conveyed the gravity of these important issues. Felt completely engaged throughout and would highly recommend. David was an excellent speaker.

Very informative and versatile content. Also easy to follow if you didn’t know much about AI yet, and still very insightful. Excellent Q&A. And the PowerPoint presentation was of great quality and attention was spent on detail putting together visuals and explanations. I’d be interested in seeing this speaker do more of these and have the opportunity to go even more in depth on specific aspects of AI (e.g., specific impact on economy, health care, wellbeing, job market etc). 5 stars 🙂

Best Funzing talk I have been to so far. The lecture was very insightful. I was constantly tuned in.

Brilliant weighing up of the dangers and opportunities of AI – I’m buzzing.

If you’d like to attend one of these seminars, three more dates are in my Funzing diary:

Click on the links for more details, and to book a ticket while they are still available 🙂

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” tracks down and shoots the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000 word essay Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild was founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. That same year, they detonated a bomb at a prominent nanotechnology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic license may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential to stir up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack expertise of their own in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI. AGI is still situated far into the future. In this view, as stated by Demis Hassabis, co-founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter; they’re no reason to shut down the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people should read that book. It manages to bring a great many serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. The same is true of Stephen Hawking and of Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider, as just one example, Stuart Russell, a computer-science professor at the University of California, Berkeley, and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry:

Steve Wozniak, co-founder of Apple, put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the comment about timescales. The first is to point out that Demis Hassabis himself sees no reason for complacency, even though AGI may require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

N <= 5: No way
5 < N <= 10: Small possibility
10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
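
Treating these three answers as points on a cumulative probability curve gives a way to read off rough figures for intermediate years. Here is a minimal sketch in Python – my own illustration; the linear interpolation between the survey’s three data points is a simplifying assumption of mine, not something the survey itself provides:

    # Average survey answers reported in Superintelligence, read as points on
    # a cumulative probability curve: P(human-level AGI achieved by that year).
    SURVEY_POINTS = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

    def probability_by(year):
        """Rough P(human-level AGI by `year`), via linear interpolation.

        The interpolation is purely illustrative; the survey itself only
        supplies the three points above.
        """
        if year <= SURVEY_POINTS[0][0]:
            return SURVEY_POINTS[0][1]
        if year >= SURVEY_POINTS[-1][0]:
            return SURVEY_POINTS[-1][1]
        for (y0, p0), (y1, p1) in zip(SURVEY_POINTS, SURVEY_POINTS[1:]):
            if y0 <= year <= y1:
                return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

    if __name__ == "__main__":
        for y in (2025, 2040, 2050):
            print(y, round(probability_by(y), 2))  # roughly 0.17, 0.5, 0.61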

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” that spur society to support AGI research far more fully (especially when AGI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down – believing instead the figures for the larger circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) has four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

5 January 2014

Convictions and actions, 2014 and beyond

In place of new year’s resolutions, I offer five convictions for the future:

First, a conviction of profoundly positive near-term technological possibility. Within a generation – within 20 to 40 years – we could all be living with greatly improved health, intelligence, longevity, vigour, experiences, general well-being, personal autonomy, and social cohesion. The primary driver for this possibility is the acceleration of technological improvement.

In more detail:

  • Over the next decade – by 2025 – there are strong possibilities for numerous breakthroughs in fields such as 3D printing, wearable computing (e.g. Google Glass), synthetic organs, stem cell therapies, brain scanning, smart drugs that enhance consciousness, quantum computing, solar energy, carbon capture and storage, nanomaterials with super-strength and resilience, artificial meat, improved nutrition, rejuvenation biotech, driverless cars, robot automation, AI and Big Data transforming healthcare, improved collaborative decision-making, improved cryonic suspension of people who are biologically dead, and virtual companions (AIs and robots).
  • And going beyond that date towards mid-century, I envision seven “super” trends enabled by technology: trends towards super-materials (the fulfilment of the vision of nanotechnology), super-energy (the vision of abundance), super-health and super-longevity (extension of rejuvenation biotech), super-AI, super-consciousness, and super-connectivity.

Second, however, that greatly improved future state of humanity will require the deep application of many other skills, beyond raw technology, in order to bring it into reality. It will require lots of attention to matters of design, psychology, sociology, economics, philosophy, and politics.

Indeed, without profound attention to human and social matters, over the next 10-20 years, there’s a very real possibility that global society may tear itself apart, under mounting pressures. In the process, this fracturing and conflict could, among lots of other tragic consequences, horribly damage the societal engines for technological progress that are needed to take us forward to the positive future described above. It would bring about new dark ages.

Third, society needs a better calibre of thinking about the future.

Influential figures in politics, the media, academia, and religious movements all too often seem to have a very blinkered view about future possibilities. Or they latch on to just one particular imagining of the future, and treat it as inevitable, losing sight of the wider picture of uncertainties and potentialities.

So that humanity can reach its true potential, in the midst of the likely chaos of the next few decades, politicians and other global leaders need to be focusing on the momentous potential forthcoming transformation of the human condition, rather than the parochial, divisive, and near-term issues that seem to occupy most of their thinking at present.

Fourth, there are plenty of grounds for hope for better thinking about the future. In the midst of the global cacophony of mediocrity and distractedness, there are many voices of insight, vision, and determination. Gradually, a serious study of disruptive future scenarios is emerging. We should all do what we can to accelerate this emergence.

In our study of these disruptive future scenarios, we need to collectively accelerate the process of separating out

  • reality from hype,
  • science fact from science fiction,
  • credible scenarios from wishful thinking,
  • beneficial positive evolution from Hollywood dystopia,
  • human needs from the needs of businesses, corporations, or governments.

Futurism – the serious analysis of future possibilities – isn’t a fixed field. Just as technology improves by a virtuous cycle of feedback involving many participants, who collectively find out which engineering solutions work best for particular product requirements, futurism can improve by a virtuous cycle of feedback involving many participants – both “amateur” and “professional” futurists.

The ongoing process of technological convergence actually makes predictions harder, rather than easier. Small perturbations in one field can have big consequences in adjacent fields. It’s the butterfly effect. What’s more important than specific, fixed predictions is to highlight scenarios that are plausible, explaining why they are plausible, and then to generate debate on the desirability of these scenarios, and on how to enable and accelerate the desirable outcomes.

To help in this, it’s important to be aware of past and present examples of how technology impacts human experience. We need to be able to appreciate the details, and then to try to step back to understand the underlying principles.

Fifth, this is no mere armchair discussion. It’s not an idle speculation. The stakes are really high – and include whether we and our loved ones can be alive, in a state of great health and vitality, in the middle of this century, or whether we will likely have succumbed to decay, disease, division, destruction – and perhaps death.

We can, and should, all make a difference to this outcome. You can make a difference. I can make a difference.

Actions

In line with the above five convictions, I’m working on three large projects over the next six months:

Let me briefly comment on each of these projects.

Forthcoming London Futurists event: The Burning Question

The first “real-world” London Futurists meetup in 2014, on Saturday 18th January, is an in-depth analysis of what some people have described as the most complex and threatening issue of the next 10-30 years: accelerated global warming.

Personally I believe, in line with the convictions I listed above, that technology can provide the means to dissolve the threats of accelerated global warming. Carbon capture and storage, along with solar energy, could provide the core of the solution. But these solutions will take time, and we need to take some interim action sooner.

As described by the speaker for the event, writer and consulting editor Duncan Clark,

Tackling global warming will mean persuading the world to abandon oil, coal and gas reserves worth many trillions of dollars – at least until we have the means to put carbon back in the ground. The burning question is whether that can be done. What mix of technology, politics, psychology, and economics might be required? Why aren’t clean energy sources slowing the rate of fossil fuel extraction? Are the energy companies massively overvalued, and how will carbon-cuts affect the global economy? Will we wake up to the threat in time? And who can do what to make it all happen?

For more details and to RSVP, click here.

Note that, due to constraints on the speaker’s time, this event is happening on Saturday evening, rather than in the afternoon.

RSVPs so far are on the light side for this event, but now that the year-end break is behind us, I expect them to ramp up – in view of the extreme importance of this debate.

Forthcoming London Futurists Hangout On Air, with Ramez Naam

One week from today, on the evening of Sunday 12th January, we have our “Hangout on Air” online panel discussion, “Ramez Naam discusses Nexus, Crux, and The Infinite Resource”.

For more details, click here.

Here’s an extract of the event description:

Ramez Naam is arguably one of today’s most interesting and important writers on futurist topics, including both non-fiction and fiction.

For example, praise for his Nexus – Mankind gets an upgrade includes:

  • “A superbly plotted high tension technothriller… full of delicious moral ambiguity… a hell of a read.” – Cory Doctorow, Boing Boing
  • “A sharp, chilling look at our likely future.” – Charles Stross
  • “A lightning bolt of a novel. A sense of awe missing from a lot of current fiction.” – Ars Technica.

This London Futurists Hangout on Air will feature a live discussion between Ramez Naam and an international panel of leading futurists: Randal Koene, Michell Zappa, and Giulio Prisco.

The discussion aims to cover:

  • The science behind the fiction: which elements are strongly grounded in current research, and which elements are more speculative?
  • The philosophy behind the fiction: how should people be responding to the deeply challenging questions that are raised by new technology?
  • Finding a clear path through what has been described as “the best of times and the worst of times” – is human innovation sufficient?
  • What lies next – new books in context.

I’ll add one comment to this description. Over the past week or so, I took the time to listen again to Ramez’s book “Nexus”, and I’m also well through the follow-up, “Crux”. I’m listening to them as audio books, obtained from Audible. Both books are truly engrossing, with a rich array of nuanced characters who undergo several changes in their personal philosophies as events unfold. It also helps that, in each case, the narrators of the audio books are first class.

Another reason I like these books so much is that they’re not afraid to look hard at both good outcomes and bad outcomes of disruptive technological possibility. I wholeheartedly recommend both books (with the proviso that they contain some racy, adult material, and therefore may not be suitable for everyone).

Forthcoming London Futurists Hangout On Air, AI and the end of the human era

I’ll squeeze in mention of one more forthcoming Hangout On Air, happening on Sunday 26th January.

The details are here. An extract follows:

The Hollywood cliché is that artificial intelligence will take over the world. Could this cliché soon become scientific reality, as AI matches then surpasses human intelligence?

Each year AI’s cognitive speed and power doubles; ours does not. Corporations and government agencies are pouring billions into achieving AI’s Holy Grail — human-level intelligence. Scientists argue that AI that advanced will have survival drives much like our own. Can we share the planet with it and survive?

The recently published book Our Final Invention explores how the pursuit of Artificial Intelligence challenges our existence with machines that won’t love us or hate us, but whose indifference could spell our doom. Until now, intelligence has been constrained by the physical limits of its human hosts. What will happen when the brakes come off the most powerful force in the universe?

This London Futurists Hangout on Air will feature a live discussion between the author of Our Final Invention, James Barrat, and an international panel of leading futurists: Jaan Tallinn, William Hertling, Calum Chace, and Peter Rothman.

The main panellist on this occasion, James Barrat, isn’t the only distinguished author on the panel. Calum Chace’s book “Pandora’s Brain”, which I’ve had the pleasure of reading ahead of publication, should go on sale some time later this year. William Hertling is the author of a trilogy of novels:

  • Avogadro Corp: The Singularity Is Closer Than It Appears,
  • A.I. Apocalypse,
  • The Last Firewall.

The company Avogadro Corp that features in this trilogy has, let’s say, some features in common with another company named after a large number, i.e. Google. I found all three novels to be easy to read, as well as thought-provoking. Without giving away plot secrets, I can say that the books feature more than one potential route for smarter-than-human general purpose AI to emerge. I recommend them. Start with the first, and see how you get on.

Anticipating 2025

The near future deserves more of our attention.

A good way to find out about the Anticipating 2025 event is to look at the growing set of “Speaker preview” videos that are available at http://anticipating2025.com/previews/.

You’ll notice that at least some of these videos have captions available, to help people to catch everything the speakers say.

These captions have been produced by a combination of AI and human intelligence:

  • Google provides automatically generated transcripts, from its speech recognition engine, for videos uploaded to YouTube
  • A team of human volunteers works through these transcripts, cleaning them up, before they are published.
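
To give a flavour of the cleanup step, here is a minimal sketch in Python – my own illustration, not the volunteers’ actual tooling; the file names and example corrections are hypothetical – that applies a dictionary of known fixes to an auto-generated .srt caption file before a human reviewer takes a final pass:

    import re

    # Hypothetical corrections; in practice these would be collected as
    # reviewers notice words the speech recogniser repeatedly gets wrong.
    CORRECTIONS = {
        "singularity university": "Singularity University",
        "ray kurzweil": "Ray Kurzweil",
    }

    def clean_captions(in_path, out_path):
        """Apply simple textual corrections to an auto-generated .srt file."""
        with open(in_path, encoding="utf-8") as f:
            text = f.read()
        for wrong, right in CORRECTIONS.items():
            text = re.sub(re.escape(wrong), right, text, flags=re.IGNORECASE)
        # Collapse stray runs of spaces left by the auto-transcriber.
        text = re.sub(r"[ \t]+", " ", text)
        with open(out_path, "w", encoding="utf-8") as f:
            f.write(text)

    if __name__ == "__main__":
        clean_captions("talk_auto.srt", "talk_cleaned.srt")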

My thanks go to everyone involved so far in filming and transcribing the speakers.

Registration for this conference requires payment at the time of booking. There are currently nearly 50 people registered, which is a good start (with more than two months to go) towards filling the venue’s capacity of 220.

Early bird registration, for both days, is pegged at £40. I’ll keep early bird registration open until the first 100 tickets have been sold. Afterwards, the price will increase to £50.

Smartphones and beyond

Here’s a brief introduction to this book:

The smartphone industry has seen both remarkable successes and remarkable failures over the last two decades. Developments have frequently confounded the predictions of apparent expert observers. What does this rich history have to teach analysts, researchers, technology enthusiasts, and activists for other forms of technology adoption and social improvement?

As most regular readers of this blog know, I’ve worked in mobile computing for 25 years. That includes PDAs (personal digital assistants) and smartphones. In these fields, I’ve seen numerous examples of mobile computing becoming more powerful, more useful, and more invisible – becoming a fundamental part of the fabric of society. Smartphone technology, which was at one time expected to be used by only a small proportion of the population – the very geeky or the very rich – is now in regular use by over 50% of the population in many countries in the world.

As I saw more and more fields of human interest on the point of being radically transformed by mobile computing and smartphone technology, the question arose in my mind: what’s next? Which other fields of human experience will be transformed by smartphone technology, as it becomes still smaller, more reliable, more affordable, and more powerful? And what about impacts of other kinds of technology?

Taking this one step further: can the processes which have transformed ordinary phones into first smartphones and then superphones be applied, more generally, to transform “ordinary humans” (humans 1.0, if you like), via smart humans or trans humans, into super humans or post humans?

These are the questions which have motivated me to write this book. You can read a longer introduction here.

I’m currently circulating copies of the first twenty chapters for pre-publication review. The chapters available are listed here, with links to the opening paragraphs in each case, and there’s a detailed table of contents here.

As described in the “Downloads” page of the book’s website, please let me know if there are any chapters you’d particularly like to review.
