dw2

29 September 2018

Preview: Assessing the risks from super-intelligent AI

Filed under: AGI, presentation — David Wood @ 1:14 am

The following video gives a short preview of the Funzing talk on “Assessing the risks from super-intelligent AI” that I’ll be giving shortly:

Note: the music in this video is “Berlin Approval” from Jukedeck, a company that is “building tools that use cutting-edge musical artificial intelligence to assist creativity”. Create your own at http://jukedeck.com.

Transcript of the video:

Welcome. My name is David Wood, and I’d like to tell you about a talk I give for Funzing.

This talk looks at the potential rapid increase in the ability of Artificial Intelligence, also known as AI.

AI is everywhere nowadays, and it is, rightly, getting a lot of attention. But the AI of a few short years in the future could be MUCH more powerful than today’s AI. Is that going to be a good thing, or a bad thing?

Some people, like the entrepreneur Elon Musk, or the physicist Stephen Hawking, say we should be very worried about the growth of super artificial intelligence. It could be the worst thing that ever happened to humanity, they say. Without anyone intending it, we could all become the victims of some horrible bugs or design flaws in super artificial intelligence. You may have heard of the “blue screen of death”, when Windows crashes. Well, we could all be headed to some kind of “blue screen of megadeath”.

Other people, like the Facebook founder Mark Zuckerberg, say that it’s “irresponsible” to worry about the growth of super AI. Let’s hurry up and build better AI, they say, so we can use that super AI to solve major outstanding human problems like cancer, climate change, and economic inequality.

A third group of people say that discussing the rise of super AI is a distraction and it’s premature to do so now. It’s nothing we need to think about any time soon, they say. Instead, there are more pressing short-term issues that deserve our attention, like hidden biases in today’s AI algorithms, or the need to retrain people to change their jobs more quickly in the wake of the rise of automation.

In my talk, I’ll be helping you to understand the strengths and weaknesses of all three of these points of view. I’ll give reasons why, in as little as ten years, we could, perhaps, reach a super AI that goes way beyond human capability in every aspect. I’ll describe five ways in which that super AI could go disastrously wrong, due to lack of sufficient forethought and coordination about safety. And I’ll be reviewing some practical initiatives for how we can increase the chance of the growth of super AI being a very positive development for humanity, rather than a very negative one.

People who have seen my talk before have said that it’s easy to understand, it’s engaging, it’s fascinating, and it provides “much to think about”.

What makes my approach different from that of others who speak on this subject is the wide perspective I can apply. This comes from the twenty-five years in which I was at the heart of the mobile computing and smartphone industries, during which time I saw at close hand the issues with developing and controlling very complicated system software. More recently, I also bring ten years of experience, as chair of London Futurists, in running meetings at which the growth of AI has often been discussed by world-leading thinkers.

I consider myself a real-world futurist: I take the human and political dimensions of technology very seriously. I also consider myself to be a radical futurist, since I believe that the not-so-distant future could be very different from the present. And we need to think hard about it beforehand, to decide if we like that outcome or not.

The topic of super AI is too big and important to leave to technologists, or to business people. There are a lot of misunderstandings around, and my talk will help you see the key issues and opportunities more clearly than before. I look forward to seeing you there! Thanks for listening.

9 November 2016

The missing vision

Filed under: politics, vision — David Wood @ 10:04 am

The United States of America has voted. In large numbers, electors have selected as their next President someone committed to:

  • Making it much harder for many types of people to enter the country
  • Deporting many of the current residents
  • Ramping up anti-Islam hostility
  • Denouncing global warming as a hoax
  • Undoing legislation to protect the environment
  • Reducing US support for countries facing hostile aggression
  • Dismantling the US deal with Iran over nuclear technology
  • Imposing punitive trade tariffs on China, likely triggering a trade war
  • Packing the Supreme Court with conservative judges who are opposed to choice.

Over the past months, I have tried – and usually failed – to persuade many of my online “friends” of the dangers of voting for Donald Trump. Smart people have, it seems, their own reasons for endorsing and welcoming this forthcoming “shock to the system”. People have been left behind by the pace of change, I’ve been told. Who can blame them for reaching for an outsider politician? Who can blame them for ignoring the objections of elites and “experts”?

Because of the pain and alienation being experienced by many electors, it’s no surprise – the argument runs – that they’re willing to try something different. Electors have proven themselves ready to overlook the evident character flaws, flip-flops, egotism, sexism, and indiscipline of Trump. These flaws seem to pale into insignificance beside the hope that a powerful outsider can deliver a hefty whack on the side of a dysfunctional Washington establishment. Their visceral hatred of present-day politics has led them to suspend critical judgement on the Trump juggernaut. That hatred also led them to lap up, unquestioningly, many of the bogus stories circulating on social media, which levelled all kinds of nonsense accusations at the leadership of the Democratic Party.

(For a thoughtful, heartfelt analysis of why so many people leave behind their critical judgement, see this Facebook essay by Eliezer Yudkowsky.)

There are already lots of arguments about who is to blame for this development – about whose shoulders failed to hold the responsibility to uphold sensible rather than fantasist politics. For example, see this Intelligence Squared debate on the motion “Blame the elites for the Trump phenomenon”.

My own analysis is that what was missing was (and is) a credible, compelling vision for how a better society is going to be built.

Electors were unconvinced by what they heard from Hillary Clinton, and (indeed) from the other non-Trump candidates for nomination. What they heard seemed too much of the same. They imagined that any benefits arising from a Clinton presidency would be experienced by the elites of society, rather than by the common citizen.

What’s needed, therefore, is the elaboration of a roadmap for how all members of society can benefit from the fruits of ongoing and forthcoming technological progress.

I call this vision the “Post-scarcity vision”. Because it involves the fundamental adoption of new technology, for progressive social purposes, it can also be called a “Technoprogressive vision”.

I’ve tried to share my thinking about that vision on numerous occasions over the last 5-10 years. Here are some slides taken from a presentation I gave last month to the IC Beyond (Imperial College Beyond) Society in Central London:

[Slides 1–4 from the presentation]

If you want to hear my explanation of these slides in the context of a longer discussion of the impact of automation and technological unemployment on society, here’s a video of the entire meeting (the “vision” slides are in the second half of the presentation):

As this post-scarcity technoprogressive vision evolves and matures, it has the potential to persuade more and more people that it – rather than Trump-style restrictions on movement, choice, and aggregation – represents a better route to a society that is better for everyone.

But beliefs have deep roots, and it’s going to require lots of hard, wise work to undo all kinds of prejudices en route to that better society.

Footnote: I first wrote a formal “Transhumanist Manifesto” in February 2013, here (with, ahem, somewhat flowery language). For other related declarations and manifestos, see this listing on H+Pedia. Out of the growing community of technoprogressives and transhumanists, there’s a lot of potential to turn these visions into practical roadmaps.
