dw2

14 May 2018

The key questions about UBIA

The first few times I heard about the notion of Universal Basic Income (UBI), I said to myself, that’s a pretty dumb idea.

Paying people without them doing any work is going to cause big problems for society, I thought. It’s going to encourage laziness, and discourage enterprise. Why should people work hard, if the fruits of their endeavour are taken away from them to be redistributed to people who can’t be bothered to work? It’s not fair. And it’s a recipe for social decay.

But since my first encounters with the idea of UBI, my understanding has evolved a long way. I have come to see the idea, not as dumb, but as highly important. Anyone seriously interested in the future of human society ought to keep abreast of the discussion about UBI:

  • What are the strengths and (yes) the weaknesses of UBI?
  • What alternatives could be considered, that have the strengths of UBI but avoid its weaknesses?
  • And, bearing in mind that the most valuable futurist scenarios typically involve the convergence (or clash) of several different trend analyses, what related ideas might transform our understanding of UBI?

For these reasons, I am hosting a day-long London Futurists event at Birkbeck College, Central London, on Saturday 2nd June, with the title “Universal Basic Income and/or Alternatives: 2018 update”.

The event is defined by the question,

What do we know, in June 2018, about Universal Basic Income and its alternatives (UBIA), that wasn’t known, or was less clear, just a few years ago?

The event website highlights various components of that question, which different speakers on the day will address:

  • What are the main risks and issues with the concept of UBIA?
  • How might the ideas of UBIA evolve in the years ahead?
  • If not a UBI, what alternatives might be considered, to meet the underlying requirements which have led many people to propose a UBI?
  • What can we learn from the previous and ongoing experiments in Basic Income?
  • What are the feasible systems (new or increased taxes, or other means) to pay for a UBIA?
  • What steps can be taken to make UBIA politically feasible?
  • What is a credible roadmap for going beyond a “basic” income towards enabling attainment of a “universal prosperity” by everyone?

As you can see from the event website, an impressive list of speakers have kindly agreed to take part. Here’s the schedule for the day:

09:30: Doors open
10:00: Chair’s welcome: The questions that deserve the most attention: David Wood
10:15: Opening keynote: Basic Income – Making it happen: Prof Guy Standing
11:00: Implications of Information Technology: Prof Joanna Bryson
11:30: Alternatives to UBI – Exploring the Possibilities: Rohit Talwar, Helena Calle and Steve Wells
12:15: Q&A involving all morning speakers
12:30: Break for lunch (lunch not provided)

14:00: Basic Income as a policy and a perspective: Barb Jacobson
14:30: Implications of Artificial Intelligence on UBIA: Tony Czarnecki
15:00: Approaching the Economic Singularity: Calum Chace
15:30: What have we learned? And what should we do next? David Wood
16:00-16:30: Closing panel involving all speakers
16:30: Event closes. Optional continuation of discussion in nearby pub

A dumb idea?

In the run-up to the UBIA 2018 event, I’ll make a number of blogposts anticipating some of the potential discussion on the day.

First, let me return to the question of whether UBI is a dumb idea. Viewing the topic from the angle of laziness vs. enterprise is only one possible perspective. As is often the case, changing perspective provides much-needed insight.

Instead, let’s consider the perspective of “social contract”. Reflect on the fact that society already provides money to people who aren’t doing any paid work. There are basic pension payments for everyone (so long as they are old enough), basic educational funding for everyone (so long as they are young enough), and basic healthcare provisions for people when they are ill (in most countries of the world).

These payments are part of what is called a “social contract”. There are two kinds of argument for having a social contract:

  1. Self-interested arguments: as individuals, we might need to take personal benefit of a social contract at some stage in the future, if we unexpectedly fall on hard times. What’s more, if we fail to look after the rest of society, the rest of society might feel aggrieved, and rise up against us, pitchforks (or worse) in hand.
  2. Human appreciation arguments: all people deserve basic stability in their life, and a social contract can play a significant part in providing such stability.

What’s harder, of course, is to agree which kind of social contract should be in place. Whole libraries of books have been written on that question.

UBI can be seen as fitting inside a modified social contract – one which its supporters argue would be an improvement on the current one.

Note: although UBI is occasionally suggested as a replacement for the entirety of the current welfare system, it is more commonly (and, in my view, more sensibly) proposed as a replacement for only some of the current programmes.

Proponents of UBI point to two types of reason for including UBI as part of a new social contract:

  1. Timeless arguments – arguments that have been advanced in various ways by people throughout history, such as Thomas More (1516), Montesquieu (1748), Thomas Paine (1795), William Morris (1890), Bertrand Russell (1920), Erich Fromm (1955), Martin Luther King (1967), and Milton Friedman (1969)
  2. Time-linked arguments – arguments that foresee drastically changed circumstances in the relatively near future, which increase the importance of adopting a UBI.

Chief among the time-linked arguments is that the direct and indirect effects of profound technological change are likely to transform the work environment in unprecedented ways. Automation, powered by increasingly capable AI, may eat into more and more of the skills that we humans used to think were “uniquely human”. People who expected to earn money by doing various tasks may find themselves unemployable – robots will do these tasks more reliably, more cheaply, and with greater precision. People who spend time retraining themselves in anticipation of a new occupation may find that, over the same period, robots have gained the same skills faster than they have.

That’s the argument for growing technological unemployment. It’s trendy to criticise this argument nowadays, but I find the criticisms to be weak. I won’t repeat all the ins and outs of that discussion now, since I’ve covered them at some length in Chapter 4 of my book Transcending Politics. (An audio version of this chapter is currently available to listen to, free of charge, here.)

A related consideration talks, not about technological unemployment, but about technological underemployment. People may be able to find paid work, but that work pays considerably less than they expected. Alternatively, their jobs may have many rubbishy aspects. In the terminology of David Graeber, increasing numbers of jobs are “bullshit jobs”. (Graeber will be speaking on that very topic at the RSA this Thursday. At time of writing, tickets are still available.)

Yet another related concept is that of the precariat – people whose jobs are precarious, since they have no guarantee of the number of hours of work they may receive in any one week. People in these positions would often prefer to be able to leave these jobs and spend a considerable period of time training for a different kind of work – or starting a new business, with all the risks and uncertainties entailed. If a UBI were available to them, it would give them the stability to undertake that personal voyage.

How quickly will technological unemployment and technological underemployment develop? How quickly will the proportion of bullshit jobs increase? How extensive and socially dangerous will the precariat become?

I don’t believe any futurist can provide crisp answers to these questions. There are too many unknowns involved. However, equally, I don’t believe anyone can say categorically that these changes won’t occur (or won’t occur any time soon). My personal recommendation is that society needs to anticipate the serious possibility of relatively rapid acceleration of these trends over the next couple of decades. I’d actually put the probability of a major acceleration in these trends over the next 20 years as greater than 50%. But even if you assess the odds more conservatively, you ought to have some contingency plans in mind, just in case the pace quickens more than you expected.

In other words, the time-linked arguments in favour of exploring a potential UBI have considerable force.

As it happens, the timeless arguments may gain increased force too. If it’s true that the moral arc of history bends upwards – if it’s true that moral sensibilities towards our fellow humans increase over the passage of time – then arguments which at one time fell below society’s moral radar can gain momentum in the light of collective experience and deliberative reflection.

An impractical idea?

Many people who are broadly sympathetic to the principle of UBI nevertheless consider the concept to be deeply impractical. For example, here’s an assessment by veteran economics analyst John Kay, in his recent article “Basic income schemes cannot work and distract from sensible, feasible and necessary welfare reforms”:

The provision of a universal basic income at a level which would provide a serious alternative to low-paid employment is impossibly expensive. Thus, a feasible basic income cannot fulfil the hopes of some of the idea’s promoters: it cannot guarantee households a standard of living acceptable in a modern society, it cannot compensate for the possible disappearance of existing low-skilled employment and it cannot eliminate “bullshit jobs”. Either the level of basic income is unacceptably low, or the cost of providing it is unacceptably high. And, whatever the appeal of the underlying philosophy, that is essentially the end of the matter.

Kay offers this forthright summary:

Attempting to turn basic income into a realistic proposal involves the reintroduction of elements of the benefit system which are dependent on multiple contingencies and also on income and wealth. The outcome is a welfare system which resembles those that already exist. And this is not surprising. The complexity of current arrangements is not the result of bureaucratic perversity. It is the product of attempts to solve the genuinely difficult problem of meeting the variety of needs of low-income households while minimising disincentives to work for households of all income levels – while ensuring that the system established for that purpose is likely to sustain the support of those who are required to pay for it.

I share Piachaud’s conclusion that basic income is a distraction from sensible, feasible and necessary welfare reforms. As in other areas of policy, it is simply not the case that there are simple solutions to apparently difficult issues which policymakers have hitherto been too stupid or corrupt to implement.

Supporters of UBI have rebuttals to this analysis. Some of these rebuttals will no doubt be presented at the UBIA 2018 event on 2nd June.

One rebuttal seeks to rise above “zero sum” considerations. Injecting even a small amount of money into everyone’s hands can have “multiplier” effects, as that new money passes in turn through several people’s hands. One person’s spending is another person’s income, ready for them to spend in turn.
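
To make the “multiplier” intuition concrete, here is a minimal sketch of the textbook spending multiplier. The injection size and the marginal propensity to consume used below are illustrative assumptions on my part, not figures from any particular UBI proposal:

    # A minimal sketch of the textbook spending multiplier.  Each pound injected
    # is income for its recipient, who re-spends a fraction of it (the marginal
    # propensity to consume, MPC); that spending is income for someone else, and
    # so on.  Total extra spending converges to injection / (1 - MPC).

    def total_spending(injection: float, mpc: float, rounds: int = 100) -> float:
        """Sum the geometric series of successive rounds of re-spending."""
        return sum(injection * (mpc ** n) for n in range(rounds))

    injection = 1_000.0   # e.g. £1,000 of basic income paid out (illustrative)
    mpc = 0.8             # assumption: 80p of each £1 received is re-spent

    print(total_spending(injection, mpc))   # ≈ 5,000 after 100 rounds
    print(injection / (1 - mpc))            # closed-form limit: 5,000.0

On these assumed numbers, £1,000 of extra spending power generates roughly £5,000 of spending in total as it circulates – which is the point the rebuttal is making against purely “zero sum” accounting.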

Along similar lines, Professor Guy Standing, who will be delivering the opening keynote at UBIA 2018, urges readers of his book Basic Income: And How We Can Make It Happen to consider positive feedback cycles: “the likely impact of extra spending power on the supply of goods and services”. As he says,

In developing countries, and in low-income communities in richer countries, supply effects could actually lower prices for basic goods and services. In the Indian basic income pilots, villagers’ increased purchasing power led local farmers to plant more rice and wheat, use more fertilizer and cultivate more of their land. Their earnings went up, while the unit price of the food they supplied went down. The same happened with clothes, since several women found it newly worthwhile to buy sewing machines and material. A market was created where there was none before.

A similar response could be expected in any community where there are people who want to earn more and do more, alongside people wanting to acquire more goods and services to improve their living standard.

(I am indebted to Standing’s book for many other insights that have influenced my thinking and, indeed, points raised in this blogpost. It’s well worth reading!)

There’s a broader point that needs to be raised, about the “prices for basic goods and services”. Since a Basic Income needs to cover payments for these goods and services, two approaches are possible:

  1. Seek to raise the level of Basic Income payments
  2. Seek to lower the cost of basic goods and services.

I believe both approaches should be pursued in parallel. The same technologies of automation that pose threats to human employment also hold the promise for creating goods and services at significantly lower costs (and with higher quality). However, any such reduction in cost sits in tension with the prevailing societal focus on boosting economic prices (and increasing GDP). It is for this reason that we need a change of societal values as well as changes in the mechanics of the social contract.
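
As a toy illustration of why the two levers are complementary (all figures below are invented), what matters is how many “baskets” of basic goods and services a payment can buy – the payment divided by the basket price – and that ratio can be raised either by increasing the numerator or by shrinking the denominator:

    # Toy illustration: the "real" value of a Basic Income payment is how many
    # baskets of basic goods and services it buys, i.e. payment / basket_price.
    # All figures are invented for illustration only.

    def baskets_covered(monthly_payment: float, basket_price: float) -> float:
        """Fraction (or multiple) of the basic basket that the payment covers."""
        return monthly_payment / basket_price

    print(baskets_covered(400, 800))   # 0.5 – payment covers half the basket today
    print(baskets_covered(800, 800))   # 1.0 – approach 1: double the payment
    print(baskets_covered(400, 400))   # 1.0 – approach 2: halve the cost of basics
    print(baskets_covered(560, 560))   # 1.0 – or combine smaller moves on both levers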

The vision of goods and services having prices approaching zero is, by the way, sometimes called “the Star Trek economy”. Futurist Calum Chace – another of the UBIA 2018 speakers – addresses this topic in his provocatively titled book The Economic Singularity: Artificial intelligence and the death of capitalism. Here’s an extract from one of his blogposts, an “un-forecast” (Chace’s term) for a potential 2050 scenario, “Future Bites 7 – The Star Trek Economy”, featuring Lauren (born 1990):

The race downhill between the incomes of governments and the costs they needed to cover for their citizens was nerve-wracking for a few years, but by the time Lauren hit middle age it was clear the outcome would be good. Most kinds of products had now been converted into services, so cars, houses, and even clothes were almost universally rented rather than bought: Lauren didn’t know anyone who owned a car. The cost of renting a car for a journey was so close to zero that the renting companies – auto manufacturers or AI giants and often both – generally didn’t bother to collect the payment. Money was still in use, but was becoming less and less necessary.

As a result, the prices of most asset classes had crashed. Huge fortunes had been wiped out as property prices collapsed, especially in the hot-spot cities, but few people minded all that much as they could get whatever they needed so easily.

As you may have noticed, the vision of a potential future “Star Trek” economy is part of the graphic design for UBIA 2018.

I’ll share one further comment on the question of the affordability of UBI. Specifically, I’ll quote some comments made by Guardian writer Colin Holtz in the wake of the discovery of the extent of tax evasion revealed by the Panama Papers. The article by Holtz has the title “The Panama Papers prove it: America can afford a universal basic income”. Here’s an extract:

If the super-rich actually paid what they owe in taxes, the US would have loads more money available for public services.

We should all be able to agree: no one should be poor in a nation as wealthy as the US. Yet nearly 15% of Americans live below the poverty line. Perhaps one of the best solutions is also one of the oldest and simplest ideas: everyone should be guaranteed a small income, free from conditions.

Called a universal basic income by supporters, the idea has attracted support throughout American history, from Thomas Paine to Martin Luther King Jr. But it has also faced unending criticism for one particular reason: the advocates of “austerity” say we simply can’t afford it – or any other dramatic spending on social security.

That argument dissolved this week with the release of the Panama Papers, which reveal the elaborate methods used by the wealthy to avoid paying back the societies that helped them to gain their wealth in the first place…

While working and middle-class families pay their taxes or face consequences, the Panama Papers remind us that the worst of the 1% have, for years, essentially been stealing access to Americans’ common birthright, and to the benefits of our shared endeavors.

Worse, many of those same global elite have argued that we cannot afford to provide education, healthcare or a basic standard of living for all, much less eradicate poverty or dramatically enhance the social safety net by guaranteeing every American a subsistence-level income.

The Tax Justice Network estimates the global elite are sitting on $21–32tn of untaxed assets. Clearly, only a portion of that is owed to the US or any other nation in taxes – the highest tax bracket in the US is 39.6% of income. But consider that a small universal income of $2,000 a year to every adult in the US – enough to keep some people from missing a mortgage payment or skimping on food or medicine – would cost only around $563bn each year.
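
As a rough sanity check on the scale of that last figure, here is a back-of-the-envelope sketch. The adult-population counts below are illustrative assumptions of mine, not numbers taken from Holtz’s article (his $563bn figure implies roughly 280 million recipients):

    # Back-of-the-envelope check: the annual cost of a flat payment to every adult
    # is simply (number of recipients) x (payment per year).  The recipient counts
    # below are illustrative assumptions, not figures from Holtz's article.

    def annual_cost(recipients: int, payment_per_year: float) -> float:
        """Total yearly cost of paying every recipient the same flat amount."""
        return recipients * payment_per_year

    for adults in (250_000_000, 281_000_000):
        cost = annual_cost(adults, 2_000)
        print(f"{adults:,} adults x $2,000/yr = ${cost / 1e9:,.0f}bn per year")

Either way, the arithmetic lands in the same few-hundred-billion-dollar range – large, but small relative to the untaxed assets the Tax Justice Network describes.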

This takes us from the question of affordability to the question of political feasibility. Read on…

A politically infeasible idea?

A potential large obstacle to adopting UBI is that powerful entities within society will fight hard against it, being opposed to any idea of increased taxation and a decline in their wealth. These entities don’t particularly care that the existing social contract provides a paltry offering to the poor and precarious in society – or to those “inadequates” who happen to lose their jobs and their standing in the economy. The existing social contract provides them personally (and those they consider their peers) with a large piece of the cake. They’d like to keep things that way, thank you very much.

They defend the current setup with ideology. The ideology states that they deserve their current income and wealth, on account of the outstanding contributions they have made to the economy. They have created jobs, or goods, or services of one sort or another, that the marketplace values. And no-one has any right to take their accomplishments away from them.

In other words, they defend the status quo with a theory of value. In order to overcome their resistance to UBIA, I believe we’ll need to tackle this theory of value head on, and provide a better theory in its place. I’ll pick up that thread of thought shortly.

But an implementation of UBI doesn’t need to happen “big bang” style, all at once. It can proceed in stages, starting with a very low level, and (all being well) ramping up from there in phases. The initial payment from UBI could be funded from new types of tax that would, in any case, improve the health of society:

  • A tax on financial transactions (sometimes called a “Tobin tax”) – that will help to put the brakes on accelerated financial services taking place entirely within the financial industry (without directly assisting the real economy)
  • A “Greenhouse gas tax” (such as a “carbon tax”) on activities that generate greenhouse gas pollution.

Continuing the discussion

The #ubia channel in the newly created London Futurists Slack workspace awaits comments on this topic. For a limited time, members and supporters of London Futurists can use this link to join that workspace.

1 January 2017

The best books I read in 2016

Antidotes to the six horsemen of the Trumpocalypse

Here’s one of my deeply held beliefs. We owe it to ourselves to take best advantage of the insights and experiences of those who have gone before us. Where researchers have seen more clearly or understood more deeply than their predecessors or contemporaries, we should pay special attention to their words and concepts. Where these researchers have written books that make accessible key aspects of their hard-won expertise, we should prioritise finding the time to read these books.

But in 2016, not everyone agreed that expertise is worth attention. Experts are over-rated, we heard. The elites deserve a comeuppance.

That sentiment is an ominous echo of the Chinese Cultural Revolution and its opposition to the “four olds” – a revolution which, between 1966 and 1976, resulted in horrific damage to the country and many millions of deaths. A later candid assessment by the Chinese government described that period of wilful ignorance as being

Responsible for the most severe setback and the heaviest losses suffered by the Party, the country, and the people since the founding of the People’s Republic.

I have no truck with those rabble-rousers who declared in 2016 that “we have had enough of experts”. I choose expertise every time, over wishful thinking, hearsay, and dogmatism. I choose to keep on educating myself in the arts of critical thinking, rather than bowing my honest opinion to the rants of the populist press or the false certainties of the demagogues in our midst.

At a time where the six dreadful horsemen of the Trumpocalypse are gathering speed – when perverse interactions are growing more unpredictable between radical over-confidence, divisive boasts of “my tribe first”, enthralment to personal egos, trigger-happy vindictiveness, shameless lying, and fake news designed to inflame rather than to enlighten – we need calm, rational, evidence-based thinking more than ever.

The hard thing, of course, is knowing where true expertise really lies. It can be difficult to distinguish the trustworthy experts from self-declared “experts”. And it is important to perceive the limitations of the expertise of any one person or any one discipline. Thankfully, these tasks can be aided by group intelligence, as we collectively develop an appreciation for which writers are the most reliable in particular areas.

In that spirit, I list below the books that I read and rated as “5 stars” during 2016 on GoodReads. I tend to rate books that highly if:

  • They contain novel material which addresses highly important themes
  • They are well-written – giving good evidence and rationale for the points of view they advance
  • They maintained my interest all the way to the end of the book.

Hopefully my brief reviews will provide some inspiration to guide you in your own reading, research, and projects during 2017. Do let me know.

(Click on any book cover below, to visit the GoodReads page for the book.)

Homo Deus: A Brief History of Tomorrow

Written by Yuval Noah Harari. Of all the books on my list, this is the one that provides the largest perspective. This book explains how the three terrible scourges which have confronted humans throughout history – plague, famine, and war – will be replaced in the 21st century by three huge new projects, labelled as “immortality”, “happiness”, and “divinity”. Here are three brief quotes from Harari’s book on these grand projects:

  • “Struggling against old age and death will merely carry on the time-honoured fight against famine and disease, and manifest the supreme value of contemporary culture: the worth of human life… Modern science and modern culture don’t think of death as a metaphysical mystery, and they certainly don’t view death as the source of life’s meaning. Rather, for modern people death is a technical problem that we can and should solve”
  • “Being happy doesn’t come easy. Despite our unprecedented achievements in the last few decades, it is far from obvious that contemporary people are significantly more satisfied than their ancestors in bygone years. Indeed, it is an ominous sign that despite higher prosperity, comfort and security, the rate of suicide in the developed world is also much higher than in traditional societies… The bad news is that pleasant sensations quickly subside and sooner or later turn into unpleasant ones… This is all the fault of evolution. For countless generations our biochemical system adapted to increasing our chances of survival and reproduction, not our happiness”
  • “In seeking bliss and immortality humans are in fact trying to upgrade themselves into gods. Not just because these are divine qualities, but because in order to overcome old age and misery humans will first have to acquire godlike control of their own biological substratum… If we ever have the power to engineer death and pain out of our system, that same power will probably be sufficient to engineer our system in almost any manner we like, and manipulate our organs, emotions and intelligence in myriad ways. You could buy for yourself the strength of Hercules, the sensuality of Aphrodite, the wisdom of Athena or the madness of Dionysus if that is what you are into.”

Harari’s book also makes plain that these projects risk enormous upheavals in society – potentially fracturing humanity into “the near gods” and “the near useless”. Even that thought isn’t the largest in the book. He writes near the end:

  • “The Internet-of-All-Things may soon create such huge and rapid data flows that even upgraded human algorithms cannot handle it. When the car replaced the horse-drawn carriage, we didn’t upgrade the horses – we retired them. Perhaps it is time to do the same with Homo sapiens…”
  • “When genetic engineering and artificial intelligence reveal their full potential, liberalism, democracy and free markets might become as obsolete as flint knives, tape cassettes, Islam and communism.”

I therefore pick Homo Deus: A Brief History of Tomorrow as the single most profound book of 2016.

Note: I presented a personal review of this book near the start of a London Futurists event on the 4th of October. Here’s a video recording:

The Story of the Human Body: Evolution, Health, and Disease

Written by Daniel E. Lieberman. This book shares with Homo Deus the fact that it has an enormous scope. It casts a careful eye back over the long prehistory of human (and hominid) evolution, and draws fascinating conclusions about problems of disease and health experienced by people in the 21st century.

The book presents a wealth of compelling evidence about the various stages of evolution between ape and modern-day human. It also introduces the concepts of “dysevolution” and “mismatch diseases”. Aspects of human nature that made great sense in our previous environments sit uneasily in the new environments in which we now exist. This includes aspects of our physiology and aspects of our psychology.

To what extent can these aspects of our physiology and psychology be re-engineered, using 21st century skills in genetics, nanotech, 3D printing, smart drugs, and so on? Lieberman is cautious in drawing conclusions, suggesting that we’ll find it easier to re-engineer our environment than to re-engineer ourselves. His view deserves attention, even though I believe he underestimates the pace of forthcoming technological change.

The Industries of the Future

Written by Alec J. Ross, who spent four years working as Senior Advisor for Innovation to Hillary Clinton when she was US Secretary of State. The style of writing is highly accessible to people in political roles – whether in office, in the civil service, or in an advisory capacity.

The subjects Ross covers – including robots, genomics, cryptocurrency, cyberwarfare, big data, and the Internet of Things – can also be found in books by other futurists. But he provides a refreshingly international perspective, highlighting ways in which different parts of the world are adapting (or failing to adapt) to various technological trends. He also has a candid view on potential downsides to these technology trends, alongside their potential upsides. He gives plenty of reasons for believing that there will be many large changes ahead, but he emphasises that the actual outcomes will need careful shepherding.

If Hillary Clinton had become US President, I would have felt comfortable in knowing that she could draw on insight about future trends from such a well-informed, balanced advisor. This is not an author who offers brash over-confidence or wishful thinking.

Bitcoin: the Future of Money?

Written by Dominic Frisby. Bitcoin, along with its underlying “blockchain” technology, remains the subject of a great deal of speculation as 2016 draws to a close. I read a number of books on this topic in the last 12 months. Of these books, this was the one I enjoyed the most.

Frisby has a pleasant conversational style, but also has an eye for the big picture. Bitcoin/blockchain is too important a topic to ignore. The biggest disruptions it creates may well be in areas outside of present-day mainstream focus.

London Futurists will be returning to Bitcoin and/or blockchain several times in the months ahead. Watch this space!

The Future of the Professions: How Technology Will Transform the Work of Human Experts

Written by Richard Susskind and Daniel Susskind. This book provides a comprehensive account of how technology and automation are transforming work within professions such as law, auditing, education, architecture, healthcare, accounting, and the clergy.

The writers – a father and son – have been researching this field since the 1980s. They have interviewed leading practitioners from numerous professions, and are fully aware of the arguments as to why automation will slow down in its impact on the workforce. They assess these arguments at great length (perhaps almost too fully), and give strong reasons why all professions will, on the contrary, be significantly transformed by ever-more powerful software in the decades ahead. As they make clear, this is not something to be feared, but something that will provide low-cost, high-quality expertise to ever-larger numbers of people – rather than such expertise being accessible only to the wealthy.

Inventing the Future: Postcapitalism and a World Without Work

Written by Nick Srnicek and Alex Williams. This book makes a powerful case that movements for political change need to find a powerful over-arching positive vision. Merely “occupying” and “criticising” isn’t going to take things very far.

The vision offered in this book is that automation, rather than being seen as a threat to jobs, should be embraced as a precondition for a new society in which people no longer need to work.

Note: the authors gave a presentation about their ideas at a London Futurists event on the 20th of August. Here’s the video recording from that event:

The Economic Singularity: Artificial intelligence and the death of capitalism

Written by Calum Chace. This book is the third on my list that focuses on technological unemployment as caused by automation and AI (artificial intelligence). Of the three, it’s probably the easiest to read, and the one that paints the widest context.

Like Srnicek and Williams (the authors of Inventing the Future: Postcapitalism and a World Without Work), Chace foresees that technological unemployment may portend the end of capitalism. Whether this forthcoming “Economic Singularity” will be a positive or negative development remains to be seen. The Economic Singularity therefore shares some of the characteristics of the “Technological Singularity”, which Chace also covers in this book.

Note: the author gave a presentation on his ideas to London Futurists on the 8th of October. Here’s the video recording:

The Gene: An Intimate History

Written by Siddhartha Mukherjee. This book covers the history of ideas and experiments on the subject of genetic inheritance, from the thinkers of ancient Greece right up to the latest research. Along the way, the author weaves in accounts from the medical experiences of his own family members who suffered from inherited diseases. He is a compelling story-teller. He is also an accomplished cancer physician, with training from Stanford University, University of Oxford, Harvard Medical School, and Columbia University Medical Center.

I was familiar with many of the historical episodes from my own prior reading, but I learnt a great deal from the additional material assembled by Mukherjee – for example about the attempts at different times by eugenics enthusiasts to alter society by human interference with “natural selection”. The book is particularly strong on the interplay of nature and nurture.

I also appreciated the way the author highlighted the drawbacks of the haphazard quality control in the early experiments in gene replacement therapies – experiments with tragic consequences. The lack of care in these experiments led to an understandable institutional backlash which arguably set back this field of therapies by around a decade.

By the time I read this book, I had already published my own book “The Abolition of Aging: The forthcoming radical extension of healthy human longevity”. I was relieved to find no reason to alter any of the conclusions or recommendations in my book as a result of the magisterial quantity of research reviewed by Mukherjee.

The Youth Pill: Scientists at the Brink of an Anti-Aging Revolution

Written by David Stipp. During the course of writing my own book “The Abolition of Aging” I consulted many books on medical treatments for anti-aging. I read this particular book all the way through, twice, at different stages of my research.

Stipp has been writing on the subject of medicine and aging for leading publications such as the Wall Street Journal and Fortune since the early 1980s. Over that time, he has built up an impressive set of contacts within the industry.

Stipp’s book is full of fascinating nuggets of insight, including useful biographical background details about many of the researchers who are pushing back the boundaries of knowledge in what is still a relatively young field.

Inhuman Bondage: The Rise and Fall of Slavery in the New World

Written by David Brion Davis. This sweeping account of the history of slavery draws on many decades of the author’s research as one of the preeminent researchers in the field. The book interweaves heart-rending accounts with careful reflection. This is a story that includes both dreadful low points and inspiring high points of human behaviour. There’s a great deal to be learned from it.

Davis quotes with approval the prominent Irish historian W.E.H. Lecky who concluded in 1869 that:

The unwearied, unostentatious, and inglorious crusade of England against slavery may very probably be regarded as among the three or four perfectly virtuous acts recorded in the history of nations.

The thorough analysis by Davis makes it clear that:

  • The abolition of slavery was by no means inevitable or predetermined
  • There were strong arguments against the abolition of slavery – arguments raised by clever, devout people in both the United States and the United Kingdom – arguments concerning economic well-being, among many other factors
  • The arguments of the abolitionists were rooted in a conception of a better way of being a human – a way that avoided the harsh bondage and subjugation of the slave trade, and which would in due course enable many millions of people to fulfil a much greater potential
  • The cause of the abolition of slavery was significantly advanced by public activism – including pamphlets, lectures, petitions, and municipal meetings
  • The abolition of slavery cannot be properly understood without appreciating the significance of moral visions that “could transcend narrow self-interest and achieve genuine reform.”

One reason I read this book was to consider the strengths of a comparison I wanted to make in my own writing: a comparison between the abolition of slavery and the abolition of aging. My conclusion is that the comparison is a good one – although I recognise that some readers find it shocking:

  • With its roots in the eighteenth century, and growing in momentum as the nineteenth century proceeded, the abolition of slavery eventually became an idea whose time had come – thanks to brave, smart, persistent activism by men and women with profound conviction
  • With a different set of roots in the late twentieth century, and growing in momentum as the twenty-first century proceeds, the abolition of aging can, likewise, become an idea whose time has come. It’s an idea about an overwhelmingly better future for humanity – a future that will allow billions of people to fulfil a much greater potential. But as well as excellent engineering – the creation of reliable, accessible rejuvenation therapies – this project will also require brave, smart, persistent activism, to change the public landscape from one hostile (or apathetic) to rejuveneering into one that deeply supports it.

American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity

Written by Jacob S. Hacker and Paul Pierson. Many of the great social reforms of the last few centuries required sustained government action to make them happen. But governments need to work effectively alongside the remarkable capabilities of the market economy. Getting the right balance between these two primal forces is crucial.

The authors of this book defend a very interesting viewpoint, namely that the mixed economy was the most important social innovation of the 20th century:

The mixed economy spread a previously unimaginable level of broad prosperity. It enabled steep increases in education, health, longevity, and economic security.

They explain the mixed economy by an elaboration of Adam Smith’s notion of “the invisible hand”:

The political economist Charles Lindblom once described markets as being like fingers: nimble and dexterous. Governments, with their capacity to exercise authority, are like thumbs: powerful but lacking subtlety and flexibility. The invisible hand is all fingers. The visible hand is all thumbs. Of course, one wouldn’t want to be all thumbs. But one wouldn’t want to be all fingers either. Thumbs provide countervailing power, constraint, and adjustments to get the best out of those nimble fingers.

The authors’ characterisation of the positive role of government is, to my mind, spot on. It’s backed up by lots of instructive episodes from American history, going all the way back to the revolutionary founders:

  • Governments provide social coordination of a type that fails to arise by other means of human interaction, such as free markets
  • Markets can accomplish a great deal, but they’re far from all-powerful. Governments ensure that suitable investment takes place of the sort that would not happen, if it was left to each individual to decide by themselves. Governments build up key infrastructure where there is no short-term economic case for individual companies to invest to create it
  • Governments defend the weak from the powerful. They defend those who lack the knowledge to realise that vendors may be on the point of selling them a lemon and then beating a hasty retreat. They take actions to ensure that social free-riders don’t prosper, and that monopolists aren’t able to take disproportionate advantage of their market dominance
  • Governments prevent all the value in a market from being extracted by forceful, well-connected minority interests, in ways that would leave the rest of society impoverished. They resist the power of “robber barons” who would impose numerous tolls and charges, stifling freer exchange of ideas, resources, and people. Therefore governments provide the context in which free markets can prosper (but which those free markets, by themselves, could not deliver).

It’s a deeply troubling development that the positive role of enlightened government is something that is increasingly poorly understood. Instead, as a result of a hostile barrage of ideologically-driven misinformation, more and more people are calling for a reduction in the scope and power of government. This book describes that process as a form of collective “amnesia” (forgetfulness). It was one of the most frightening books I read in 2016.

In describing this book as “frightening”, I don’t mean that the book is bad. Far from it. What’s frightening is the set of information clearly set out in the book:

  • The growing public hostility, especially in America (but shared elsewhere, to an extent) towards the idea that government should be playing any significant role in the well-being of society
  • The growing identification of government with self-serving empire-building bureaucracy
  • The widespread lack of understanding of the remarkable positive history of public action by governments that promoted overall social well-being (that is the “amnesia” of the title of the book)
  • The decades-long growing tendency of many in America – particularly from the Republicans – to denigrate and belittle the role of government, for their own narrow interests
  • The decades-long growing tendency of many others in America to keep quiet, in the face of Republican tirades against government, rather than speaking up to defend it.

I listened to the concluding chapters of American Amnesia during the immediate aftermath of the referendum in the UK on the merits of remaining within the EU. The parallels were chilling:

  • In the EU, the positive role of EU governance has been widely attacked, over many decades, and only weakly defended. This encouraged a widespread popular hostility towards all aspects of EU governance
  • In the US, the positive role of US governance has been widely attacked, over many decades, and only weakly defended. This encouraged a widespread popular hostility towards all aspects of US governance. The commendable ambitions of the Obama government therefore ran into all sorts of bitter opposition.

I wrote the following in July, in a Transpolitica review article “Flawed humanity, flawed politics”:

The parallels might run one step further. To me, and many others, it was almost unthinkable that the referendum in the UK would come down in favour of leaving the EU. Likewise, it’s unthinkable to many in the US that Donald Trump will receive a popular mandate in the forthcoming November elections.

But all bets are off if the electorate (1) Feel sufficiently alienated; (2) Imbibe a powerful sense of grievance towards “the others” who are perceived to run government; (3) Lack a positive understanding of the actual role of big government.

I take no pleasure in what turned out to be the prescience of those remarks. That was a prediction where I did not want to be correct.

And the Weak Suffer What They Must?: Europe’s Crisis and America’s Economic Future

Written by Yanis Varoufakis. This book has some striking parallels with American Amnesia: the author provides a gripping survey of many parts of history that have consequences for the present time. Varoufakis focuses on the development of the European Union.

Time and again I discovered in the pages of this book important new aspects of events that I thought I already knew well, but where it turned out there were key connections that I had missed. In short, the book is full of powerful back stories to the current EU situation.

Whilst supporting many of the ideals of the EU, Varoufakis is an incisive critic of many of its aspects. Like the supporters of Brexit, he sees plenty that is deeply dysfunctional about the current organisation of the EU. However, he believes that fixing the EU is both more practical and more desirable than turning our backs on it and hoping to benefit from its likely subsequent unravelling. Varoufakis is one of the leaders of the DiEM25 movement, which describes itself as follows:

DiEM25 is a pan-European, cross-border movement of democrats.

We believe that the European Union is disintegrating. Europeans are losing their faith in the possibility of European solutions to European problems. At the same time as faith in the EU is waning, we see a rise of misanthropy, xenophobia and toxic nationalism.

If this development is not stopped, we fear a return to the 1930s. That is why we have come together despite our diverse political traditions – Green, radical left, liberal – in order to repair the EU. The EU needs to become a realm of shared prosperity, peace and solidarity for all Europeans. We must act quickly, before the EU disintegrates.

I expect the influence of DiEM25 to grow during the next few months, as the public discussion about the future of Europe becomes more contentious. They’re holding a public meeting in London on the evening of Friday 27th January:

A troubled Britain is on its way out of a troubled European Union. Disintegration and xenophobia are in the air. The government in London is in disarray. But so is every other government in Europe, not to mention the European Commission whose authority is tending increasingly towards zero.

The only forces to be gathering strength everywhere are those of what might be called a Nationalist International, spreading their belligerent reach to Trump’s America. Bellicose nativism is on the rise propagating a thinly-veiled discursive ethnic cleansing. Even sections of the Left are succumbing to arguments in favour of retreating behind the nation-state and stricter border controls.

Srećko Horvat, a Croat philosopher, Elif Shafak, renowned Turkish novelist, and Yanis Varoufakis, Greece’s former finance minister, bring to this conversation an intriguing perspective. As intellectuals who know Britain well, they understand first hand the perils of nationalism, disintegration, isolationism and marginalisation. They will place post-Brexit Britain in a context informed by a view of Europe and Britain from the continent’s opposite ‘corner’, sharing insights from Greece’s tensions with Brussels and Berlin, Yugoslavia’s disintegration, and Turkey’s fraught relationship with a Europe that both courts and marginalises it.

Moderated by Owen Jones, a passionate campaigner for a quite different Britain in a quite different Europe, it promises to be an evening that restores confidence in Britain’s and Europe’s humanist and internationalist potential.

I’m looking forward to it!

Red Notice: A True Story of High Finance, Murder, and One Man’s Fight for Justice

Written by Bill Browder. Any vision for a better future needs to include an assessment of the power and intentions of the regime of Vladimir Putin in Russia. Since there are many dark clouds on the international horizon, it’s understandable that some thinkers are clutching at straws of hope that Putin could become a reliable partner in the evolution of the international system. Perhaps. But any such thoughts need to be well aware of the horrific dark side of the Kremlin. We would be foolish to risk rose-tinted spectacles in this case.

This book provides ample documentation of many truly shocking abuses of power in Russia. Browder has an intriguing personal back story, which takes up the first half of the book. This part of the book explains how Browder’s investment fund, Hermitage Capital, came to be the leading non-Russian participant in Russian companies following the wave of post-Soviet privatisations. It also explains Browder’s fierce legal conflicts with some of the Russian oligarchs, as Browder sought to prevent further fleecing of the assets of companies in which he had invested.

For a while, it seemed that Putin supported what Browder was doing. But then Browder became an increasing annoyance to the Kremlin. What happens next is astonishing. Of all the books I read in 2016, this was the most gripping.

Browder is sometimes described as “Putin’s No. 1 enemy”. The book provides considerable justification for that claim. The story is by no means over. Browder continues to speak publicly about his story: I saw him speak at the Wired 2016 event in November in London. I commend the Wired organisers for having the breadth of vision to provide Browder with a key speaking slot.

Politics: Between the Extremes

Written by Nick Clegg. Clegg is the former leader of the Liberal Democrat party who was deputy prime minister of the UK from 2010 to 2015. His subsequent fall from power, as the LibDems were trounced in the May 2015 general election, was harsh and bitter. Huge numbers of former supporters of the party turned against it.

Nevertheless, Clegg is one of the most thoughtful politicians in the UK today. His book includes candid assessments of the mistakes he made, and his regrets for not doing things differently. One of the biggest regrets is not paying more attention to matters of communication: the LibDems frequently failed to get the credit for important contributions to the coalition government. As such, the party was out-manoeuvred by more powerful forces.

The book is an eloquent appeal for greater “liberalism” in politics – less certainty and dogmatism, more tolerance of diversity, more openness to new opportunities, and more willingness to embrace tricky coalitions. Despite the notes of sadness in the book, there are real grounds for optimism too.

Clegg comes across in the book just as he has at the several public events where I have seen him speak at close hand – as an eminently likeable person, honest about his mistakes, with a passionate belief in better politics and a willingness to build bridges. I’m sure we’ll be seeing more of him in 2017.

Scrum: The Art of Doing Twice the Work in Half the Time

Written by Jeff Sutherland. No matter how much we improve our foresight skills, we’re still likely to encounter surprises as our projects unfold. The world is full of uncertainty. We therefore need to improve our agility skills in parallel with our foresight skills. Agility gives us the ability to change our focus quickly, in the light of better feedback about likely future scenarios.

Scrum is one of the most influential sets of practice for agile working. This book, by one of the co-creators of Scrum, makes it clear that Scrum has wide applicability beyond the context of software development in which it initially grew to fame. Sutherland provides a host of telling examples of how large, cumbersome projects could be transformed into sleeker, more effective vehicles by the application of Scrum ideas such as sprints, scrum masters, transparency, estimation, waste management, and pivots.

If you ever feel overwhelmed by having too much to do – or too much to think about – the ideas in Sutherland’s book could help you break out of being bogged down in analysis-paralysis.

In my own futurist consulting activities, I’m finding that professional audiences are showing increasing interest in the few slides I sometimes include on the topic of “Agile futurism”. Perhaps I ought to flesh out these slides into a new service offering in its own right!

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” track down and shoot the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000 word essay Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic license may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential of stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack first-hand expertise in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insight into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI: in this view, AGI still lies far in the future. As stated by Demis Hassabis, co-founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans: AGI will be a tool for humans, under human control, rather than having its own autonomy. This view is represented in this tweet from science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter; none of them is a reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people read that book. It manages to bring a great many serious arguments to the table, while entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

It’s true that Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. The same goes for Stephen Hawking and Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider, as just one example, Stuart Russell, who is a computer-science professor at the University of California, Berkeley and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry, among them Apple co-founder Steve Wozniak and Microsoft co-founder Bill Gates.

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede that he has shown a remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the criticism about timescales. The first is to point out that Demis Hassabis himself sees no reason for complacency, even if AGI does turn out to require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

N <= 5: No way
5 < N <= 10: Small possibility
10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that the fundamental research problems behind human-level artificial general intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked by which year they thought there was a 90% chance that human-level AGI would be achieved, a 50% chance, and a 10% chance. The median answers were as follows (one way to read these figures is sketched just after the list):

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
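As an aside, here is one way to read those three figures. The short Python sketch below is purely my own illustration, not part of Bostrom’s surveys: it treats the three median dates as points on a cumulative probability curve and linearly interpolates between them to give a rough probability of human-level AGI arriving by a chosen year. The linear interpolation is an assumption made only for this illustration.

```python
# Illustrative only: the survey gives three median dates; the linear
# interpolation between them is an assumption, not Bostrom's method.

# (year, cumulative probability) pairs taken from the survey medians above
SURVEY_POINTS = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

def prob_agi_by(year):
    """Roughly estimate P(human-level AGI by `year`) from the survey medians."""
    (first_year, first_prob), *rest = SURVEY_POINTS
    if year <= first_year:
        return first_prob          # clamp: at least 10% by 2022
    prev_year, prev_prob = first_year, first_prob
    for y, p in rest:
        if year <= y:
            # linear interpolation between the two bracketing survey points
            fraction = (year - prev_year) / (y - prev_year)
            return prev_prob + fraction * (p - prev_prob)
        prev_year, prev_prob = y, p
    return prev_prob               # clamp: 90% is the last point we have

if __name__ == "__main__":
    for y in (2025, 2040, 2060):
        print(y, round(prob_agi_by(y), 2))   # e.g. 2025 -> 0.17, 2060 -> 0.73
```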

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” that spur society to more fully support AGI research (especially when AGI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

Physics professor and science populariser Michio Kaku, for instance, speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do (a toy example of this is sketched after this list)
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.
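To make the point about software doing “what we asked” rather than what we wished more concrete, here is a deliberately trivial sketch. Everything in it – the objective, the state, the available actions – is invented purely for illustration; it isn’t drawn from any real AI system. A literal-minded optimiser, told to minimise the number of logged errors, discovers that switching off the logger scores better than fixing bugs.

```python
# Toy illustration only: an invented objective and invented "actions",
# showing how a literal objective can be satisfied against our intent.

# The system state records how many real errors exist and whether logging is on.
ACTIONS = {
    "fix_one_bug":     lambda s: {**s, "errors": max(0, s["errors"] - 1)},
    "disable_logging": lambda s: {**s, "logging": False},
}

def logged_errors(state):
    # The objective we literally specified: errors that appear in the log.
    return state["errors"] if state["logging"] else 0

def best_action(state):
    # A literal-minded optimiser: pick whichever action minimises the metric.
    return min(ACTIONS, key=lambda name: logged_errors(ACTIONS[name](state)))

if __name__ == "__main__":
    state = {"errors": 42, "logging": True}
    print(best_action(state))  # -> "disable_logging": metric satisfied, intent violated
```

Real systems are of course far subtler, but the structural problem – an objective that is easier to satisfy by circumventing our intent than by fulfilling it – is the same one that Bostrom analyses at length.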

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere roughly 40,000 kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down – believing instead the larger (and more accurate) figures for the circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) has up to four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!
