dw2

20 July 2018

Christopher Columbus and the surprising future of AI

Filed under: AGI, predictability, Singularity — David Wood @ 5:49 pm

There are plenty of critics who are sceptical about the future of AI. The topic has been over-hyped, they say, and we don’t need to be worried about the longer-term repercussions of AI with superhuman capabilities. We’re many decades – perhaps centuries – away from anything approaching AGI (artificial general intelligence) with common-sense reasoning skills matching (or surpassing) those of humans. As for AI destroying jobs, that, too, is a false alarm – or so the critics insist. AI will create at least as many jobs as it destroys.

In my previous blog post, Serious questions over PwC’s report on the impact of AI on jobs, I offered some counters to these critics. To my mind, this is no time for complacency: AI could accelerate in its capabilities, and take us by surprise. The kinds of breakthroughs that, in a previous era, might have been expected to take many decades, could actually take place in just a few short years. Rather than burying our heads in the sand, denying the possibility of any such acceleration, we need to pay more attention to the trends of technological change and the potential for disruptive new innovations.

The Christopher Columbus angle

Overnight, I’ve been reminded of an argument that I’ve used previously – towards the end of a rather long blogpost. It’s the argument that critics of the future of AI are similar to the critics of Christopher Columbus – the people who said, before his 1492 voyage across the Atlantic in search of a westerly route to Asia, that the effort was bound to be a bad investment.

Bear with me while I retell this analogy.

For years, Columbus tried to drum up support for what most people considered to be a hare-brained scheme. Most observers concluded that Columbus had fallen victim to a significant mistake – he estimated that the distance from the Canary Islands (off the coast of Morocco) to Japan was around 3,700 km, whereas the generally accepted figure was closer to 20,000 km. Indeed, the true circumference of the Earth had been known since the 3rd century BC, thanks to a calculation by Eratosthenes based on observations of shadows at different locations.

Accordingly, when Columbus presented his bold proposal to courts around Europe, the learned members of the courts time and again rejected the idea. The effort would be hugely larger than Columbus supposed, they said. It would be a fruitless endeavour.

Columbus, an autodidact, wasn’t completely crazy. He had done a lot of his own research. However, he was misled by a number of factors:

  • Confusion between various ancient units of distance (the “Arabic mile” and the “Roman mile”)
  • An overestimate of how many degrees of longitude the Eurasian landmass occupied (225 degrees versus 150 degrees)
  • A speculative 1474 map, by the Florentine astronomer Toscanelli, which showed a mythical island “Antilla” located to the east of Japan (labelled “Cippangu” on the map).

You can read the details in the Wikipedia article on Columbus, which provides numerous additional reference points. The article also contains a copy of Toscanelli’s map, with the true location of the continents of North and South America superimposed for reference.

No wonder Columbus thought his plan might work after all. Nevertheless, the 1490s equivalents of today’s VCs kept saying “No” to his pitches. Finally, spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to take the risk of supporting his adventure. After stopping in the Canaries to restock, the Nina, the Pinta, and the Santa Maria set off westward. Five weeks later, the crew spotted land, in what we now call the Bahamas. And the rest is history.

But it wasn’t the history expected by Columbus, or by his backers, or by his critics. No-one had foreseen that a huge continent existed in the oceans in between Europe and Japan. None of the ancient writers – either secular or religious – had spoken of such a continent. Nevertheless, once Columbus had found it, the history of the world proceeded in a very different direction – including mass deaths from infectious diseases transmitted from the European sailors, genocide and cultural apocalypse, and enormous trade in both goods and slaves. In due course, it would be the ingenuity and initiative of people subsequently resident in the Americas that propelled humans beyond the Earth’s atmosphere all the way to the moon.

What does this have to do with the future of AI?

Rational critics may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

Just as the contemporaries of Columbus erred in presuming they already knew all the large features of the earth’s continents (after all: if America really existed, surely God would have written about it in the Bible…), modern-day critics of AI can err in presuming they already know all the large features of the landscape of possible artificial minds.

When contemplating the space of all possible minds, some humility is in order. We cannot foretell in advance what configurations of intelligence are possible. We don’t know what may happen if separate modules of reasoning are combined in innovative ways. After all, many aspects of the human mind are themselves still poorly understood.

When critics say that it is unlikely that present-day AI mechanisms will take us all the way to AGI, they are very likely correct. But it would be a horrendous error to draw the conclusion that meaningful new continents of AI capability are inevitably still the equivalent of 20,000 km into the distance. The fact is, we simply don’t know. And for that reason, we should keep an open mind.

One day soon, indeed, we might read news of some new “AUI” having been discovered – some Artificial Unexpected Intelligence, which changes history. It won’t be AGI, but it could have all kinds of unexpected consequences.

Beyond the Columbus analogy

Every analogy has its drawbacks. Here are three ways in which the discovery of an AUI could be different from the discovery by Columbus of America:

  1. In the 1490s, there was only one Christopher Columbus. Nowadays, there are scores (perhaps hundreds) of schemes underway to try to devise new models of AI. Many of these are proceeding with significant financial backing.
  2. Whereas the journey across the Atlantic (and, eventually, the Pacific) could be measured by a single variable (latitude), the journey across the vast multidimensional landscape of artificial minds is much less predictable. That’s another reason to keep an open mind.
  3. Discovering an AUI could drastically transform the future of exploration in the landscape of artificial minds. Assisted by AUI, we might get to AGI much quicker than without it. Indeed, in some scenarios, it might take only a few months after we reach AUI for us (now going much faster than before) to reach AGI. Or days. Or hours.

Footnote

If you’re in or near Birmingham on 11th September, I’ll be giving a Funzing talk on how to assess the nature of the risks and opportunities from superhuman AI. For more details, see here.

 

19 July 2018

Serious questions over PwC’s report on the impact of AI on jobs

Filed under: politics, robots, UBI, urgency — David Wood @ 7:47 pm

A report (PDF) issued on Tuesday by consulting giant PwC has received a lot of favourable press coverage.

Here’s PwC’s own headline summary: “AI and related technologies should create as many jobs as they displace”:

AI and related technologies such as robotics, drones and driverless vehicles could displace many jobs formerly done by humans, but will also create many additional jobs as productivity and real incomes rise and new and better products are developed.

We estimate that these countervailing displacement and income effects on employment are likely to broadly balance each other out over the next 20 years in the UK, with the share of existing jobs displaced by AI (c.20%) likely to be approximately equal to the additional jobs that are created…

BBC News picked up the apparent good news: “AI will create as many jobs as it displaces – report”:

A growing body of research claims the impact of AI automation will be less damaging than previously thought.

Forbes chose this headline: “AI Won’t Kill The Job Market But Keep It Steady, PwC Report Says”:

It’s impossible to say precisely how artificial intelligence will disrupt the job market, so researchers at PwC have taken a bird’s-eye view and pointed to the results of sweeping economic changes.

Their prediction, in a new report out Tuesday, is that it will all balance out in the end.

PwC are to be commended for setting out their reasoning clearly, over 16 pages (p36-p51) in their PDF report.

But three major questions need to be raised about their analysis. These questions throw a different light on the conclusions of the report.

This diagram covers the essence of the model used by PwC:

Q1: How will firms handle the “income effect”?

I agree that automation is likely to generate significant amounts of additional profits, as well as market demand for extra goods and services.

But what’s the reason for assuming that firms will “hire more workers” in response to this demand?

Mightn’t it be more financially attractive to these companies to incorporate more automation instead? Mightn’t more robots be a better investment than more human workers?

The justification for thinking that there will be plenty of new jobs for humans in this scenario is the assumption that many tasks will remain outside the capability of automation. That is, the analysis depends on humans having skills which cannot be duplicated by AIs, software, robots, or other automation. The assumption is true today, but will it remain true over the next two decades?

PwC’s report points to sectors such as healthcare, social work, education, and science, as areas where jobs are likely to grow over the next twenty years. But that takes us to the second major question.

Q2: What prevents acceleration in the capabilities of AI?

PwC’s report, like many others that mainstream consultancies produce, basically assumes that the AI of 10-15 years’ time will be a simple extension of today’s AI.

Of course, no one knows for sure how AI will develop over the years ahead. But I see it as irresponsible to neglect scenarios in which AI progresses in leaps and bounds.

Just as the whole field of AI was given a huge shot in the arm by unexpected breakthroughs in the performance of deep learning from around 2012 onwards, we should be open to the possibility of additional breakthroughs in the years ahead, enabled by a combination of the following trends:

  • Huge commercial prizes are awaiting the companies that can improve their AI capabilities
  • Huge military prizes are awaiting the countries that can improve their AI capabilities
  • More developers, entrepreneurs, designers, and systems integrators are active in AI than ever before, exploring an incredible variety of different concepts
  • Increased knowledge of how the human brain operates is being fed into ideas for how to improve AI
  • Cheaper hardware, including easy access to vast cloud computing resources, means that investigations of novel AI models can take place more quickly than before
  • AI can be used to improve some of its own capabilities, in positive feedback loops, and in new “generative adversarial” settings
  • Hardware innovations including new chipset designs and quantum computing could turn today’s crazy ideas into tomorrow’s practical realities.

Today’s AI already shows considerable promise in fields such as transfer learning, artificial creativity, the detection and simulation of emotions, and concept formulation. How quickly will progress occur? My view: slowly, and then quickly.

Q3: How might the “displacement effect” be altered?

In parallel with rating the income effect much more highly than I think is prudent, the PwC analysis offers in my view some dubious reasoning for lowering the displacement effect:

Although we estimate that up to 30% of existing UK jobs could be at high risk of being automated, a job being at “high risk” of being automated does not mean that it will definitely be automated, as there could be a range of economic, legal and regulatory and organisational barriers to the adoption of these new technologies…

We think it is reasonable to scale down our estimates by a factor of two thirds to reflect these barriers, so our central estimate of the proportion of existing jobs that will actually be automated over the next 20 years is reduced to 20%.

Yes, a whole panoply of human factors can alter the speed of the take-up of new technology. But such factors aren’t always brakes. In some circumstances – as perceptions change – they can become accelerators.

Consider if companies in one country (e.g. the UK) are slow to adopt some new technology, but rival companies overseas act more quickly. Declining competitiveness will be one reason for the mindset to change.

A different example: attitudes towards interracial marriages, or towards same-sex marriages, changed slowly for a long time, until they started to change faster.

Q4: What are the consequences of negligent forecasting?

Here’s a bonus question. Does it really matter if PwC get these forecasts wrong? Or is it better to err on the conservative side?

I imagine PwC consultants reasoning along the following lines. Let’s avoid panic. Changes in the job market are likely to be slow in at least the shorter term. Provided that remains the case, the primary pieces of policy advice offered in the report make sense:

Government should invest more in ‘STEAM’ skills that will be most useful to people in this increasingly automated world.

Place-based industrial strategy should target job creation.

The report follows up these recommendations with a different kind of policy advice:

Government should strengthen the safety net for those who find it hard to adjust to technological changes.

But the question is: how much attention should be given, in relative terms, to these two different kinds of advice? Should society put more effort into new training programmes, or into redesigning the prevailing social contract?

So long as the impact of automation on the job market is relatively small, perhaps less effort is needed to work on a better social safety net. But if the impact could be significantly higher, well, many people find that too frightening to contemplate. Hence the desire to sweep such ideas under the carpet – similar to how polite society once avoided using the word “cancer”.

My own view is that the balance of emphasis in the PwC report is the wrong way round. Society urgently needs to anticipate new structures (and new philosophies) that cope with large proportions of the workforce no longer being able to earn income from paid employment.

That’s the argument I made in, for example, my opening remarks in the recent London Futurists conference on UBIA (Universal Basic Income and/or Alternatives):

… and I took the time at the end of the event to back up my assertions with a wider analysis:

To be clear, I see many big challenges in working out how a new post-work social contract will operate – and how society can transition from our present system to this new one. But the fact that these tasks are hard is all the more reason to look at them calmly and carefully. Obscuring the need for these tasks under a flourish of proposals to increase ‘STEAM’ skills and improve apprentice schemes is, sadly, irresponsible.

17 July 2018

Would you like your mind expanded?

Filed under: books, healthcare, psychology, religion — David Wood @ 10:15 pm

Several times while listening to the audio of the recent new book How to Change Your Mind by Michael Pollan, I paused the playback and thought to myself, “wow”.

Pollan is a gifted writer. He strings together words and sentences in a highly elegant way. But my reactions to his book were caused by the audacity of the ideas conveyed, even more than by the powerful rhythms and cadences of the words doing the conveying.

Pollan made his reputation as a writer about food. The most famous piece of advice he offered, earlier in his career, is the seven word phrase “Eat food, not too much, mostly plants”. You might ask: What do you mean by food? Pollan’s answer: “Don’t eat anything your great grandmother wouldn’t recognize as food.”

With such a background, you might not expect any cutting-edge fireworks from Pollan. However, his most recent book bears the provocative subtitle What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence. That’s a lot of big topics. (On reflection, you’ll realise that your great grandmother might have had things to say about all these topics.)

The book covers its material carefully and patiently, from multiple different perspectives. I found it engaging throughout – from the section at the beginning when Pollan explained how he, in his late 50s, became more interested in this field – via sections covering the evolutionary history of mushrooms, thoughtful analyses of Pollan’s own varied experiences with various psychedelics, and the rich mix of fascinating characters in psychedelic history (many larger-than-life, others preferring anonymity) – to sections suggesting big implications for our understanding of mental wellbeing, illnesses of the mind, and the nature of spirituality.

If any of the following catch your interest, I suggest you check out How to Change your Mind:

  • The likely origins of human beliefs about religion
  • Prospects for comprehensive treatments of depression, addiction, and compulsive behaviour
  • The nature of consciousness, the self, and the ego
  • Prospects for people routinely becoming “better than well”
  • Ways in which controversial treatments (e.g. those involving psychedelics) can in due course become accepted by strait-laced regulators from the FDA and the EMA
  • The perils of society collectively forgetting important insights from earlier generations of researchers.

Personally, I particularly enjoyed the sections about William James and Aldous Huxley. I already knew quite a lot about both of them before, but Pollan helped me see their work in a larger perspective. There were many other characters in the book that I learned about for the first time. Perhaps the most astonishing was Al Hubbard. Mind-boggling, indeed.

I see How to Change your Mind as part of a likely tipping point of public acceptability of psychedelics. It’s that well written.

In case it’s not clear, you ought to familiarise yourself with this book if:

  • You consider yourself a futurist – someone who attempts to anticipate key changes in social attitudes and practices
  • You consider yourself a transhumanist – someone interested in extending human experience beyond the ordinary.

22 June 2018

June 24th: A doubly historic day for Symbian

Filed under: smartphones, Smartphones and beyond, Symbian, Symbian Foundation, Symbian Story — David Wood @ 11:13 pm

This Sunday will be the 24th of June 2018. It’s a doubly historic day for Symbian – and for the evolution of the smartphone industry.

Twenty years ago, to the day, the birth of Symbian Ltd was announced to the world. My colleague on the very first Symbian Operational Board, Bill Batchelor, urged all employees of the new company to “make a special note in your Agenda”.

Here’s a copy of my own Agenda file from that day – taken from my Psion Series 5mx:

The name “Symbian” had been a carefully guarded secret up to that day. The new company had been referred to, within planning documents with tightly restricted distribution, as “Nova” – representing an astronomically bright object. The very idea of a new company took nearly all employees of Psion (Symbian’s parent) by surprise that morning.

The thinking behind the creation of the new company was spelt out at an “Impact” meeting in the Metropole Hotel on London’s Edgware Road. To mark the anniversary of this event, it’s an appropriate occasion for me to share some of the slides presented that day:

With the wisdom of hindsight, these slides can be seen as a mixture of powerful vision and naively audacious optimism.

Fast forward exactly ten years, to 24th June 2008. That morning, I was in Cambridge, ready to share news to all Symbian employees there that another huge transformation was to take place in the Symbian universe. Here’s my Agenda entry for that day:

I can, again, convey the essence of the news via a selection of the slides used on that day:

Once again, with the wisdom of hindsight, these slides can be seen as a mixture of powerful vision and naively audacious optimism.

More of our thinking was captured at the time by blogposts written by me (“Symbian 2-0”) and my Symbian Foundation Leadership Team colleague John Forsyth (“Welcome to the future of Symbian”).

The thinking behind the Symbian Foundation also built upon an inspired piece of strategic communication from earlier in 2008, led by Symbian’s CEO from that time, Nigel Clifford. He called it “the Symbian story”:

Did either of these powerful visions, set out ten years apart, have much of a chance of becoming a reality? Opinions still differ on that question. I’ve set out my own analysis in my book “Smartphones and beyond: lessons from the remarkable rise and fall of Symbian” (published in September 2014).

Footnote

Any former Symbian employee who wishes to take part in some face-to-face reminiscences, and who can be near Symbian’s former headquarters in Boundary Row, Southwark, London, on the evening of Friday 29th June, is welcome to get in touch. Several of us will be gathering, ready to share news and views of what was, and what might have been.

18 June 2018

Politics for normal people: strongly recommended!

Filed under: politics, UBI — David Wood @ 9:19 am

The first I heard about Andrew Yang was something like this: a supporter of universal basic income (UBI) wants to become the President of the United States, and has written a book in favour of the idea.

I confess I was a bit sceptical. Most books written by aspiring politicians are lightweight.

However, it turns out that to describe the book Yang has written as simply a book about UBI is to vastly understate its scope and power. I’ve just finished listening to the audio version of it, and I’m very impressed.

Yes, the book says sensible, thoughtful things about the likely advantages of UBI, and how it might be paid for. But it says much more than that.

The title of the book is “The war on normal people”. These “normal people” are ones with statistically median characteristics – median education levels, median income, median family circumstances, and so on. These aren’t the people who tend to congregate in the parts of the USA where the economy is still doing well – such as New York, Boston, Silicon Valley, and Seattle. As Yang highlights in chapter after chapter of grim but compelling reading, the prospects for these normal people are bad – if trends continue on their current trajectories.

Yang has an impressive CV of activities he undertook prior to announcing his interest in becoming President. Here’s an extract from his online biography:

I’m not a career politician—I’m an entrepreneur who understands the economy. It’s clear to me, and to many of the nation’s best job creators, that we need to make an unprecedented change, and we need to make it now. But the establishment isn’t willing to take the necessary bold steps…

I was born in upstate New York in 1975. My parents immigrated from Taiwan in the 1960s and met in grad school. My Dad was a researcher at IBM—he generated 69 patents over his career—and my Mom was the systems administrator at a local university. My brother and I grew up pretty nerdy. We also grew up believing in the American Dream—it’s why my parents came here.

I studied economics and political science at Brown and went to law school at Columbia. After a brief stint as a corporate lawyer, I realized it wasn’t for me. I launched a small company in the early days of the internet that didn’t work out, and then worked for a healthcare startup, where I learned how to build a business from more experienced entrepreneurs. In my thirties, I ran a national education company that grew to become #1 in the country. I also met my wife, Evelyn, and got married. My education company was acquired, and with Evelyn’s support, I decided to take my earnings and committed myself to creating jobs in cities hit hard by the financial crisis. By that time I understood the power of entrepreneurship to generate economic growth, so I founded Venture for America (VFA), an organization that helps entrepreneurs create jobs in cities like Baltimore, Detroit, Pittsburgh, and Cleveland.

VFA resonates with so many people because it’s clear there’s a growing problem in the U.S.: automation is destroying jobs and entire regions are being left behind. For years I believed new business formation was the answer—if we could train a new generation of entrepreneurs and create the right jobs in the right places, we could stop the downward spiral of growing income inequality, poverty, unemployment, and hopelessness. VFA created jobs by the thousands and continues to do amazing work across the country. But along the way, it became clear to me that job creation will not outpace the massive impending job loss due to automation. Those days are simply over.

Yang draws on many of his personal experiences in the book. He describes how, despite lots of good intentions from existing political and social leaders, much of the interior of the USA has been heading steadily downhill. The fixes that are often proposed – such as retraining people from one career skillset to another – aren’t going to work on the scale needed.

Hence Yang’s summary:

I fear for the future of our country. New technologies – robots, software, artificial intelligence – have already destroyed more than 4 million US jobs, and in the next 5-10 years, they will eliminate millions more. A third of all American workers are at risk of permanent unemployment. And this time, the jobs will not come back.

Despite the depressing analysis in the opening two thirds of the book, the final sections are full of what I see as credible optimism.

Here’s his headline vision:

As president, my first priority will be to implement Universal Basic Income for every American adult between the ages of 18 and 64: $1,000 a month, no strings attached, paid for by a new tax on the companies benefiting most from automation.

However – of key importance – Yang’s book makes clear that he understands that the financial payments are only a small part of the transformation that needs to take place.

UBI is just the beginning. A crisis is underway—we have to work together to stop it, or risk losing the heart of our country. The stakes have never been higher.

Yang makes lots of interesting proposals about changes to communities (including systems of “digital social credits”), healthcare, and education. These build on his own experiences within the healthcare and education industries, and resonate with observations from my own career. For example, Yang urges that we shouldn’t think of education as being primarily about preparing someone for employment. Instead, it should be about the development of character.

Anyone else considering running for political office would do well to compare their vision with that of Andrew Yang. I wish him the best of success.

28 May 2018

Tug Life IV: Beware complacency

Filed under: Events, futurist — David Wood @ 10:28 am

What does the collision of creativity, media and tech mean for humans?

That’s the overall subject for a series of events, Tug Life IV, being run from 12-15 June by Tug, a Shoreditch-based digital marketing agency.

As the ‘IV’ in the name suggests, it’s the fourth year such a series of events has been held – each time as part of the annual London Tech Week.

I was one of the speakers last year – at Tug Life III – when my topic was “What happens to humans as machines become more embedded in our lives?”

I enjoyed that session so much that I’ve agreed to be one of the speakers in the opening session of Tug Life IV this year. It’s taking place on the morning of Tuesday 12th June, on the topic “What are we doing with technology? Is it good for us? What should we do about it? As individuals? As businesses? As government?”

Other speakers for this session will include representatives from TalkToUs.AI, Microsoft, and Book of the Future. To register to attend, click here. Note that, depending on the availability of tickets, you can sign up for as many – or as few – of the Tug Life IV events as best match your own areas of interest and concern.

Ahead of the event, I answered some questions from Tug’s Olivia Lazenby about the content of the event. Here’s a lightly edited transcript of the conversation:

Q: What will you be talking about at Tug Life IV?

I’ll be addressing the questions, “Is technology good for us? And what should we do about it?”

I’ll be outlining three types of scenarios for the impact of technology on us, over the next 10-25 years:

  1. The first is business as usual: technology has, broadly, been good for us in the past, and will, broadly, continue to be good for us in the future.
  2. The second is social collapse: technology will get out of hand, and provoke a set of unintended adverse consequences, resulting in humanitarian tragedy.
  3. The third is sustainable abundance for all, in which technology enables a huge positive leap for society and humanity.

In my talk, I’ll be sharing my assessment of the probabilities for these three families of scenario, namely, 10%, 30%, and 60%, respectively.

Q: Would you agree that the conversation around the future of technology has become increasingly polarised and sensationalised?

It’s good that the subject is receiving more airtime than before. But much of the coverage remains, sadly, at a primitive level.

Some of the coverage is playing for shock value – clickbait etc.

Other coverage is motivated by ideologies which are, frankly, well past their sell-by date – ideologies such as biological exceptionalism.

Finally, another distortion is that quite a few of the large mainstream consultancies are seeking to pass on blandly reassuring messages to their clients, in order to bolster their “business as usual” business models. I view much of that advice as irresponsible – similar to how tobacco industry spokespeople used to argue that we don’t know for sure that smoking causes cancer, so let’s keep calm and carry on.

Q: How can we move past the hysteria and begin to truly understand – and prepare for – how technology might shape our lives in the future?

We need to raise, step by step, the calibre of the conversation about the future. Two keys here are agile futurism and collaborative futurism. There are too many variables involved for any one person – or any one discipline – to be able to figure things out by themselves. The model of Wikipedia is a good one on which to build, but it’s only a start. I’m encouraging people to cooperate in the development of something I call H+Pedia.

My call to action is for people to engage more with the communities of futurists, transhumanists, and singularitarians, who are, thankfully, advancing a collaborative discussion that is progressing objective evaluations of the credibility, desirability, and actionability of key future scenarios. Let’s put aside the distractions of the present in order to more fully appreciate the huge opportunities and huge threats technology is about to unleash.

24 May 2018

Overdue – blog design refresh

Filed under: WordPress — David Wood @ 11:09 am

My training as a software engineer led me to be cautious about making changes in complex software systems. I learned from experience that “simple” changes often had unexpected effects elsewhere in a system.

For that reason, I have shied away, for too long, from a task that needed my attention – improving the readability of this blog.

The font in use on this blog was too small. That made it harder for people to read. Visitors to this blog have mentioned this point on occasion, but I repeatedly put off the task of fixing it.

But this morning, I took the plunge, and found the part of the WordPress design settings that allowed me to change the font. It’s a lot bigger now.

Fearing side-effects, I’ve checked how various previous postings are displayed. So far, I haven’t noticed anything that’s become broken as a result…

… apart from the fact that adjacent lines of text, in multi-line paragraphs, now appear too close together. I’ve tried customising the CSS for this blog, using syntax like the following:

p {
line-height: 1.5;
}

but nothing I’ve typed there has had any effect. Hmm.
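
One possible explanation (an untested guess on my part, rather than a confirmed diagnosis) is that the theme’s own stylesheet sets line-height via a more specific selector, which would beat a bare p rule. If so, a more specific selector plus !important might do the trick. Note that the .entry-content class name below is an assumption – the theme may use a different wrapper around post text:

/* Hypothetical override: target paragraphs inside the post body;
   !important lets this rule win over the theme's normal declarations */
.entry-content p {
line-height: 1.5 !important;
}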

Anyway – I hope this change increases your enjoyment of reading this blog!

Footnote: so far as I know, there will be no change to what people see if they are reading these posts via email, or on a mobile device. The change only takes place when using a desktop browser.

“The People vs. Democracy” – a quick review

Filed under: books, politics, RSA — David Wood @ 1:08 am

If you’re interested in politics, then I recommend the video recording of yesterday’s presentation at London’s RSA by Yascha Mounk.

Mounk is a Lecturer on Government at Harvard University, a Senior Fellow at New America, a columnist at Slate, and the host of The Good Fight podcast. He’s also the author of the book “The People vs. Democracy: Why Our Freedom Is in Danger and How to Save It” which I finished reading yesterday. The RSA presentation provides a good introduction to the ideas in the book.

The book marshals a set of arguments in a compelling way – I hadn’t seen them lined up like that before. It provides some very useful insight into the challenges being posed to the world by the growth of “illiberal democracy” (“populism”). It also explains why these challenges might be more dangerous than many commentators have tended to assume.

(“Don’t worry about the rise of strong men like Trump”, these commentators say. “Sure, Trump is obnoxious. But the traditions of liberal democracy are strong. The separation of powers is such that the excesses of any would-be autocrat will surely be tamed”. Alas, there’s no “surely” about it.)

Here’s how Mounk’s book is described on its website:

The world is in turmoil. From India to Turkey and from Poland to the United States, authoritarian populists have seized power. As a result, Yascha Mounk shows, democracy itself may now be at risk.

Two core components of liberal democracy—individual rights and the popular will—are increasingly at war with each other. As the role of money in politics soared and important issues were taken out of public contestation, a system of “rights without democracy” took hold. Populists who rail against this say they want to return power to the people. But in practice they create something just as bad: a system of “democracy without rights.”

The consequence, Mounk shows in The People vs. Democracy, is that trust in politics is dwindling. Citizens are falling out of love with their political system. Democracy is wilting away. Drawing on vivid stories and original research, Mounk identifies three key drivers of voters’ discontent: stagnating living standards, fears of multiethnic democracy, and the rise of social media. To reverse the trend, politicians need to enact radical reforms that benefit the many, not the few.

The People vs. Democracy is the first book to go beyond a mere description of the rise of populism. In plain language, it describes both how we got here and where we need to go. For those unwilling to give up on either individual rights or the popular will, Mounk shows, there is little time to waste: this may be our last chance to save democracy.

I liked the book so much that I’ve modified some of the slides in the presentation I’ll be giving myself this evening (Thursday 24th May), to include ideas from Mounk’s work.

One drawback of the book, however, is that the solutions it offers, although “worthy”, seem unlikely to stir sufficient popular engagement to become turned from idea into reality. I believe something bigger is needed.

14 May 2018

The key questions about UBIA

The first few times I heard about the notion of Universal Basic Income (UBI), I said to myself, that’s a pretty dumb idea.

Paying people without them doing any work is going to cause big problems for society, I thought. It’s going to encourage laziness, and discourage enterprise. Why should people work hard, if the fruits of their endeavour are taken away from them to be redistributed to people who can’t be bothered to work? It’s not fair. And it’s a recipe for social decay.

But since my first encounters with the idea of UBI, my understanding has evolved a long way. I have come to see the idea, not as dumb, but as highly important. Anyone seriously interested in the future of human society ought to keep abreast of the discussion about UBI:

  • What are the strengths and (yes) the weaknesses of UBI?
  • What alternatives could be considered, that have the strengths of UBI but avoid its weaknesses?
  • And, bearing in mind that the most valuable futurist scenarios typically involve the convergence (or clash) of several different trend analyses, what related ideas might transform our understanding of UBI?

For these reasons, I am hosting a day-long London Futurists event at Birkbeck College, Central London, on Saturday 2nd June, with the title “Universal Basic Income and/or Alternatives: 2018 update”.

The event is defined by the question,

What do we know, in June 2018, about Universal Basic Income and its alternatives (UBIA), that wasn’t known, or was less clear, just a few years ago?

The event website highlights various components of that question, which different speakers on the day will address:

  • What are the main risks and issues with the concept of UBIA?
  • How might the ideas of UBIA evolve in the years ahead?
  • If not a UBI, what alternatives might be considered, to meet the underlying requirements which have led many people to propose a UBI?
  • What can we learn from the previous and ongoing experiments in Basic Income?
  • What are the feasible systems (new or increased taxes, or other means) to pay for a UBIA?
  • What steps can be taken to make UBIA politically feasible?
  • What is a credible roadmap for going beyond a “basic” income towards enabling attainment of a “universal prosperity” by everyone?

As you can see from the event website, an impressive list of speakers have kindly agreed to take part. Here’s the schedule for the day:

09:30: Doors open
10:00: Chair’s welcome: The questions that deserve the most attention: David Wood
10:15: Opening keynote: Basic Income – Making it happen: Prof Guy Standing
11:00: Implications of Information Technology: Prof Joanna Bryson
11:30: Alternatives to UBI – Exploring the Possibilities: Rohit Talwar, Helena Calle and Steve Wells
12:15: Q&A involving all morning speakers
12:30: Break for lunch (lunch not provided)

14:00: Basic Income as a policy and a perspective: Barb Jacobson
14:30: Implications of Artificial Intelligence on UBIA: Tony Czarnecki
15:00: Approaching the Economic Singularity: Calum Chace
15:30: What have we learned? And what should we do next? David Wood
16:00-16:30: Closing panel involving all speakers
16:30: Event closes. Optional continuation of discussion in nearby pub

A dumb idea?

In the run-up to the UBIA 2018 event, I’ll make a number of blogposts anticipating some of the potential discussion on the day.

First, let me return to the question of whether UBI is a dumb idea. Viewing the topic from the angle of laziness vs. enterprise is only one possible perspective. As is often the case, changing your perspective provides much-needed insight.

Instead, let’s consider the perspective of “social contract”. Reflect on the fact that society already provides money to people who aren’t doing any paid work. There are basic pension payments for everyone (so long as they are old enough), basic educational funding for everyone (so long as they are young enough), and basic healthcare provisions for people when they are ill (in most countries of the world).

These payments are part of what is called a “social contract”. There are two kinds of argument for having a social contract:

  1. Self-interested arguments: as individuals, we might need to take personal benefit of a social contract at some stage in the future, if we unexpectedly fall on hard times. What’s more, if we fail to look after the rest of society, the rest of society might feel aggrieved, and rise up against us, pitchforks (or worse) in hand.
  2. Human appreciation arguments: all people deserve basic stability in their life, and a social contract can play a significant part in providing such stability.

What’s harder, of course, is to agree which kind of social contract should be in place. Whole libraries of books have been written on that question.

UBI can be seen as fitting inside a modification of our social contract. It would be part of what supporters say would be an improved social contract.

Note: although UBI is occasionally suggested as a replacement for the entirety of the current welfare system, it is more commonly (and, in my view, more sensibly) proposed as a replacement for only some of the current programmes.

Proponents of UBI point to two types of reason for including UBI as part of a new social contract:

  1. Timeless arguments – arguments that have been advanced in various ways by people throughout history, such as Thomas More (1516), Montesquieu (1748), Thomas Paine (1795), William Morris (1890), Bertrand Russell (1920), Erich Fromm (1955), Martin Luther King (1967), and Milton Friedman (1969)
  2. Time-linked arguments – arguments that foresee drastically changed circumstances in the relatively near future, which increase the importance of adopting a UBI.

Chief among the time-linked arguments is that the direct and indirect effects of profound technological change are likely to transform the work environment in unprecedented ways. Automation, powered by AI that is increasingly capable, may eat into more and more of the skills that we humans used to think were “uniquely human”. People who expected to earn money by doing various tasks may find themselves unemployable – robots will do these tasks more reliably, more cheaply, and with greater precision. People who spend some time retraining themselves in anticipation of a new occupation may find that, over the same time period, robots have gained the same skills faster than humans.

That’s the argument for growing technological unemployment. It’s trendy to criticise this argument nowadays, but I find the criticisms to be weak. I won’t repeat all the ins and outs of that discussion now, since I’ve covered them at some length in Chapter 4 of my book Transcending Politics. (An audio version of this chapter is currently available to listen to, free of charge, here.)

A related consideration talks, not about technological unemployment, but about technological underemployment. People may be able to find paid work, but that work pays considerably less than they expected. Alternatively, their jobs may have many rubbishy aspects. In the terminology of David Graeber, increasing numbers of jobs are “bullshit jobs”. (Graeber will be speaking on that very topic at the RSA this Thursday. At time of writing, tickets are still available.)

Yet another related concept is that of the precariat – people whose jobs are precarious, since they have no guarantee of the number of hours of work they may receive in any one week. People in these positions would often prefer to be able to leave these jobs and spend a considerable period of time training for a different kind of work – or starting a new business, with all the risks and uncertainties entailed. If a UBI were available to them, it would give them the stability to undertake that personal voyage.

How quickly will technological unemployment and technological underemployment develop? How quickly will the proportion of bullshit jobs increase? How extensive and socially dangerous will the precariat become?

I don’t believe any futurist can provide crisp answers to these questions. There are too many unknowns involved. However, equally, I don’t believe anyone can say categorically that these changes won’t occur (or won’t occur any time soon). My personal recommendation is that society needs to anticipate the serious possibility of relatively rapid acceleration of these trends over the next couple of decades. I’d actually put the probability of a major acceleration in these trends over the next 20 years as greater than 50%. But even if you assess the odds more conservatively, you ought to have some contingency plans in mind, just in case the pace quickens more than you expected.

In other words, the time-linked arguments in favour of exploring a potential UBI have considerable force.

As it happens, the timeless arguments may gain increased force too. If it’s true that the moral arc of history bends upwards – if it’s true that moral sensibilities towards our fellow humans increase over the passage of time – then arguments which at one time fell below society’s moral radar can gain momentum in the light of collective experience and deliberative reflection.

An impractical idea?

Many people who are broadly sympathetic to the principle of UBI nevertheless consider the concept to be deeply impractical. For example, here’s an assessment by veteran economics analyst John Kay, in his recent article “Basic income schemes cannot work and distract from sensible, feasible and necessary welfare reforms”:

The provision of a universal basic income at a level which would provide a serious alternative to low-paid employment is impossibly expensive. Thus, a feasible basic income cannot fulfil the hopes of some of the idea’s promoters: it cannot guarantee households a standard of living acceptable in a modern society, it cannot compensate for the possible disappearance of existing low-skilled employment and it cannot eliminate “bullshit jobs”. Either the level of basic income is unacceptably low, or the cost of providing it is unacceptably high. And, whatever the appeal of the underlying philosophy, that is essentially the end of the matter.

Kay offers this forthright summary:

Attempting to turn basic income into a realistic proposal involves the reintroduction of elements of the benefit system which are dependent on multiple contingencies and also on income and wealth. The outcome is a welfare system which resembles those that already exist. And this is not surprising. The complexity of current arrangements is not the result of bureaucratic perversity. It is the product of attempts to solve the genuinely difficult problem of meeting the variety of needs of low-income households while minimising disincentives to work for households of all income levels – while ensuring that the system established for that purpose is likely to sustain the support of those who are required to pay for it.

I share Piachaud’s conclusion that basic income is a distraction from sensible, feasible and necessary welfare reforms. As in other areas of policy, it is simply not the case that there are simple solutions to apparently difficult issues which policymakers have hitherto been too stupid or corrupt to implement.

Supporters of UBI have rebuttals to this analysis. Some of these rebuttals will no doubt be presented at the UBIA 2018 event on 2nd June.

One rebuttal seeks to rise above “zero sum” considerations. Injecting even a small amount of money into everyone’s hands can have “multiplier” effects, as that new money passes in turn through several people’s hands. One person’s spending is another person’s income, ready for them to spend in turn.
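
To make the multiplier logic concrete, here is the standard textbook simplification (an idealisation, which assumes that a constant fraction c of every payment received gets re-spent in the next round):

\[
\text{total spending} = B\,(1 + c + c^2 + c^3 + \cdots) = \frac{B}{1-c}
\]

So if, say, 80% of each payment is re-spent (c = 0.8), an initial injection B ends up supporting roughly five times that amount of total spending. Reality is messier, of course, but the direction of the effect is what matters for the argument here.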

Along similar lines, Professor Guy Standing, who will be delivering the opening keynote at UBIA 2018, urges readers of his book Basic Income: And How We Can Make It Happen to consider positive feedback cycles: “the likely impact of extra spending power on the supply of goods and services”. As he says,

In developing countries, and in low-income communities in richer countries, supply effects could actually lower prices for basic goods and services. In the Indian basic income pilots, villagers’ increased purchasing power led local farmers to plant more rice and wheat, use more fertilizer and cultivate more of their land. Their earnings went up, while the unit price of the food they supplied went down. The same happened with clothes, since several women found it newly worthwhile to buy sewing machines and material. A market was created where there was none before.

A similar response could be expected in any community where there are people who want to earn more and do more, alongside people wanting to acquire more goods and services to improve their living standard.

(I am indebted to Standing’s book for many other insights that have influenced my thinking and, indeed, points raised in this blogpost. It’s well worth reading!)

There’s a broader point that needs to be raised, about the “prices for basic goods and services”. Since a Basic Income needs to cover payments for these goods and services, two approaches are possible:

  1. Seek to raise the level of Basic Income payments
  2. Seek to lower the cost of basic goods and services.

I believe both approaches should be pursued in parallel. The same technologies of automation that pose threats to human employment also hold the promise for creating goods and services at significantly lower costs (and with higher quality). However, any such reduction in cost sits in tension with the prevailing societal focus on boosting economic prices (and increasing GDP). It is for this reason that we need a change of societal values as well as changes in the mechanics of the social contract.

The vision of goods and services having prices approaching zero is, by the way, sometimes called “the Star Trek economy”. Futurist Calum Chace – another of the UBIA 2018 speakers – addresses this topic in his provocatively titled book The Economic Singularity: Artificial intelligence and the death of capitalism. Here’s an extract from one of his blogposts, an “un-forecast” (Chace’s term) for a potential 2050 scenario, “Future Bites 7 – The Star Trek Economy”, featuring Lauren (born 1990):

The race downhill between the incomes of governments and the costs they needed to cover for their citizens was nerve-wracking for a few years, but by the time Lauren hit middle age it was clear the outcome would be good. Most kinds of products had now been converted into services, so cars, houses, and even clothes were almost universally rented rather than bought: Lauren didn’t know anyone who owned a car. The cost of renting a car for a journey was so close to zero that the renting companies – auto manufacturers or AI giants and often both – generally didn’t bother to collect the payment. Money was still in use, but was becoming less and less necessary.

As a result, the prices of most asset classes had crashed. Huge fortunes had been wiped out as property prices collapsed, especially in the hot-spot cities, but few people minded all that much as they could get whatever they needed so easily.

As you may have noticed, the vision of a potential future “Star Trek” economy is part of the graphic design for UBIA 2018.

I’ll share one further comment on the question of the affordability of UBI. Specifically, I’ll quote some comments made by Guardian writer Colin Holtz in the wake of the discovery of the extent of tax evasion revealed by the Panama Papers. The article by Holtz has the title “The Panama Papers prove it: America can afford a universal basic income”. Here’s an extract:

If the super-rich actually paid what they owe in taxes, the US would have loads more money available for public services.

We should all be able to agree: no one should be poor in a nation as wealthy as the US. Yet nearly 15% of Americans live below the poverty line. Perhaps one of the best solutions is also one of the oldest and simplest ideas: everyone should be guaranteed a small income, free from conditions.

Called a universal basic income by supporters, the idea has attracted support throughout American history, from Thomas Paine to Martin Luther King Jr. But it has also faced unending criticism for one particular reason: the advocates of “austerity” say we simply can’t afford it – or any other dramatic spending on social security.

That argument dissolved this week with the release of the Panama Papers, which reveal the elaborate methods used by the wealthy to avoid paying back the societies that helped them to gain their wealth in the first place…

While working and middle-class families pay their taxes or face consequences, the Panama Papers remind us that the worst of the 1% have, for years, essentially been stealing access to Americans’ common birthright, and to the benefits of our shared endeavors.

Worse, many of those same global elite have argued that we cannot afford to provide education, healthcare or a basic standard of living for all, much less eradicate poverty or dramatically enhance the social safety net by guaranteeing every American a subsistence-level income.

The Tax Justice Network estimates the global elite are sitting on $21–32tn of untaxed assets. Clearly, only a portion of that is owed to the US or any other nation in taxes – the highest tax bracket in the US is 39.6% of income. But consider that a small universal income of $2,000 a year to every adult in the US – enough to keep some people from missing a mortgage payment or skimping on food or medicine – would cost only around $563bn each year.

This takes us from the question of affordability to the question of political feasibility. Read on…

A politically infeasible idea?

A potential large obstacle to adopting UBI is that powerful entities within society will fight hard against it, being opposed to any idea of increased taxation and a decline in their wealth. These entities don’t particularly care that the existing social contract provides a paltry offering to the poor and precarious in society – or to those “inadequates” who happen to lose their jobs and their standing in the economy. The existing social contract provides them personally (and those they consider their peers) with a large piece of the cake. They’d like to keep things that way, thank you very much.

They defend the current setup with ideology. The ideology states that they deserve their current income and wealth, on account of the outstanding contributions they have made to the economy. They have created jobs, or goods, or services of one sort or another, that the marketplace values. And no-one has any right to take their accomplishments away from them.

In other words, they defend the status quo with a theory of value. In order to overcome their resistance to UBIA, I believe we’ll need to tackle this theory of value head on, and provide a better theory in its place. I’ll pick up that thread of thought shortly.

But an implementation of UBI doesn’t need to happen “big bang” style, all at once. It can proceed in stages, starting with a very low level, and (all being well) ramping up from there in phases. The initial payment from UBI could be funded from new types of tax that would, in any case, improve the health of society:

  • A tax on financial transactions (sometimes called a “Tobin tax”) – that will help to put the brakes on accelerated financial services taking place entirely within the financial industry (without directly assisting the real economy)
  • A “Greenhouse gas tax” (such as a “carbon tax”) on activities that generate greenhouse gas pollution.

Continuing the discussion

The #ubia channel in the newly created London Futurists Slack workspace awaits comments on this topic. For a limited time, members and supporters of London Futurists can use this link to join that workspace.

5 May 2018

Humans: The solution, or the problem?

Filed under: Transcending Politics — Tags: , , , — David Wood @ 3:33 pm

Silicon Valley seems to think that we’re somehow going to compensate for humanity’s faults with digital technologies. I don’t think humans are obsolete. I don’t think humans are the problem, I think humans are the solution.

These words reached my inbox earlier today, as part of a Nesta interview with technology writer Douglas Rushkoff.

The sentiment expressed in these words strikes me as naive – dangerously naive.

Any worldview that ignores the problematic aspects of human nature risks unwittingly enabling the magnification of these flaws, as technology puts ever more power in our hands.

Think of the way that Fox News, with the support of a network of clever social media agitators, has been magnifying many of the uglier human inclinations – resulting in the human calamity of Trumpistan. That’s an example of what can happen if the flaws within humanity aren’t properly handled. It’s an example of twenty-first century technology making human problems worse.

Just because we can, correctly, assess humans as having a great deal of positive potential, it doesn’t follow that we should become blind to the harmful tendencies that coexist with our favourable ones – and which (if we’re not careful) might overwhelm them.

Here are some examples of our harmful tendencies:

  • Abuse of power: we humans are often too ready to exploit the power we temporarily hold, for example in personal relationships with subordinates or colleagues
  • Confirmation bias: we divert our attention from information that would challenge or negate our own pet theories or the commonly accepted paradigms of our culture; we clutch at any convenient justification for ignoring or distorting such information
  • Dysfunctional emotions: we are prone to being dominated by emotional spasms – of anger, self-righteousness, possessiveness, anxiety, despair, etc – to the extent that we are often unable to act on our better judgements
  • Overconfidence: we tend to assess ourselves as having above-average abilities; we also often assume that our core beliefs are more likely to be true than an objective evaluation would suggest
  • In-group preference: we are liable to prejudice in favour of people who seem “like us” (by whatever criteria), and against people who appear to fall outside our group; this drives unnecessary conflict, and can also mean we miss the best opportunities
  • Inertia: we cling onto possessions, habits, and processes that have served us well in the past, and which might conceivably be useful to us at some time in the future, even if these attachments reduce our room for manoeuvre or damage our openness to new experiences
  • Herd mentality: we too readily fall into line with what we perceive our peers are thinking or doing, even though our conscience is telling us that a different path would be better
  • Loss of perspective: we fail to pay attention to matters that should be of long-term importance to us, and instead become dominated by grudges, personal vindictiveness, fads, and other distractions.

Many of these characteristics are likely to have bestowed some evolutionary advantage on our ancestors, in the very different circumstances in which they lived – similar to the way that a sweet tooth made good sense in prehistoric times. These characteristics are far less useful in today’s world, with its vastly increased complexity and connectivity, where individual mistakes can be magnified onto a global scale.

Other characteristics on the list probably never had much direct utility, but they existed as side-effects of yet other character traits that were themselves useful. Evolution was constrained in terms of the character sets it could create; it lacked complete flexibility. However, we humans possess a much greater range of engineering tools. That opens the way for the conscious, thoughtful re-design of our character set.

The project described in the article that caught my attention this morning – the “Team Human” project – needs in my view to be more open to what some in Silicon Valley are proposing (but which the article scorns), namely the use of technology to assist:

  • The strengthening of positive human tendencies
  • The taming of negative human tendencies.

Of course, technology cannot do these things by itself. But it can, very definitely, be part of the solution. Some examples:

  • Education of all sorts can be enhanced by technology such as interactive online video courses that adapt their content to the emerging needs of each different user
  • Vivid experiences within multi-sensory virtual reality worlds can bring home to people the likely consequences of their current personal trajectories (from both first-person and third-person points of view), and allow them to rehearse changes in attitude
  • The reasons why meditation, yoga, and hypnosis can have beneficial results are now more fully understood than before, enabling major improvements in the efficacy of these practices
  • Prompted by alerts generated by online intelligent assistance software, real-world friends can connect at critical moments in someone’s life, in order to provide much-needed personal support
  • Information analytics can resolve some of the long-running debates about which diets – and which exercise regimes – are the ones that will best promote all-round health for given individuals.

And there are some more radical possibilities:

  • New pharmacological compounds – sometimes called “smart drugs”
  • Gentle stimulation of the brain by a variety of electromagnetic methods – something that has been trialled by the US military
  • Alteration of human biology more fundamentally, by interventions at the genetic, epigenetic, or microbiome level
  • The use of intelligent assistance software that monitors our actions and offers us advice in a timely manner, similar to the way that a good personal friend will occasionally volunteer wise counsel; intelligent assistants can also strengthen our positive characteristics by wise selection of background music, visual imagery, and “thought for the day” aphorisms to hold in mind.

What I’m describing here is the vision of transhumanism – the vision that humanity can and should take wise and profound advantage of technology to transcend the damaging limitations and drawbacks imposed by the current circumstances of human nature. As a result, humans will be able to transition, individually and collectively, towards a significantly higher stage of life – a life with much improved quality.

And here’s a formulation from 1990 by the founder of the modern transhumanist movement, philosopher Max More:

Transhumanism is a class of philosophies of life that seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values.

Any attempt to “reprogram society to better serve humans” that fails to follow this transhumanist advice – any project that turns its back on the radical transformational potential of science and technology – is leaving itself dangerously underpowered.

In short: the journey to a healthier society inevitably involves transhumanism. Without transhumanism, Team Human isn’t going to make it.

Note: For a fuller examination of the ideas in this blogpost, see my recent book Transcending Politics, especially Chapter 12, “Humans and Superhumans”, and Chapter 1, “Vision and roadmap”.

Picture source: TheDigitalArtist and JoeTheStoryTeller.
