dw2

20 July 2018

Christopher Columbus and the surprising future of AI

Filed under: AGI, predictability, Singularity — David Wood @ 5:49 pm

There are plenty of critics who are sceptical about the future of AI. The topic has been over-hyped, these critics say, and we don’t need to worry about the longer-term repercussions of AI with superhuman capabilities: we’re many decades – perhaps centuries – from anything approaching AGI (artificial general intelligence) with common-sense reasoning skills matching (or surpassing) those of humans. As for AI destroying jobs, that, too, is a false alarm – or so the critics insist: AI will create at least as many jobs as it destroys.

In my previous blog post, Serious questions over PwC’s report on the impact of AI on jobs, I offered some counters to these critics. To my mind, this is no time for complacency: AI could accelerate in its capabilities, and take us by surprise. The kinds of breakthroughs that, in a previous era, might have been expected to take many decades could actually take place in just a few short years. Rather than burying our heads in the sand and denying the possibility of any such acceleration, we need to pay more attention to the trends of technological change and the potential for disruptive new innovations.

The Christopher Columbus angle

Overnight, I’ve been reminded of an argument that I’ve used previously – towards the end of a rather long blogpost. It’s the argument that critics of the future of AI are similar to the critics of Christopher Columbus – the people who said, before his 1492 voyage across the Atlantic in search of a westerly route to Asia, that the effort was bound to be a bad investment.

Bear with me while I retell this analogy.

For years, Columbus tried to drum up support for what most people considered to be a hare-brained scheme. Most observers concluded that Columbus had fallen victim to a significant mistake – he estimated that the distance from the Canary Islands (off the coast of Morocco) to Japan was around 3,700 km, whereas the generally accepted figure was closer to 20,000 km. Indeed, the true circumference of the Earth had been known since the 3rd century BC, thanks to a calculation by Eratosthenes based on observations of shadows at different locations.
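As an aside, Eratosthenes’ reasoning is simple enough to fit in a few lines of Python. Here is a minimal sketch, using the commonly quoted ancient figures – treat the exact numbers as illustrative assumptions, since even the length of the stadion is uncertain:

    # Eratosthenes' method: at noon on the summer solstice, the sun was directly
    # overhead at Syene, while at Alexandria it cast shadows at about 7.2 degrees.
    # That angle is the fraction of the full circle spanned by the arc between
    # the two cities, so scaling up their distance gives the circumference.
    shadow_angle_deg = 7.2        # about 1/50th of a full circle
    alexandria_to_syene = 5000    # distance in stadia (the ancient estimate)

    circumference_stadia = (360 / shadow_angle_deg) * alexandria_to_syene
    print(circumference_stadia)   # ≈ 250,000 stadia

Depending on which length is assumed for the stadion, 250,000 stadia comes out remarkably close to the modern figure of roughly 40,000 km.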

Accordingly, when Columbus presented his bold proposal to courts around Europe, the learned members of the courts time and again rejected the idea. The voyage would be vastly longer than Columbus supposed, they said. It would be a fruitless endeavour.

Columbus, an autodidact, wasn’t completely crazy. He had done a lot of his own research. However, he was misled by a number of factors:

  • Confusion between various ancient units of distance (the “Arabic mile” and the “Roman mile”)
  • An overestimate of how many degrees of longitude the Eurasian landmass occupied (225 degrees versus 150 degrees)
  • A speculative 1474 map, by the Florentine astronomer Toscanelli, which showed a mythical island “Antillia” located to the east of Japan (named as “Cipangu” on the map).

You can read the details in the Wikipedia article on Columbus, which provides numerous additional reference points. The article also contains a copy of Toscanelli’s map, with the true location of the continents of North and South America superimposed for reference.

No wonder Columbus thought his plan might work after all. Nevertheless, the 1490s equivalents of today’s VCs kept saying “No” to his pitches. Finally, spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to take the risk of supporting his adventure. After stopping in the Canaries to restock, the Niña, the Pinta, and the Santa Maria set off westward. Five weeks later, the crew spotted land, in what we now call the Bahamas. And the rest is history.

But it wasn’t the history expected by Columbus, or by his backers, or by his critics. No-one had foreseen that a huge continent existed in the ocean between Europe and Japan. None of the ancient writers – either secular or religious – had spoken of such a continent. Nevertheless, once Columbus had found it, the history of the world proceeded in a very different direction – including mass deaths from infectious diseases transmitted from the European sailors, genocide and cultural apocalypse, and enormous trade in both goods and slaves. In due course, it would be the ingenuity and initiative of people subsequently resident in the Americas that propelled humans beyond the Earth’s atmosphere all the way to the moon.

What does this have to do with the future of AI?

Rational critics may have ample justification for thinking that true AGI lies many decades in the future. But that does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

Just as the contemporaries of Columbus erred in presuming they already knew all the large features of the Earth’s continents (after all: if America really existed, surely God would have written about it in the Bible…), modern-day critics of AI can err in presuming they already know all the large features of the landscape of possible artificial minds.

When contemplating the space of all possible minds, some humility is in order. We cannot foretell in advance what configurations of intelligence are possible. We don’t know what may happen if separate modules of reasoning are combined in innovative ways. After all, many aspects of how the human mind operates remain poorly understood.

When critics say that it is unlikely that present-day AI mechanisms will take us all the way to AGI, they are very likely correct. But it would be a horrendous error to conclude that meaningful new continents of AI capability are inevitably still the equivalent of 20,000 km away. The fact is, we simply don’t know. And for that reason, we should keep an open mind.

One day soon, indeed, we might read news of some new “AUI” having been discovered – some Artificial Unexpected Intelligence, which changes history. It won’t be AGI, but it could have all kinds of unexpected consequences.

Beyond the Columbus analogy

Every analogy has its drawbacks. Here are three ways in which the discovery of an AUI could be different from the discovery by Columbus of America:

  1. In the 1490s, there was only one Christopher Columbus. Nowadays, there are scores (perhaps hundreds) of schemes underway to try to devise new models of AI. Many of these are proceeding with significant financial backing.
  2. Whereas progress on the journey across the Atlantic (and, eventually, the Pacific) could be measured by a single variable (longitude), the journey across the vast multidimensional landscape of artificial minds is much less predictable. That’s another reason to keep an open mind.
  3. Discovering an AUI could drastically transform the future of exploration in the landscape of artificial minds. Assisted by AUI, we might get to AGI much quicker than without it. Indeed, in some scenarios, it might take only a few months after we reach AUI for us (now going much faster than before) to reach AGI. Or days. Or hours.

Footnote

If you’re in or near Birmingham on 11th September, I’ll be giving a Funzing talk on how to assess the nature of the risks and opportunities from superhuman AI. For more details, see here.

 

19 July 2018

Serious questions over PwC’s report on the impact of AI on jobs

Filed under: politics, robots, UBI, urgency — David Wood @ 7:47 pm

A report (PDF) issued on Tuesday by consulting giant PwC has received a lot of favourable press coverage.

Here’s PwC’s own headline summary: “AI and related technologies should create as many jobs as they displace”:

AI and related technologies such as robotics, drones and driverless vehicles could displace many jobs formerly done by humans, but will also create many additional jobs as productivity and real incomes rise and new and better products are developed.

We estimate that these countervailing displacement and income effects on employment are likely to broadly balance each other out over the next 20 years in the UK, with the share of existing jobs displaced by AI (c.20%) likely to be approximately equal to the additional jobs that are created…

BBC News picked up the apparent good news: “AI will create as many jobs as it displaces – report”:

A growing body of research claims the impact of AI automation will be less damaging than previously thought.

Forbes chose this headline: “AI Won’t Kill The Job Market But Keep It Steady, PwC Report Says”:

It’s impossible to say precisely how artificial intelligence will disrupt the job market, so researchers at PwC have taken a bird’s-eye view and pointed to the results of sweeping economic changes.

Their prediction, in a new report out Tuesday, is that it will all balance out in the end.

PwC are to be commended for setting out their reasoning clearly, over 16 pages (p36-p51) in their PDF report.

But three major questions need to be raised about their analysis. These questions throw a different light on the conclusions of the report.

A diagram in the report captures the essence of PwC’s model: jobs lost through the “displacement effect” of automation, offset by jobs gained through the “income effect” as productivity and real incomes rise.

Q1: How will firms handle the “income effect”?

I agree that automation is likely to generate significant amounts of additional profits, as well as market demand for extra goods and services.

But what’s the reason for assuming that firms will “hire more workers” in response to this demand?

Mightn’t it be more financially attractive to these companies to incorporate more automation instead? Mightn’t more robots be a better investment than more human workers?

The justification for thinking that there will be plenty of new jobs for humans in this scenario is the assumption that many tasks will remain outside the capability of automation. That is, the analysis depends on humans having skills which cannot be duplicated by AIs, software, robots, or other automation. The assumption is true today, but will it remain true over the next two decades?
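To make the firms’ choice concrete, here’s a deliberately simple sketch. Every number and name in it is invented for illustration; nothing here comes from the PwC report:

    # A toy comparison of the cost of meeting extra demand with a worker
    # versus a machine, over some planning horizon (all figures invented).
    def cheaper_option(annual_wage, robot_capex, robot_annual_opex, years):
        human_cost = annual_wage * years
        robot_cost = robot_capex + robot_annual_opex * years
        return "hire a worker" if human_cost < robot_cost else "buy a robot"

    # With these made-up numbers, automation wins over a ten-year horizon:
    print(cheaper_option(annual_wage=30_000, robot_capex=150_000,
                         robot_annual_opex=5_000, years=10))

The point is simply that as the capital and running costs of automation fall over time, the same demand signal that PwC expects to create jobs can instead tip firms towards yet more automation.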

PwC’s report points to sectors such as healthcare, social work, education, and science as areas where jobs are likely to grow over the next twenty years. But that takes us to the second major question.

Q2: What prevents acceleration in the capabilities of AI?

PwC’s report, like many others produced by mainstream consultancies, basically assumes that the AI of 10-15 years’ time will be a simple extension of today’s AI.

Of course, no one knows for sure how AI will develop over the years ahead. But I see it as irresponsible to neglect scenarios in which AI progresses in leaps and bounds.

Just as the whole field of AI was given a huge shot in the arm by unexpected breakthroughs in the performance of deep learning from around 2012 onwards, we should be open to the possibility of additional breakthroughs in the years ahead, enabled by a combination of the following trends:

  • Huge commercial prizes are awaiting the companies that can improve their AI capabilities
  • Huge military prizes are awaiting the countries that can improve their AI capabilities
  • More developers, entrepreneurs, designers, and systems integrators are active in AI than ever before, exploring an incredible variety of different concepts
  • Increased knowledge of how the human brain operates is being fed into ideas for how to improve AI
  • Cheaper hardware, including easy access to vast cloud computing resources, means that investigations of novel AI models can take place more quickly than before
  • AI can be used to improve some of its own capabilities, in positive feedback loops, and in new “generative adversarial” settings
  • Hardware innovations including new chipset designs and quantum computing could turn today’s crazy ideas into tomorrow’s practical realities.

Today’s AI already shows considerable promise in fields such as transfer learning, artificial creativity, the detection and simulation of emotions, and concept formulation. How quickly will progress occur? My view: slowly, and then quickly.
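To see why “slowly, and then quickly” is a natural pattern when a technology feeds back into its own improvement, consider this toy simulation. All of the parameters are invented for illustration – it’s a caricature of a positive feedback loop, not a forecast:

    # A toy positive feedback loop: each cycle, the system gets a little better,
    # and being better lets it improve itself slightly faster the next cycle.
    capability = 1.0
    improvement_rate = 0.01       # fraction gained per cycle, initially

    for cycle in range(1, 101):
        capability *= 1 + improvement_rate
        improvement_rate *= 1.05  # feedback: the improvement itself improves
        if cycle % 20 == 0:
            print(f"cycle {cycle:3d}: capability {capability:14.2f}")

For dozens of cycles the numbers barely move; then the compounding takes over and the curve bends sharply upwards.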

Q3: How might the “displacement effect” be altered?

In parallel with rating the income effect much more highly than I think is prudent, the PwC analysis offers, in my view, some dubious reasoning for lowering the displacement effect:

Although we estimate that up to 30% of existing UK jobs could be at high risk of being automated, a job being at “high risk” of being automated does not mean that it will definitely be automated, as there could be a range of economic, legal and regulatory and organisational barriers to the adoption of these new technologies…

We think it is reasonable to scale down our estimates by a factor of two thirds to reflect these barriers, so our central estimate of the proportion of existing jobs that will actually be automated over the next 20 years is reduced to 20%.
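(“Scale down by a factor of two thirds” is ambiguous on its own – multiply by 2/3, or remove 2/3? – but PwC’s own numbers make clear which is meant:)

    # PwC's central estimate: the 30% "high risk" share, multiplied by 2/3
    # to allow for economic, legal, regulatory and organisational barriers.
    at_risk_share = 0.30
    barrier_factor = 2 / 3
    print(round(at_risk_share * barrier_factor, 2))   # 0.2 -- the central 20%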

Yes, a whole panoply of human factors can alter the speed of the take-up of new technology. But such factors aren’t always brakes. In some circumstances – as perceptions change – they can become accelerators.

Consider what happens if companies in one country (e.g. the UK) are slow to adopt some new technology, while rival companies overseas act more quickly. Declining competitiveness will be one reason for the mindset to change.

A different example: attitudes towards interracial marriages, or towards same-sex marriages, changed slowly for a long time, until they started to change faster.

Q4: What are the consequences of negligent forecasting?

Here’s a bonus question. Does it really matter if PwC get these forecasts wrong? Or is it better to err on the conservative side?

I imagine PwC consultants reasoning along the following lines. Let’s avoid panic. Changes in the job market are likely to be slow, at least in the shorter term. Provided that remains the case, the primary pieces of policy advice offered in the report make sense:

Government should invest more in ‘STEAM’ skills that will be most useful to people in this increasingly automated world.

Place-based industrial strategy should target job creation.

The report follows up these recommendations with a different kind of policy advice:

Government should strengthen the safety net for those who find it hard to adjust to technological changes.

But the question is: how much attention should be given, in relative terms, to these two different kinds of advice? Should society put more effort into new training programmes, or into redesigning the prevailing social contract?

So long as the impact of automation on the job market is relatively small, perhaps less effort is needed to work on a better social safety net. But if the impact could be significantly higher – well, many people find that prospect too frightening to contemplate. Hence the desire to sweep such ideas under the carpet – similar to how polite society once avoided using the word “cancer”.

My own view is that the balance of emphasis in the PwC report is the wrong way round. Society urgently needs to anticipate new structures (and new philosophies) that cope with large proportions of the workforce no longer being able to earn income from paid employment.

That’s the argument I made, for example, in my opening remarks at the recent London Futurists conference on UBIA (Universal Basic Income and/or Alternatives) – and I took the time at the end of the event to back up my assertions with a wider analysis.

To be clear, I see many big challenges in working out how a new post-work social contract will operate – and how society can transition from our present system to this new one. But the fact that these tasks are hard is all the more reason to look at them calmly and carefully. Obscuring the need for these tasks under a flourish of proposals to increase ‘STEAM’ skills and improve apprentice schemes is, sadly, irresponsible.
