10 October 2015

Technological unemployment – Why it’s different this time

On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants became embroiled in questions such as:

  • To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
  • To what extent should humans be anxious about this process?

The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:

Channel 4 “Humans” advertising hoarding

“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed at the way their roles are being usurped by these new-fangled entities.

Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.

I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms that are developed with one set of skills in mind turn out to fit, reasonably straightforwardly, into other sets of skills as well.

The master algorithm

That argument is spelt out in the recent book “The master algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.

The Master Algorithm (book cover)

The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:

  • The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
  • The universal learning machine accepts a set of corresponding input and output data, and does the best possible job of inferring the algorithm that would obtain the outputs from the inputs.

For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
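
To make that contrast concrete, here is a minimal sketch of the “infer the algorithm from examples” idea. The task (temperature conversion) and the use of scikit-learn’s LinearRegression are my own illustrative choices, not anything taken from Domingos’ book:

    # A toy illustration: the hidden "algorithm" being learned is Fahrenheit-to-Celsius
    # conversion. The learner is never shown the formula, only example (input, output)
    # pairs, and infers the mapping from them.
    from sklearn.linear_model import LinearRegression

    inputs = [[32.0], [68.0], [104.0], [212.0]]   # Fahrenheit readings
    outputs = [0.0, 20.0, 40.0, 100.0]            # what the hidden algorithm produced (Celsius)

    model = LinearRegression()
    model.fit(inputs, outputs)                    # infer the mapping from the examples

    print(model.predict([[50.0]]))                # ~[10.0]: the learner has recovered C = (F - 32) * 5/9
    print(model.coef_, model.intercept_)          # ~[0.5556], ~-17.78

Real machine learning systems tackle far richer mappings than this straight line, of course, but the shape of the exercise – examples in, inferred algorithm out – is the same.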

As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:

  • “Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
  • “Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
  • “Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
  • “Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
  • “Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.

(See slide 6 of this Slideshare presentation. Indeed, take the time to view the full presentation. Better still, read Domingos’ entire book.)
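
For a small hands-on taste of just one of those tribes, the sketch below applies a kernel machine – the Analogizers’ core algorithm – to the classic XOR problem, which no straight-line separator can solve. The dataset and parameter values are my own illustrative choices, not anything prescribed in the book:

    # XOR: four points whose labels cannot be separated by any single straight line.
    # A kernel machine classifies by similarity to stored examples, so it copes easily.
    from sklearn.svm import SVC

    X = [[0, 0], [0, 1], [1, 0], [1, 1]]          # the four XOR inputs
    y = [0, 1, 1, 0]                              # XOR labels

    clf = SVC(kernel="rbf", gamma=10.0, C=10.0)   # RBF kernel judges points by similarity
    clf.fit(X, y)

    print(clf.predict(X))                         # [0 1 1 0] -- the kernel machine fits XOR

Each of the other tribes would attack the same problem with its own core algorithm; the promise of a master algorithm is that a single learner would subsume all of these approaches.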

What’s likely to happen over the next decade, or two, is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made in physics as the fundamental forces of nature have gradually been unified into a single theory.

And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.

Incidentally, Domingos has set out some ambitious goals for what his book will accomplish:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.

Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.

People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.

Rise of the robots

There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation.

That’s one of the arguments made by perhaps the best writer so far on technological unemployment, Martin Ford. Ford’s recent book “Rise of the Robots: Technology and the Threat of a Jobless Future” builds ably on what previous writers have said.

Rise of the Robots (book cover)

Here’s a sample of review comments about Ford’s book:

Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument.
Los Angeles Times

If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent.
Financial Times, Summer books 2015, Business, Andrew Hill

[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right.
Daily Beast

Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines.
Fast Company

Well-researched and disturbingly persuasive.
Financial Times

Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future.
—Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think

Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read.
—Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes

If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.

I summarise the takeaway message in this picture, taken from one of my Delta Wisdom workshop presentations:

Tech unemployment curves

  • Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
  • However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
  • Coupled with the general purpose nature of these capabilities, this means that, conceivably, from some time around 2040, very few humans will be able to find paid work (see the toy crossover sketch below).
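
The sketch below is a toy calculation of when a compounding improvement curve overtakes a roughly linear one. The starting values and growth rates are invented purely to show the shape of the argument, not to forecast a date:

    # A toy model of the crossover argument above. The starting capabilities and growth
    # rates are invented purely to show the shape of the curves; this is not a forecast.
    # With these particular made-up numbers, the crossover lands in the late 2030s.
    human_capability = 100.0
    machine_capability = 5.0
    human_gain_per_year = 1.0        # roughly linear improvement through retraining
    machine_growth_rate = 0.15       # compounding (exponential) improvement

    for year in range(2015, 2061):
        if machine_capability >= human_capability:
            print(f"Crossover in this toy model: {year}")   # prints 2038 with these numbers
            break
        human_capability += human_gain_per_year
        machine_capability *= 1 + machine_growth_rate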

A worked example: a site carpenter

During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?

One suggestion that came back was “site carpenter”. In this thinking, unfinished buildings are too complex, and too difficult for robots to navigate. Robots that try to make their way through these buildings to tackle carpentry tasks will likely fall down. Or, assuming they don’t fall down, how will they cope with finding that the reality inside the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be. Or have different tolerances. Or alternatives have been used. Etc. Such environments, the argument goes, are too messy for robots to handle.

My answer is as follows. Yes, present-day robots often do fall down. Critics seem to find this hilarious. But this is much like the way young children often fall down while learning to walk, or novice skateboarders often fall down when unfamiliar with this mode of transport. However, robots will learn fast. One example is shown in this video of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):

As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically all the time. When software encounters information at variance with what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses and, when needed, checking back with management for confirmation.
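
As a sketch of what “adjusting its planned course of action” can look like in code – with every name, threshold and number below invented for illustration, rather than taken from any real robotics system – consider:

    # A hypothetical carpentry-robot planner: compare what is measured against what the
    # specification expects, keep a confidence estimate, and either adapt or escalate.
    SPEC_DOOR_WIDTH_MM = 826           # what the building plan says the opening should be
    TOLERANCE_MM = 5                   # acceptable deviation before re-planning
    MIN_CONFIDENCE = 0.6               # below this, check back with a human supervisor

    def plan_next_action(measured_width_mm, sensor_confidence):
        """Decide what the (hypothetical) carpentry robot should do next."""
        if sensor_confidence < MIN_CONFIDENCE:
            return "ask supervisor to confirm the measurement"
        deviation = abs(measured_width_mm - SPEC_DOOR_WIDTH_MM)
        if deviation <= TOLERANCE_MM:
            return "proceed with the original plan"
        # Reality differs from the spec: form a new hypothesis and adjust the plan
        return f"re-plan: fit the frame to the {measured_width_mm:.0f} mm opening actually found"

    print(plan_next_action(827.0, 0.9))   # proceed with the original plan
    print(plan_next_action(841.0, 0.9))   # re-plan for the 841 mm opening actually found
    print(plan_next_action(841.0, 0.4))   # ask supervisor to confirm the measurement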

The question is a reminder to me that the software and AI community needs to do a much better job of communicating the current capabilities of the field, and the likely improvements ahead.

What does it mean to be human?

For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:

  • Should these changes be welcomed, rather than feared?
  • What will these forthcoming changes imply for our conception of what it means to be human?

To my mind, technological unemployment will force us to rethink some of the fundamentals of the “protestant work ethic” that permeates society. That ethic has played a decisive positive role for the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.

If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.

Some of these topics were aired at a conference in New York City on 29th September: “The World Summit on Technological Unemployment”, which was run by Jim Clark’s World Technology Network.

Robotic Steel Workers

One of the many speakers at that conference, Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:

We are not facing a future without work. We are facing a future without jobs.

There is a huge difference between the two, and we must start seeing the difference, and making the difference more clear to each other.

In his blogpost “Jobs, Work, and Universal Basic Income”, Santens continues the argument as follows:

When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.

Put another way, extrinsically motivated work is work done before or after an expected payment. It’s an exchange. Intrinsically motivated work is work only made possible by sufficient access to money. It’s a gift.

The difference between these two forms of work cannot be overstated…

Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…

What would enable this transformation is some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.

What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy who chaired the event:

The RSA will be advocating support for Basic Income… in response to Technological Unemployment.

(This comment comes about 2/3 of the way through the podcast.)

To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But these problems of transition are far better problems to have than dealing with the consequences of vastly increased unemployment and social alienation.

Life is being redefined

Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.

Janna Anderson keynote

You can find Anderson’s article here. It starts as follows:

The Robot Takeover is Already Here

The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.

It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots —’black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.

The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend a lot longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.

By way of summary, I’ll pull out a few sentences from the middle of the article:

One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…

It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…

[Another] thing is certain: Life is being redefined.

Who controls the robots?

Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action.

That’s a theme I’ll be addressing next Sunday, 18th October, at a lunchtime session of the 2015 Battle of Ideas. The session is entitled “Man vs machine: Who controls the robots”.

robots

Here’s how the session is described:

From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.

On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…

Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?

As you can imagine, I’ll be taking serious issue with the above claim, from the session description, that progress with robots will “remain relatively modest”. However, I’ll also be arguing for a strong focus on questions of control.

It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?

In line with my recent essay “Four political futures: which will you choose?”, I’ll be arguing for a technoprogressive approach to control, rather than a technolibertarian one.

Four futures

I wait with interest to find out how much this viewpoint will be shared by the other speakers at this session:

28 April 2015

Why just small fries? Why no big potatoes?

Filed under: innovation, politics, Transpolitica, vision — David Wood @ 3:12 pm

Last night I joined a gathering known as “Big Potatoes”, for informal discussion over dinner at the De Santis restaurant in London’s Old Street.

The potatoes in question weren’t on the menu. They were the potential big innovations that politicians ought to be contemplating.

The Big Potatoes group has a tag-line: “The London Manifesto for Innovation”.

As their website states,

The London Manifesto for Innovation is a contribution to improving the climate for innovation globally.

The group first formed in the run-up to the previous UK general election (2010). I blogged about them at that time, here, when I listed the principles from their manifesto:

  • We should “think big” about the potential of innovation, since there’s a great deal that innovation can accomplish;
  • Rather than “small is beautiful” we should keep in mind the slogan “scale is beautiful”;
  • We should seek more than just a continuation of the “post-war legacy of innovation” – that’s only the start;
  • Breakthrough innovations are driven by new technology – so we should prioritise the enablement of new technology;
  • Innovation is hard work and an uphill struggle – so we need to give it our full support;
  • Innovation arises from pure scientific research as well as from applied research – both are needed;
  • Rather than seeking to avoid risk or even to manage risk, we have to be ready to confront risk;
  • Great innovation needs great leaders of innovation, to make it happen;
  • Instead of trusting regulations, we should be ready to trust people;
  • Markets, sticks, carrots and nudges are no substitute for what innovation itself can accomplish.

That was 2010. What has caused the group to re-form now, in 2015, is the question:

Why is so much of the campaigning for the 2015 election preoccupied with small fries, when it could – and should – be concentrating on big potatoes?

Last night’s gathering was facilitated by three of the writers of the 2010 Big Potatoes manifesto: Nico Macdonald, James Woudhuysen, and Martyn Perks. The Chatham House rules that were in place prevent me from quoting directly from the participants. But the discussion stirred up plenty of thoughts in my own mind, which I’ll share now.

The biggest potato

I share the view expressed by renowned physicist Freeman Dyson, in the book “Infinite in all directions” from his 1985 Gifford lectures:

Technology is… the mother of civilizations, of arts, and of sciences

Technology has given rise to enormous progress in civilization, arts and sciences over recent centuries. New technology is poised to have even bigger impacts on civilization in the next 10-20 years. So why aren’t politicians paying more attention to it?

MIT professor Andrew McAfee takes up the same theme, in an article published in October last year:

History teaches us that nothing changes the world like technology

McAfee spells out a “before” and “after” analysis. Here’s the “before”:

For thousands of years, until the middle of the 18th century, there were only glacial rates of population growth, economic expansion, and social development.

And the “after”:

Then an industrial revolution happened, centred around James Watt’s improved steam engine, and humanity’s trajectory bent sharply and permanently upward

One further quote from McAfee’s article rams home the conclusion:

Great wars and empires, despots and democrats, the insights of science and the revelations of religion – none of them transformed lives and civilizations as much as a few practical inventions

Inventions ahead

In principle, many of the grave challenges facing society over the next ten years could be solved by “a few practical inventions”:

  • Students complain, with some justification, about the costs of attending university. But technology can enable better MOOCs – Massive Open Online Courses – that can deliver high quality lectures, removing significant parts of the ongoing costs of running universities; free access to such courses can do a lot to help everyone re-skill as new occupational challenges arise
  • With one million people losing their lives to traffic accidents worldwide every year, mainly caused by human driver error, we should welcome the accelerated introduction of self-driving cars
  • Medical costs could be reduced by greater application of the principles of preventive maintenance (“a stitch in time saves nine”), particularly through rejuvenation biotechnology and healthier diets
  • A sustained green tech new deal should push society away from dependency on fuels that emit dangerous amounts of greenhouse gases, resulting in lifestyles that are positive for the environment as well as positive for humanity
  • The growing costs of governmental bureaucracy itself could be reduced by whole-heartedly embracing improved information technology and lean automation.

Society has already seen remarkable changes in the last 10-20 years as a result of rapid progress in fields such as electronics, computers, digitisation, and automation. In each case, the description “revolution” is appropriate. But even these revolutions pale in significance to the changes that will, potentially, arise in the next 10-20 years from extraordinary developments in healthcare, brain sciences, atomically precise manufacturing, 3D printing, distributed production of renewable energy, artificial intelligence, and improved knowledge management.

Indeed, the next 10-20 years look set to witness four profound convergences:

  • Between artificial intelligence and human intelligence – with next generation systems increasingly embodying so-called “deep learning”, “hybrid intelligence”, and even “artificial emotional intelligence”
  • Between machine and human – with smart technology evolving from “mobile” to “wearable” and then to “insideable”, and with the emergence of exoskeletons and other cyborg technology
  • Between software and biology – with programming moving from silicon (semiconductor) to carbon (DNA and beyond), with the expansion of synthetic biology, and with the application of genetic engineering
  • Between virtual and physical – with the prevalence of augmented reality vision systems, augmented reality education via new MOOCs (massive open online courses), cryptocurrencies that remove the need for centralised audit authorities, and lots more.

To take just one example: Wired UK has just reported a claim by Brad Perkins, chief medical officer at Human Longevity Inc., that

A “supercharged” approach to human genome research could see as many health breakthroughs made in the next decade as in the previous century

The “supercharging” involves taking advantage of four converging trends:

“I don’t have a pill” to boost human lifespan, Perkins admitted on stage at WIRED Health 2015. But he has perhaps the next best thing — data, and the means to make sense of it. Based in San Diego, Human Longevity is fixed on using genome data and analytics to develop new ways to fight age-related diseases.

Perkins says the opportunity for humanity — and Human Longevity — is the result of the convergence of four trends: the reduction in the cost of genome sequencing (from $100m per genome in 2000, to just over $1,000 in 2014); the vast improvement in computational power; the development of large-scale machine learning techniques; and the wider movement of health care systems towards ‘value-based’ models. Together these trends are making it easier than ever to analyse human genomes at scale.

Small fries

Whilst entrepreneurs and technologists are foreseeing comprehensive solutions to age-related diseases – as well as the rise of smart automation that could free almost every member of society from the need to toil in employment they dislike – what are politicians obsessing about?

Instead of the opportunities of tomorrow, politicians are caught up in the challenges of yesteryear and today. Like a short-sighted business management team obsessed by the next few quarterly financial results but losing sight of the longer term, these politicians are putting all their effort into policies for incremental changes to present-day metrics – metrics such as tax thresholds, the gross domestic product, policing levels, the degree of privatisation in the health service, and the rate of flow of migrants from Eastern Europe into the United Kingdom.

It’s like the restricted vision which car manufacturing pioneer Henry Ford is said to have complained about:

If I had asked people what they wanted, they would have said faster horses.

This is light years away from leadership. It’s no wonder that electors are deeply dissatisfied.

The role of politics

To be clear, I’m not asking for politicians to dictate to entrepreneurs and technologists which products they should be creating. That’s not the role of politicians.

However, politicians should be ensuring that the broad social environment provides as much support as possible to:

  • The speedy, reliable development of those technologies which have the potential to improve our lives so fully
  • The distribution of the benefits of these technologies to all members of society, in a way that preserves social cohesion without infringing individual liberties
  • The monitoring of these technologies for risks of disastrous unintended consequences.

In this way, politicians help to address the human angle to technology. It’s as stated by management guru Peter Drucker in his 1986 book “Technology, Management, and Society”:

We are becoming aware that the major questions regarding technology are not technical but human questions.

Indeed, as the Transpolitica manifesto emphasises:

The speed and direction of technological adoption can be strongly influenced by social and psychological factors, by legislation, by subsidies, and by the provision or restriction of public funding.

Political action can impact all these factors, either for better or for worse.

The manifesto goes on to set out its objectives:

Transpolitica wishes to engage with politicians of all parties to increase the likelihood of an attractive, equitable, sustainable, progressive future. The policies we recommend are designed:

  • To elevate the thinking of politicians and other leaders, away from being dominated by the raucous issues of the present, to addressing the larger possibilities of the near future
  • To draw attention to technological opportunities, map out attractive roads ahead, and address the obstacles which are preventing us from fulfilling our cosmic potential.

Specific big potatoes that are missing from the discussion

If our political leaders truly were attuned to the possibilities of disruptive technological change, here’s a selection of the topics I believe would find much greater prominence in political discussion:

  1. How to accelerate lower-cost high quality continuous access to educational material, such as MOOCs, that will prepare people for the radically different future that lies ahead
  2. How to accelerate the development of personal genome healthcare, stem cell therapies, rejuvenation biotech, and other regenerative medicine, in order to enable much healthier people with much lower ongoing healthcare costs
  3. How to ensure that a green tech new deal succeeds, rather than continues to fall short of expectations (as it has been doing for the last 5-6 years)
  4. How to identify and accelerate the new industries where the UK can be playing a leading role over the next 5-10 years
  5. How to construct a new social contract – perhaps involving universal basic income – in order to cope with the increased technological unemployment which is likely to arise from improved automation
  6. How society should be intelligently assessing any new existential risks that emerging technologies may unintentionally trigger
  7. How to transition the network of bodies that operate international governance to a new status that is fit for the growing challenges of the coming decades (rather than perpetuating the inertia from the times of their foundations)
  8. How technology can involve more people – and more wisdom and insight from more people – in the collective decision-making that passes for political processes
  9. How to create new goals for society that embody a much better understanding of human happiness, human potential, and human flourishing, rather than the narrow economic criteria that currently dominate decisions
  10. How to prepare everyone for the next leaps forward in human consciousness which will be enabled by explorations of both inner and outer space.

Why small fries?

But the biggest question of all isn’t anything I’ve just listed. It’s this:

  • Why are politicians still stuck in present-day small fries, rather than focusing on the big potatoes?

I’ll be interested in answers to that question from readers. In the meantime, here are my own initial thoughts:

  • The power of inertia – politicians, like the rest of us, tend to keep doing what they’re used to doing
  • Too few politicians have any deep personal insight (from their professional background) into the promise (and perils) of disruptive technology
  • The lack of a specific vision for how to make progress on these Big Potato questions
  • The lack of clamour from the electorate as a whole for answers on these Big Potato questions.

If this is true, we must expect it will take some time for public pressure to grow, leading politicians in due course to pay attention to these topics.

It will be like the growth in capability of any given exponential technology. At first, development takes a long time. It seems as if nothing much is changing. But finally, tipping points are reached. At that stage, it becomes imperative to act quickly. And at that stage, politicians (and their advisors) will be looking around urgently for ready-made solutions they can adapt from think tanks. So we should be ready.
