dw2

10 October 2015

Technological unemployment – Why it’s different this time

On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants became embroiled in questions such as:

  • To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
  • To what extent should humans be anxious about this process?

The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:

Channel4_HumansAdvertisingHoarding-440x293

“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed as their roles are usurped by these new-fangled entities.

Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.

I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms that are developed with one set of skills in mind turn out to transfer, reasonably straightforwardly, to other sets of skills as well.

The master algorithm

That argument is spelt out in the recent book “The master algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.

TheMasterAlgorithm

The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:

  • The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
  • The universal learning machine accepts a set of corresponding input and output data, and makes the best possible job of inferring the algorithm that would produce the outputs from the inputs.

For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
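To make the contrast concrete, here is a toy sketch of the learning-machine idea in Python: the program is given only input/output pairs and infers the rule connecting them. The learner and the hidden rule are my own illustrative inventions – nothing here comes from Domingos’ book – and a real learning system would of course be vastly more sophisticated.

```python
# A toy illustration of the "universal learning machine" idea:
# instead of being given an algorithm, the program is given
# input/output pairs and infers the rule that connects them.
# This is a deliberately minimal sketch (fitting y = a*x + b by
# gradient descent), not anything close to a master algorithm.

def infer_rule(examples, steps=5000, lr=0.01):
    """Infer parameters (a, b) of y = a*x + b from (x, y) examples."""
    a, b = 0.0, 0.0
    n = len(examples)
    for _ in range(steps):
        # Gradient of mean squared error with respect to a and b
        grad_a = sum(2 * (a * x + b - y) * x for x, y in examples) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in examples) / n
        a -= lr * grad_a
        b -= lr * grad_b
    return a, b

# The "hidden" rule is y = 3x + 1; the learner only sees examples.
examples = [(x, 3 * x + 1) for x in range(-5, 6)]
a, b = infer_rule(examples)
print(round(a, 2), round(b, 2))  # close to 3.0 and 1.0
```

The same shape of program – examples in, inferred rule out – is what scales up, with far richer model families, to translation and treatment-recommendation tasks like those just described.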

As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:

  • “Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
  • “Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
  • “Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
  • “Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
  • “Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.

(See slide 6 of this Slideshare presentation. Indeed, take the time to view the full presentation. Better still, read Domingos’ entire book.)
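To give a flavour of just one of these tribes’ core algorithms, here is a minimal example of probabilistic inference: a single application of Bayes’ rule to a spam-filtering hypothesis. The numbers are invented purely for illustration.

```python
# A flavour of the Bayesians' core algorithm, probabilistic
# inference: update a belief with Bayes' rule as evidence arrives.
# The figures below are toy numbers, chosen only for illustration.

def bayes_update(prior, likelihood, evidence_prob):
    """P(H|E) = P(E|H) * P(H) / P(E)"""
    return likelihood * prior / evidence_prob

# Hypothesis: an email is spam. Prior: 20% of email is spam.
# Evidence: the email contains the word "prize".
p_spam = 0.2
p_prize_given_spam = 0.6   # "prize" appears in 60% of spam
p_prize_given_ham = 0.05   # ...and in 5% of legitimate mail

# Total probability of seeing the evidence, over both hypotheses
p_prize = (p_prize_given_spam * p_spam
           + p_prize_given_ham * (1 - p_spam))

posterior = bayes_update(p_spam, p_prize_given_spam, p_prize)
print(round(posterior, 3))  # 0.75: the evidence lifts 20% to 75%
```

Each tribe’s core algorithm is similarly compact at heart; the power comes from composing and scaling these updates over vast quantities of data.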

What’s likely to happen over the next decade, or two, is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made by physics as the fundamental forces of nature have gradually been unified into a single theory.

And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.

Incidentally, Domingos has set out some ambitious goals for what his book will accomplish:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.

Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.

People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.

Rise of the robots

There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation.

That’s one of the arguments made by perhaps the best writer so far on technological unemployment, Martin Ford. Ford’s recent book “Rise of the Robots: Technology and the Threat of a Jobless Future” builds ably on what previous writers have said.

RiseofRobots

Here’s a sample of review comments about Ford’s book:

Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument.
Los Angeles Times

If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent.
Financial Times, Summer books 2015, Business, Andrew Hill

[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right.
Daily Beast

Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines.
Fast Company

Well-researched and disturbingly persuasive.
Financial Times

Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future.
—Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think

Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read.
—Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes

If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.

I summarise the takeaway message in this picture, taken from one of my Delta Wisdom workshop presentations:

Tech unemployment curves

  • Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
  • However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
  • Coupled with the general purpose nature of these capabilities, it means that, conceivably, from some time around 2040, very few humans will be able to find paid work.
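The argument behind that picture can be sketched numerically. The growth rates below are assumptions chosen purely for illustration – the claim in the text is only that a crossover is plausible around 2040, not these specific figures:

```python
# Illustrative sketch of the crossover argument: machine capability
# improving exponentially versus human skills improving slowly
# through retraining. All numbers are hypothetical assumptions.

def first_crossover(start_year, machine0, human0,
                    machine_growth, human_growth):
    """Return the first year in which machine capability
    exceeds human capability, given annual growth rates."""
    machine, human = machine0, human0
    year = start_year
    while machine <= human:
        year += 1
        machine *= 1 + machine_growth
        human *= 1 + human_growth
    return year

# Suppose machines start at 10% of human capability in 2015,
# improving 11%/year, while humans improve 1%/year by retraining.
print(first_crossover(2015, 0.10, 1.00, 0.11, 0.01))  # → 2040
```

Vary the assumed growth rates and the crossover date moves, but with any sustained exponential advantage for automation, a crossover eventually arrives.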

A worked example: a site carpenter

During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?

One suggestion that came back was “site carpenter”. In this thinking, unfinished buildings are too complex and too difficult for robots to navigate. Robots that try to make their way through these buildings, to tackle carpentry tasks, will likely fall down. Or, assuming they don’t fall down, how will they cope with discovering that the reality in the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be, or have different tolerances, or alternatives have been used. Such environments, the argument goes, are too messy for robots to handle.

My answer is as follows. Yes, present-day robots often do fall down. Critics seem to find this hilarious. But this is pretty similar to the way young children often fall down while learning to walk, or novice skateboarders fall down when unfamiliar with that mode of transport. However, robots will learn fast. One example is shown in this video, of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):

As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically, the whole time. When software encounters information at variance from what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses, and, when needed, checking back with management for confirmation.
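The kind of control flow involved can be sketched very simply. The carpentry details below are hypothetical, purely to illustrate the pattern of acting on expectations, re-planning on surprise, and escalating under uncertainty:

```python
# A minimal sketch of how software can handle surprises: act on the
# expected state of the world, and when observation contradicts the
# plan, re-plan or escalate rather than simply fail. The carpentry
# scenario and the 0.8 threshold are hypothetical illustrations.

def execute_task(step, expected, observed, confidence):
    """Decide how to proceed when reality may differ from the spec."""
    if observed == expected:
        return f"proceed: {step}"
    if confidence >= 0.8:
        # High confidence in the new hypothesis: adapt the plan
        return f"re-plan: {step} using {observed} not {expected}"
    # Too uncertain: check back with management for confirmation
    return f"escalate: confirm {observed} at {step}"

print(execute_task("fit door frame", "90mm stud", "90mm stud", 0.99))
print(execute_task("fit door frame", "90mm stud", "100mm stud", 0.9))
print(execute_task("fit skirting", "pine board", "mdf board", 0.5))
```

Real robotic planners are statistical rather than rule-based, but the principle – observe, compare, revise, and escalate when unsure – is the same.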

The question is a reminder to me that the software and AI community needs to do a much better job of communicating the current capabilities of the field, and the likely improvements ahead.

What does it mean to be human?

For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:

  • Should these changes be welcomed, rather than feared?
  • What will these forthcoming changes imply for our conception of what it means to be human?

To my mind, technological unemployment will force us to rethink some of the fundamentals of the “protestant work ethic” that permeates society. That ethic has played a decisive positive role for the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.

If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.

Some of these topics were aired at a conference in New York City on 29th September: “The World Summit on Technological Unemployment”, which was run by Jim Clark’s World Technology Network.

Robotic Steel Workers

One of the many speakers at that conference, Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:

We are not facing a future without work. We are facing a future without jobs.

There is a huge difference between the two, and we must start seeing the difference, and making the difference more clear to each other.

In his blogpost “Jobs, Work, and Universal Basic Income”, Santens continues the argument as follows:

When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.

Put another way, extrinsically motivated work is work done before or after an expected payment. It’s an exchange. Intrinsically motivated work is work only made possible by sufficient access to money. It’s a gift.

The difference between these two forms of work cannot be overstated…

Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…

What enables this transformation would be some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.

What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy who chaired the event:

The RSA will be advocating support for Basic Income… in response to Technological Unemployment.

(This comment comes about 2/3 of the way through the podcast.)

To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But these problems of transition are a far better problem to have than dealing with the consequences of vastly increased unemployment and social alienation.

Life is being redefined

Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.

Janna Anderson keynote

You can find Anderson’s article here. It starts as follows:

The Robot Takeover is Already Here

The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.

It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots —’black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.

The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend lots longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.

By way of summary, I’ll pull out a few sentences from the middle of the article:

One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…

It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…

[Another] thing is certain: Life is being redefined.

Who controls the robots?

Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action.

That’s a theme I’ll be addressing next Sunday, 18th October, at a lunchtime session of the 2015 Battle of Ideas. The session is entitled “Man vs machine: Who controls the robots”.

robots

Here’s how the session is described:

From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.

On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…

Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?

As you can imagine, I’ll be taking serious issue with the claim, in the session description above, that progress with robots will “remain relatively modest”. At the same time, I’ll be arguing for a strong focus on questions of control.

It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?

In line with my recent essay “Four political futures: which will you choose?”, I’ll be arguing for a technoprogressive approach to control, rather than a technolibertarian one.

Four futures

I wait with interest to find out how much this viewpoint will be shared by the other speakers at the session.

28 April 2015

Why just small fries? Why no big potatoes?

Filed under: innovation, politics, Transpolitica, vision — David Wood @ 3:12 pm

Big potatoes

Last night I joined a gathering known as “Big Potatoes”, for informal discussion over dinner at the De Santis restaurant in London’s Old Street.

The potatoes in question weren’t on the menu. They were the potential big innovations that politicians ought to be contemplating.

The Big Potatoes group has a tag-line: “The London Manifesto for Innovation”.

As their website states,

The London Manifesto for Innovation is a contribution to improving the climate for innovation globally.

The group first formed in the run-up to the previous UK general election (2010). I blogged about them at that time, here, when I listed the principles from their manifesto:

  • We should “think big” about the potential of innovation, since there’s a great deal that innovation can accomplish;
  • Rather than “small is beautiful” we should keep in mind the slogan “scale is beautiful”;
  • We should seek more than just a continuation of the “post-war legacy of innovation” – that’s only the start;
  • Breakthrough innovations are driven by new technology – so we should prioritise the enablement of new technology;
  • Innovation is hard work and an uphill struggle – so we need to give it our full support;
  • Innovation arises from pure scientific research as well as from applied research – both are needed;
  • Rather than seeking to avoid risk or even to manage risk, we have to be ready to confront risk;
  • Great innovation needs great leaders of innovation, to make it happen;
  • Instead of trusting regulations, we should be ready to trust people;
  • Markets, sticks, carrots and nudges are no substitute for what innovation itself can accomplish.

That was 2010. What has caused the group to re-form now, in 2015, is the question:

Why is so much of the campaigning for the 2015 election preoccupied with small fries, when it could – and should – be concentrating on big potatoes?

Last night’s gathering was facilitated by three of the writers of the 2010 big potato manifesto: Nico Macdonald, James Woudhuysen, and Martyn Perks. The Chatham House Rule that was in place prevents me from quoting directly from the participants. But the discussion stirred up plenty of thoughts in my own mind, which I’ll share now.

The biggest potato

FreemanDyson

I share the view expressed by renowned physicist Freeman Dyson, in the book “Infinite in all directions” from his 1985 Gifford lectures:

Technology is… the mother of civilizations, of arts, and of sciences

Technology has given rise to enormous progress in civilization, arts and sciences over recent centuries. New technology is poised to have even bigger impacts on civilization in the next 10-20 years. So why aren’t politicians paying more attention to it?

MIT professor Andrew McAfee takes up the same theme, in an article published in October last year:

History teaches us that nothing changes the world like technology

McAfee spells out a “before” and “after” analysis. Here’s the “before”:

For thousands of years, until the middle of the 18th century, there were only glacial rates of population growth, economic expansion, and social development.

And the “after”:

Then an industrial revolution happened, centred around James Watt’s improved steam engine, and humanity’s trajectory bent sharply and permanently upward

AndrewMcAfee

One further quote from McAfee’s article rams home the conclusion:

Great wars and empires, despots and democrats, the insights of science and the revelations of religion – none of them transformed lives and civilizations as much as a few practical inventions

Inventions ahead

In principle, many of the grave challenges facing society over the next ten years could be solved by “a few practical inventions”:

  • Students complain, with some justification, about the costs of attending university. But technology can enable better MOOCs – Massive Online Open Courses – that can deliver high quality lectures, removing significant parts of the ongoing costs of running universities; free access to such courses can do a lot to help everyone re-skill, as new occupational challenges arise
  • With one million people losing their lives to traffic accidents worldwide every year, mainly caused by human driver error, we should welcome the accelerated introduction of self-driving cars
  • Medical costs could be reduced by greater application of the principles of preventive maintenance (“a stitch in time saves nine”), particularly through rejuvenation biotechnology and healthier diets
  • A sustained green tech new deal should push society away from dependency on fuels that emit dangerous amounts of greenhouse gases, resulting in lifestyles that are positive for the environment as well as positive for humanity
  • The growing costs of governmental bureaucracy itself could be reduced by whole-heartedly embracing improved information technology and lean automation.

Society has already seen remarkable changes in the last 10-20 years as a result of rapid progress in fields such as electronics, computers, digitisation, and automation. In each case, the description “revolution” is appropriate. But even these revolutions pale in significance to the changes that will, potentially, arise in the next 10-20 years from extraordinary developments in healthcare, brain sciences, atomically precise manufacturing, 3D printing, distributed production of renewable energy, artificial intelligence, and improved knowledge management.

Indeed, the next 10-20 years look set to witness four profound convergences:

  • Between artificial intelligence and human intelligence – with next generation systems increasingly embodying so-called “deep learning”, “hybrid intelligence”, and even “artificial emotional intelligence”
  • Between machine and human – with smart technology evolving from “mobile” to “wearable” and then to “insideable”, and with the emergence of exoskeletons and other cyborg technology
  • Between software and biology – with programming moving from silicon (semiconductor) to carbon (DNA and beyond), with the expansion of synthetic biology, and with the application of genetic engineering
  • Between virtual and physical – with the prevalence of augmented reality vision systems, augmented reality education via new MOOCs (massive open online courses), cryptocurrencies that remove the need for centralised audit authorities, and lots more.

To take just one example: Wired UK has just reported a claim by Brad Perkins, chief medical officer at Human Longevity Inc., that

A “supercharged” approach to human genome research could see as many health breakthroughs made in the next decade as in the previous century

The “supercharging” involves taking advantage of four converging trends:

“I don’t have a pill” to boost human lifespan, Perkins admitted on stage at WIRED Health 2015. But he has perhaps the next best thing — data, and the means to make sense of it. Based in San Diego, Human Longevity is fixed on using genome data and analytics to develop new ways to fight age-related diseases.

Perkins says the opportunity for humanity — and Human Longevity — is the result of the convergence of four trends: the reduction in the cost of genome sequencing (from $100m per genome in 2000, to just over $1,000 in 2014); the vast improvement in computational power; the development of large-scale machine learning techniques; and the wider movement of health care systems towards ‘value-based’ models. Together these trends are making it easier than ever to analyse human genomes at scale.
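As a quick sanity check, the cost figures quoted above imply a remarkable average annual rate of decline:

```python
# The genome-sequencing cost figures quoted above imply a striking
# annual rate of decline, which this quick calculation makes explicit.
cost_2000 = 100_000_000   # dollars per genome, 2000
cost_2014 = 1_000         # dollars per genome, 2014
years = 2014 - 2000

# Average factor by which the cost shrank each year
annual_factor = (cost_2000 / cost_2014) ** (1 / years)
print(round(annual_factor, 2))  # roughly 2.28: cost halved (and more) each year
```

That is considerably faster than Moore’s law, which is precisely why Perkins can talk of a “supercharged” approach.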

Small fries

french-fries-525005_1280

Whilst entrepreneurs and technologists are foreseeing comprehensive solutions to age-related diseases – as well as the rise of smart automation that could free almost every member of society from the need to toil in employment they dislike – what are politicians obsessing about?

Instead of the opportunities of tomorrow, politicians are caught up in the challenges of yesteryear and today. Like a short-sighted business management team obsessed by the next few quarterly financial results but losing sight of the longer term, these politicians are putting all their effort into policies for incremental changes to present-day metrics – metrics such as tax thresholds, the gross domestic product, policing levels, the degree of privatisation in the health service, and the rate of flow of migrants from Eastern Europe into the United Kingdom.

It’s like the restricted vision which car manufacturing pioneer Henry Ford is said to have complained about:

If I had asked people what they wanted, they would have said faster horses.

This is light years away from leadership. It’s no wonder that electors are deeply dissatisfied.

The role of politics

To be clear, I’m not asking for politicians to dictate to entrepreneurs and technologists which products they should be creating. That’s not the role of politicians.

However, politicians should be ensuring that the broad social environment provides as much support as possible to:

  • The speedy, reliable development of those technologies which have the potential to improve our lives so fully
  • The distribution of the benefits of these technologies to all members of society, in a way that preserves social cohesion without infringing individual liberties
  • Monitoring for risks of accidental outcomes from these technologies that would have disastrous unintended consequences.

PeterDrucker

In this way, politicians help to address the human angle to technology. It’s as stated by management guru Peter Drucker in his 1986 book “Technology, Management, and Society”:

We are becoming aware that the major questions regarding technology are not technical but human questions.

Indeed, as the Transpolitica manifesto emphasises:

The speed and direction of technological adoption can be strongly influenced by social and psychological factors, by legislation, by subsidies, and by the provision or restriction of public funding.

Political action can impact all these factors, either for better or for worse.

The manifesto goes on to set out its objectives:

Transpolitica wishes to engage with politicians of all parties to increase the likelihood of an attractive, equitable, sustainable, progressive future. The policies we recommend are designed:

  • To elevate the thinking of politicians and other leaders, away from being dominated by the raucous issues of the present, to addressing the larger possibilities of the near future
  • To draw attention to technological opportunities, map out attractive roads ahead, and address the obstacles which are preventing us from fulfilling our cosmic potential.

Specific big potatoes that are missing from the discussion

If our political leaders truly were attuned to the possibilities of disruptive technological change, here’s a selection of the topics I believe would find much greater prominence in political discussion:

  1. How to accelerate lower-cost high quality continuous access to educational material, such as MOOCs, that will prepare people for the radically different future that lies ahead
  2. How to accelerate the development of personal genome healthcare, stem cell therapies, rejuvenation biotech, and other regenerative medicine, in order to enable much healthier people with much lower ongoing healthcare costs
  3. How to ensure that a green tech new deal succeeds, rather than continues to fall short of expectations (as it has been doing for the last 5-6 years)
  4. How to identify and accelerate the new industries where the UK can be playing a leading role over the next 5-10 years
  5. How to construct a new social contract – perhaps involving universal basic income – in order to cope with the increased technological unemployment which is likely to arise from improved automation
  6. How society should be intelligently assessing any new existential risks that emerging technologies may unintentionally trigger
  7. How to transition the network of bodies that operate international governance to a new status that is fit for the growing challenges of the coming decades (rather than perpetuating the inertia from the times of their foundations)
  8. How technology can involve more people – and more wisdom and insight from more people – in the collective decision-making that passes for political processes
  9. How to create new goals for society that embody a much better understanding of human happiness, human potential, and human flourishing, rather than the narrow economic criteria that currently dominate decisions
  10. How to prepare everyone for the next leaps forward in human consciousness which will be enabled by explorations of both inner and outer space.

Why small fries?

But the biggest question of all isn’t anything I’ve just listed. It’s this:

  • Why are politicians still stuck in present-day small fries, rather than focusing on the big potatoes?

I’ll be interested in answers to that question from readers. In the meantime, here are my own initial thoughts:

  • The power of inertia – politicians, like the rest of us, tend to keep doing what they’re used to doing
  • Too few politicians have any deep personal insight (from their professional background) into the promise (and perils) of disruptive technology
  • The lack of a specific vision for how to make progress on these Big Potato questions
  • The lack of clamour from the electorate as a whole for answers on these Big Potato questions.

If this diagnosis is correct, we must expect it will take some time for public pressure to grow, leading politicians in due course to pay attention to these topics.

It will be like the growth in capability of any given exponential technology. At first, development takes a long time. It seems as if nothing much is changing. But eventually, tipping points are reached. At that stage, it becomes imperative to act quickly. And at that stage, politicians (and their advisors) will be looking around urgently for ready-made solutions they can adapt from think tanks. So we should be ready.
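That exponential dynamic is easy to state numerically. A toy illustration (the starting capability and threshold below are purely hypothetical numbers, chosen only to make the shape of the curve visible) shows how steady doubling looks like "nothing much is changing" for several periods, then crosses a tipping point almost abruptly:

```python
# Hypothetical numbers: a capability starting at 1% of its
# disruption threshold, doubling once per period.
capability = 0.01
threshold = 1.0

for period in range(12):
    if capability >= threshold:
        print(f"period {period}: tipping point reached")
        break
    print(f"period {period}: {capability:.0%} of threshold")
    capability *= 2
```

After six periods the capability still sits below two-thirds of the threshold; a single period later it has overshot the threshold entirely.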

18 March 2013

The future of the Mobile World Congress

Filed under: Accenture, Cambridge, Connectivity, innovation, Internet of Things, M2M, MWC — David Wood @ 3:37 am

How should the Mobile World Congress evolve? What does the future hold for this event?

MWC (the Mobile World Congress) currently has good claims to be the world’s leading show for the mobile industry. From 25-28 February, 72 thousand attendees from over 200 countries made their way around eight huge halls where over 1,700 companies were showcasing their products or services. The Barcelona exhibition halls were heaving and jostling.

Tony Poulos, Market Strategist for TM Forum, caught much of the mood of the event in his review article, “Billions in big business as Barcelona beats blues”. Here’s an excerpt:

In one place for four days each year you can see, meet and hear almost every key player in the GSM mobile world. And there lies its secret. The glitz, the ritzy exhibits, the partially clad promo girls, the gimmicks, the giveaways are all inconsequential when you get down to the business of doing business. No longer do people turn up at events like MWC just to attend the conference sessions, walk the stands or attend the parties, they all come here to network in person and do business.

For suppliers, all their customers and prospects are in one place for one week. No need to send sales teams around the globe to meet with them, they come to you. And not just the managers and directors, there are more telco C-levels in Barcelona for MWC than are left behind in the office. For suppliers and operators alike, if you are not seen at MWC you are either out of business or out of a job.

Forget virtual social networking, this is good old-fashioned, physical networking at its best. Most meetings are arranged ahead of time and stands are changing slowly from gaudy temples pulling in passers-by to sophisticated business environments complete with comfortable meeting rooms, lounges, bars, espresso machines and delicacies including Swiss chocolates, Portuguese egg tarts, French pastries and wines from every corner of the globe…

But at least some of the 72,000 MWC attendees found the experience underwhelming. Kevin Coleman, CEO of Alliantus, offered a damning assessment at the end of the show:

I am wondering if I am the boy who shouts – “but the emperor is wearing no clothes” – or the masked magician about to reveal the secrets of the magic trick.

Here it is. “Most of you at Mobile World Congress have wasted your money.”

Yes, I have just returned from the MWC where I have seen this insanity with my own eyes…

That’s quite a discrepancy in opinion. Billions in business, or insanity?

Or to rephrase the question in terms suggested by my Accenture colleague Rhian Pamphilon, Fiesta or Siesta?

To explore that question, Accenture sponsored a Cambridge Wireless event on Tuesday last week at the Møller Centre at Churchill College in Cambridge. The idea was to bring together a panel of mobile industry experts who would be prepared to share forthright but informed opinions on the highlights and lowlights of this year’s MWC.

Panellists

The event was entitled “Mobile World Congress: Fiesta or Siesta?!”. The panellists who kindly agreed to take part were:

  • Paul Ceely, Head of Network Strategy at EE
  • Raj Gawera, VP Marketing at Samsung Cambridge Mobile Solutions
  • Dr Tony Milbourn, VP Strategy at u-blox AG
  • Geoff Stead, Senior Director, Mobile Learning at Qualcomm
  • Professor William Webb, CTO at Neul
  • Dr. Richard Windsor, Founder of Radio Free Mobile.

The meeting was structured around three questions:

  1. The announcements at MWC that people judged to be the most significant – the news stories with the greatest implications
  2. The announcements at MWC that people judged to be the most underwhelming – the news stories with the least real content
  3. The announcements people might have expected at MWC but which failed to materialise – speaking volumes by their silence.

In short, what were the candidates for what we termed the Fiesta, the Siesta, and the Niesta of the event? Which trends should be picked out as the most exciting, the most snooze-worthy, and as sleeping giants liable to burst forth into new spurts of activity? And along the way, what future could we discern, not just for individual mobile trends, but for the MWC event itself?

I had the pleasure to chair the discussion. All panellists were speaking on their own behalf, rather than necessarily representing the corporate viewpoints of their companies. That helped to encourage a candid exchange of views. The meeting also found time to hear suggestions from the audience – which numbered around 100 members of the extended Cambridge Wireless community. Finally, there was a lively networking period, in which many of the audience good-humouredly button-holed me with additional views.

We were far from reaching any unanimous conclusion. Items that were picked as “Fiesta” by one panellist sometimes featured instead on the “Siesta” list of another. But I list below some key perceptions that commanded reasonable assent on the evening.

Machine to machine, connected devices, and wearable computers

MWC showed a lot of promise for machine-to-machine (M2M) communications and for connected devices (devices that contain communications functionality but which are not phones). But more remains to be done, for this promise to reach its potential.

The GSMA Connected City gathered together a large number of individual demos, but the demos were mainly separated from each other, without there being a clear overall architecture incorporating them all.

Connected car was perhaps the field showing the greatest progress, but even there, practical questions remain – for example, should the car rely on its own connectivity, or instead rely on connectivity of smartphones brought into the car?

For MWC to retain its relevance, it needs to bring M2M and connected devices further to the forefront.

Quite likely, wearable computers will be showing greater prominence by this time next year – whether via head-mounted displays (such as Google Glass) or via the smart watches allegedly under development at several leading companies.

NFC – Near Field Communications

No one spoke up with any special excitement about NFC. Words used about it were “boring” and “complicated”.

Handset evolution

The trend towards larger screen sizes was evident. This seems to be driven by the industry as much as by users, since larger screens encourage greater amounts of data usage.

On the other hand, flexible screens, which have long been anticipated, and which might prompt significant innovation in device form factors, showed little presence at the show. This is an area to watch closely.

Perhaps the most innovative device on show was the dual display Yota Phone – with a standard LCD on one side, and an eInk display on the other. As can be seen in this video from Ben Wood of CCS Insight, the eInk display remains active even if the device is switched off or runs out of battery.

Two other devices received special mention:

  • The Nokia Lumia 520, because of its low pricepoint
  • The Lenovo K900, because of what it showed about the capability of Intel’s mobile architecture.

Mobile operating systems

Panellists had dim views on some of the Android devices they saw. Some of these devices showed very little differentiation from each other. Indeed, some “formerly innovative” handset manufacturers seem to have lost their direction altogether.

Views were mixed on the likely impact of Mozilla’s Firefox OS. Is the user experience going to be sufficiently compelling for phones based on this OS to gain significant market traction? It seems too early to tell.

Panellists were more open to the idea that the marketplace could tolerate a considerable number of different mobile operating systems. Gone are the days when CEOs of network operators would call for the industry to agree on just three platforms. The vast numbers of smartphones expected over the next few years (with one billion likely to be sold in 2013) mean there is room for quite a few second-tier platforms behind the market leaders iOS and Android.

Semiconductor suppliers

If the market for mobile operating systems has two strong leaders, the choice of leading semiconductor supplier is even more limited. One company stands far out from the crowd: Qualcomm. In neither case is the rest of the industry happy with the small number of leading choices available.

For this reason, the recently introduced Tegra 4i processor from Nvidia was seen as potentially highly significant. This incorporates an LTE modem.

Centre of gravity of innovation

In past years, Europe could hold its head high as being at the vanguard of mobile innovation. Recent years have seen more innovation from America, e.g. from Silicon Valley. MWC this year also saw a lot of innovation from the Far East – especially Korea and China. Some audience members suggested they would be more interested in attending an MWC located in the Far East than in Barcelona.

Could the decline in Europe’s position be linked to regulatory framework issues? It had been striking to listen to the pleas during keynotes from CEOs of European network operators, requesting more understanding from governments and regulators. Perhaps some consolidation needs to take place, to address the fragmentation among different network operators. This view was supported by the observation that a lot of the attempted differentiation between operators – for example, in the vertical industry solutions they offer – fails to achieve any meaningful distinction.

State of maturity of the industry

In one way, the lack of tremendous excitement at MWC this year indicates that the mobile industry has become relatively mature. This is in line with the observation that there were “a lot of suits” at the event. Arguably, the industry is ripe for another round of major disruption – similar to that triggered by Apple’s introduction of the iPhone.

Unsurprisingly, given the setting of the Fiesta or Siesta meeting, many in the audience hold the view that “the next big mobile innovation” could well involve companies with strong footholds in Cambridge.


Footnote: Everything will be connected

Some of the same themes from the Fiesta or Siesta discussion will doubtless re-appear in “The 5th Future of Wireless International Conference” being run by Cambridge Wireless at the same venue, the Møller Centre, on 1st and 2nd of July this year. Registration is already open. To quote from the event website:

Everything Will Be Connected (Did you really say 50 billion devices?)

Staggeringly, just 30 years since the launch of digital cellular, over 6 billion people now have a mobile phone. Yet we may be on the threshold of a far bigger global shift in humanity’s use and application of wireless and communications. It’s now possible to connect large numbers of physical objects to the Internet and Cloud and give each of them an online digital representation. What really happens when every ‘thing’ is connected to the Cloud and by implication to everything else; when computers know where everything is and can enhance our perception and understanding of our surroundings? How will we interact with this augmented physical world in the future, and what impact will this have on services, infrastructure and devices? More profoundly, how might this change our society, business and personal lives?

In 2013, The Future of Wireless International Conference explores strategic questions about this “Internet of Things”. How transformational could it be and how do we distinguish reality from hyperbole? What about the societal, business and technical challenges involved in moving to a future world where everyday objects are connected and autonomous? What are the benefits and pitfalls – will this be utopia or dystopia? What is the likely impact on your business and what new opportunities will this create? Is your business strategy correct, are you too early, or do you risk being too late? Will this change your business, your life? – almost certainly. Come to hear informed analysis, gain insight, and establish new business connections at this un-missable event.

The agenda for this conference is already well-developed – with a large number of highlights all the way through. I’ll restrict myself to mentioning just two of them. The opening session is described as an executive briefing “What is the Internet Of Things and Why Should I Care?”, and features a keynote “A Vision of the Connected World” by Prof Christopher M. Bishop, FREng, FRSE, Distinguished Scientist, Microsoft Research. The closing session is a debate on the motion “This house believes that mobile network operators will not be winners in the Internet of Things”, between

1 May 2010

Costs of complexity: in healthcare, and in the mobile industry

Filed under: books, business model, disruption, healthcare, innovation, modularity, simplicity — David Wood @ 11:56 am

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

That sentence comes from page 92 of “The Innovator’s Prescription: A disruptive solution for health care”, co-authored by Clayton Christensen, Jerome Grossman, and Jason Hwang.  Like all the books authored (or co-authored) by Christensen, the book is full of implications for fields outside the particular industry being discussed.

In the case of this book, the subject matter is critically important in its own right: how can we find ways to allow technological breakthroughs to reduce the spiralling costs of healthcare?

In the book, the authors brilliantly extend and apply Christensen’s well-known ideas on disruptive change to the field of healthcare.  But the book should be recommended reading for anyone interested in either strategy or operational effectiveness in any hi-tech industry.  (It’s also recommended reading for anyone interested in the future of medicine – which probably includes all of us, since most of us can anticipate spending increasing amounts of time in hospitals or doctor’s surgeries as we become older.)

I’m still less than half way through reading this book, but the section I’ve just read seems to speak loudly to issues in the mobile industry, as well as to the healthcare industry.

It describes a manufacturing plant which was struggling with overhead costs.  At this plant, 6.2 dollars were spent on overhead expenses for every dollar spent on direct labour:

These overhead costs included not just utilities and depreciation, but the costs of scheduling, expediting, quality control, repair and rework, scrap maintenance, materials handling, accounting, computer systems, and so on.  Overhead comprised all costs that were not directly spent in making products.

The quality of products made at that plant was also causing concern:

About 15 percent of all overhead costs were created by the need to repair and rework products that failed in the field, or had been discovered by inspectors as faulty before shipment.

However, it didn’t appear to the manager that any money was being wasted:

The plant hadn’t been painted inside or out in 20 years.  The landscaping was now overrun by weeds.  The receptionist in the bare-bones lobby had been replaced long ago with a paper directory and a phone.  The manager had no secretarial assistance, and her gray World War II vintage steel desk was dented by a kick from some frustrated predecessor.

Nevertheless, this particular plant had considerably higher overhead burden rates than the other plants from the same company.  What was the difference?

The difference was in the complexity.  This particular plant was set up to cope with large numbers of different product designs, whereas the other plants (which had been created later) had been able to optimise for particular design families.

The original plant essentially had the value proposition,

We’ll make any product that anyone designs

In contrast, the newer plants had the following kind of value proposition:

If you need a product that can be made through one of these two sequences of operations and activities, we’ll do it for you at the lowest possible cost and the highest possible quality.

Further analysis, across a number of different plants, reached the following results:

Each time the scale of a plant doubled, holding the degree of pathway complexity constant, the overhead rate could be expected to fall by 15 percent.  So, for example, a plant that made two families and generated $40 million in sales would be expected to have an overhead burden ratio of about 2.85, while the burden rate for a plant making two families with $80 million in sales would be 15% lower (2.85 x 0.85 = 2.42).  But every time the number of families produced in a plant of a given scale doubled, the overhead burden rate soared 27 percent.  So if a two-pathway, $40 million plant accepted products that required two additional pathways, but that did not increase its sales volume, its overhead burden rate would increase by 2.85 x 1.27, to 3.62…
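The scaling rule in that excerpt can be expressed as a simple model: the overhead burden rate falls 15 percent per doubling of plant scale (at constant complexity), and rises 27 percent per doubling of pathway count (at constant scale). Here is a minimal sketch in Python; the function name and the extrapolation to arbitrary scales and pathway counts are my own, with the baseline parameters taken from the book’s worked example of a two-pathway, $40 million plant:

```python
import math

def overhead_burden_rate(sales_musd, pathways,
                         base_rate=2.85, base_sales=40.0, base_pathways=2):
    """Estimated overhead burden rate (overhead dollars per direct-labour dollar).

    Falls 15% for each doubling of sales at constant pathway complexity,
    and rises 27% for each doubling of the number of product pathways
    at constant scale.
    """
    scale_doublings = math.log2(sales_musd / base_sales)
    pathway_doublings = math.log2(pathways / base_pathways)
    return base_rate * (0.85 ** scale_doublings) * (1.27 ** pathway_doublings)

# Reproduces the excerpt's worked examples:
print(round(overhead_burden_rate(80, 2), 2))  # doubled scale: 2.42
print(round(overhead_burden_rate(40, 4), 2))  # doubled pathways: 3.62
```

Note how quickly the complexity penalty compounds: in this model, adding pathways raises overhead faster than the equivalent growth in scale reduces it.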

This is just one aspect of a long and fascinating analysis.  Modern day general purpose hospitals support huge numbers of different patient care pathways, so high overhead rates are inevitable.  The solution is to allow the formation of separate specialist units, where practitioners can then focus on iteratively optimising particular lines of healthcare.  We can already see this in firms that specialise in laser eye surgery, in hernia treatment, and so on.  Without these new units separating and removing some of the complexity of the original unit, it becomes harder and harder for innovation to take place.  The innovation becomes stifled under conflicting business models.  (I’m simplifying the argument here: please take a look at the book for the full picture.)

In short: reducing overhead costs isn’t just a matter of “eliminating obvious inefficiencies, spending less time on paperwork, etc”.  It often requires initially painful structural changes, in which overly complex multi-function units are simplified by the removal and separation of business lines and product pathways.  Only with the new, simplified set up – often involving new companies, and sometimes involving “creative destruction” – can disruptive innovations flourish.

Rising organisational complexity impacts the mobile industry too.  I’ve written about this before.  For example, in May last year I wrote an article “Platform strategy failure modes“:

The first failure mode is when a device manufacturer fails to have a strategy towards mobile software platforms.  In this case, the adage holds true that a failure to strategise is a strategy to fail.  A device manufacturer that simply “follows the wind” – picking platform P1 for device D1 because customer C1 expressed a preference for P1, picking platform P2 for device D2 because customer C2 expressed a preference for P2, etc – is going to find that the effort of interacting successfully with all these different platforms far exceeds their expectations.  Mobile software platforms require substantial investment from manufacturers, before the manufacturer can reap commercial rewards from these platforms.  (Getting a device ready to demo is one thing.  That can be relatively easy.  Getting a device approved to ship onto real networks – a device that is sufficiently differentiated to stand out from a crowd of lookalike devices – can take a lot longer.)

The second failure mode is similar to the first one.  It’s when a device manufacturer spreads itself too thinly across multiple platforms.  In the previous case, the manufacturer ended up working with multiple platforms, without consciously planning that outcome.  In this case, the manufacturer knows what they are doing.  They reason to themselves as follows:

  • We are a highly competent company;
  • We can manage to work with (say) three significant mobile software platforms;
  • Other companies couldn’t cope with this diversification, but we are different.

But the outcome is the same as the previous case, even though different thinking gets the manufacturer into that predicament.  The root failure is, again, a failure to appreciate the scale and complexity of mobile software platforms.  These platforms can deliver tremendous value, but require significant ongoing skill and investment to yield that kind of result.

The third failure mode is when a manufacturer seeks re-use across several different mobile software platforms.  The idea is that components (whether at the application or system level) are developed in a platform-agnostic way, so they can fit into each platform equally well.

To be clear, this is a fine goal.  Done right, there are big dividends.  But my observation is that this strategy is hard to get right.  The strategy typically involves some kind of additional “platform independent layer”, that isolates the software in the component from the particular programming interfaces of the underlying platform.  However, this additional layer often introduces its own complications…

Seeking clever economies of scale is commendable.  But there often comes a time when growing scale is bedevilled by growing complexity.  It’s as mentioned at the beginning of this article:

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

Even more than a drive to scale, companies in the mobile space need a drive towards simplicity. That means organisational simplicity as well as product simplicity.

As I stated in my article “Simplicity, simplicity, simplicity“:

The inherent complexity of present-day smartphones risks all kinds of bad outcomes:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window;
  • Smartphone application development may become difficult, as developers need to juggle different programming interfaces and optimisation methods;
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.  The number one issue for the mobile industry, arguably, is to constantly find better ways to tame this complexity.

The companies that are successfully addressing the complexity issue seem, on the whole, to be the ones on the rise in the mobile space.

Footnote: It’s a big claim, but it may well be true that of all the books on the subject of innovation in the last 20 years, Clayton Christensen’s writings are the most consistently important.  The subtitle of his first book, “The innovator’s dilemma”, is a reminder why: “When new technologies cause great firms to fail”.

23 March 2010

The search for big political ideas

Filed under: democracy, Humanity Plus, innovation, politics, vision — David Wood @ 1:07 am

On Saturday, I attended an event called “The Battle for Politics”, organised by the Institute of Ideas as a “pre-election public summit”.

The publicity material for this event gave me reason to look forward to it:

Party politics no longer seems to be about clear ideological differences, or indeed any kind of substantial debate reflecting competing visions for a better society. Nonetheless, many pressing issues remain unresolved.

So though it might be tempting to write off mainstream politics as irrelevant, and to take a ‘none of the above’ position in the coming election, this can only feed the pervasive cynicism about the possibility of social change and progress. History has not gone on standby, but continues to throw up new challenges.

The Institute of Ideas wants to take the opportunity of this election to re-enfranchise the electorate and put each candidate on the spot by asking them to declare where they stand on a range of key questions.

And yes, there were some worthy discussions during the day:

  • The electorate seem still to be deeply interested in political matters, even though they are alienated from existing political parties and politicians;
  • Changing the way voting takes place might engender better discussion and buy-in from the electorate to the political process;
  • The ever growing costs of the welfare state – coupled with our current financial shortfalls – mean that some significant change is needed in how the welfare state operates;
  • Insights from social sciences (such as behavioural economics) possibly have at least some role to play in improving political governance;
  • Wider adoption of evidence-based policy – where appropriate – probably will also improve governance.

However, at the end of the day, I felt underwhelmed by what had taken place.

For example: at the event, the Institute of Ideas had launched their “21 pledges for progress 2010“.  This included the following gems:

  • Limit the police’s power to detain people without charge to 24 hours rather than 28 days, in the interests of civil liberties and due process.
  • Declare an amnesty for all illegal immigrants presently in the UK, whether asylum seekers or economic migrants, in the interests of recognising the positive aspirations of those who seek to improve their lives by moving countries.
  • Open the borders, revoking all immigration controls, in the interests of the free movement of citizens.
  • Get rid of police Tsars and unelected ‘experts’ from government decision-making in the interests of parliamentary sovereignty and democratic accountability.
  • Abolish the monarchy and the House of Lords in the interests of a fully elected legislature and executive.
  • Direct state funding of schools into providing universal access to the highest standard of education in academic subjects, rather than politicised cross curricular themes like sustainability or citizenship, in the interests of passing on real knowledge to our children.

I applaud the Institute of Ideas for catalysing debate on a series of important topics, but I saw little evidence of political ideas that are likely to deservedly capture the imagination and the enthusiasm of the electorate.

The material I liked best, from what was on display, was something entitled “The London Manifesto for Innovation”, created by a group called “The Big Potatoes“.  This made the following assertions:

  • We should “think big” about the potential of innovation, since there’s a great deal that innovation can accomplish;
  • Rather than “small is beautiful” we should keep in mind the slogan “scale is beautiful”;
  • We should seek more than just a continuation of the “post-war legacy of innovation” – that’s only the start;
  • Breakthrough innovations are driven by new technology – so we should prioritise the enablement of new technology;
  • Innovation is hard work and an uphill struggle – so we need to give it our full support;
  • Innovation arises from pure scientific research as well as from applied research – both are needed;
  • Rather than seeking to avoid risk or even to manage risk, we have to be ready to confront risk;
  • Great innovation needs great leaders of innovation, to make it happen;
  • Instead of trusting regulations, we should be ready to trust people;
  • Markets, sticks, carrots and nudges are no substitute for what innovation itself can accomplish.

I’d like to build on these insights, with some concrete suggestions.  These are suggestions for items that should become national priorities – items that deserve a larger amount of attention, analysis, resourcing, and funding.  Borrowing some of the “big potatoes” language, I see these items as potentially having major impact over the next 10-20 years.  As such, they deserve to be national priorities during the decade ahead.

I’m not sure exactly what belongs on this list of national priorities, and look forward to feedback.  But here’s an initial proposal:

  1. Preventive medicine – since the costs of prevention will in many cases be dwarfed by the costs of the cures they avert;
  2. Anti-aging treatments – an important special case of the previous point;
  3. Better than well – just as there are many benefits to avoiding ill-health, there are many benefits to promoting super-health;
  4. Cognitive enhancement and intelligence augmentation – to help everyone to become smarter and more sociable (both individually and collectively);
  5. Artificial general intelligence – an important special case of the previous point;
  6. Improved rationality (overcoming biases, in all their forms) – another important special case of the same point;
  7. Freedom from fundamentalism – diminishing the hold of dogma, whether from “scripture” or “tradition” or “prophets”;
  8. Education about accelerating technology – so people become fully aware of the opportunities, risks, context, and options;
  9. Robotics supporting humans – providing unmatched strength, precision, and diligence;
  10. Nanotechnology – the use of atom-level engineering to create highly useful new materials, compounds, and tools;
  11. Synthetic biology – techniques of software and manufacturing applied to biology, with huge benefits for health;
  12. Largescale clean energy – whether solar, nuclear, or whatever;
  13. Patent system reform – to address aspects of intellectual property law where innovation and collaboration are being hindered rather than helped;
  14. Smart market regulation – to handle pressures where social forces lead to market failures rather than genuinely useful products;
  15. Expansion of voluntary enterprise (the domain of not-for-profit contribution) – since not everything good is driven by financial motivation;
  16. Expansion of human autonomy – supporting greater choice and experience – in both virtual and physical reality;
  17. New measures of human accomplishment – an attractive vision that supersedes economic measures such as GDP;
  18. Geo-engineering capability – to equip us with tools to wisely restructure the planet (and more).

To give this list a name: I tentatively call it “The Humanity+ Agenda”.  I propose to say more about it at the Humanity+ UK2010 event in London’s Conway Hall on 24th April.

The list is driven by my beliefs that:

  • Humanity in the 21st century is facing both enormous challenges and enormous opportunities – “business as usual” is not sustainable;
  • Wise application of technology is the factor that will make the single biggest difference to successfully addressing these challenges and opportunities;
  • If we get things right, human experience in just a few decades’ time will be very substantially better than it is today – for all people, all over the world;
  • However, there’s nothing inevitable about any of this;
  • Getting things right will require us becoming smarter and more effective than ever before – but, thankfully, that is within our grasp;
  • This is worth shouting about!

Footnote: Some people say that big political ideas are dangerous, and that a focus on effective political management, pursuing pragmatic principles, is far preferable to ideology.  I sympathise with this viewpoint, and share an apprehension of ideology.  But provided rationality remains at the forefront, and provided people remain open to discussion and persuasion, I see great value in vision and focus.

2 March 2010

Major new challenges to receive X PRIZE backing

Filed under: catalysts, challenge, futurist, Genetic Engineering, Google, grants, innovation, medicine, space — David Wood @ 7:16 pm

The X PRIZE Foundation has an audacious vision.

On its website, it describes itself as follows:

The X PRIZE Foundation is an educational nonprofit organization whose mission is to create radical breakthroughs for the benefit of humanity thereby inspiring the formation of new industries, jobs and the revitalization of markets that are currently stuck

The foundation can point to the success of its initial prize, the Ansari X PRIZE.  This was a $10M prize to be awarded to the first non-government organization to launch a reusable manned spacecraft into space twice within two weeks.  This prize was announced in May 1996 and was won in October 2004, by the Tier One project using the experimental spaceplane SpaceShipOne.

Other announced prizes are driving research and development in a number of breakthrough areas:


The Archon X PRIZE will award $10 million to the first privately funded team to accurately sequence 100 human genomes in just 10 days.  Renowned physicist Stephen Hawking explains his support for this prize:

You may know that I am suffering from what is known as Amyotrophic Lateral Sclerosis (ALS), or Lou Gehrig’s Disease, which is thought to have a genetic component to its origin. It is for this reason that I am a supporter of the $10M Archon X PRIZE for Genomics to drive rapid human genome sequencing. This prize and the resulting technology can help bring about an era of personalized medicine. It is my sincere hope that the Archon X PRIZE for Genomics can help drive breakthroughs in diseases like ALS at the same time that future X PRIZEs for space travel help humanity to become a galactic species.

The Google Lunar X PRIZE is a $30 million competition for the first privately funded team to send a robot to the moon, travel 500 meters and transmit video, images and data back to the Earth.  Peter Diamandis, Chairman and CEO of the X PRIZE Foundation, provided some context in a recent Wall Street Journal article:

Government agencies have dominated space exploration for three decades. But in a new plan unveiled in President Barack Obama’s 2011 budget earlier this month, a new player has taken center stage: American capitalism and entrepreneurship. The plan lays the foundation for the future Google, Cisco and Apple of space to be born, drive job creation and open the cosmos for the rest of us.

Two fundamental realities now exist that will drive space exploration forward. First, private capital is seeing space as a good investment, willing to fund individuals who are passionate about exploring space, for adventure as well as profit. What was once affordable only by nations can now be lucrative, public-private partnerships.

Second, companies and investors are realizing that everything we hold of value—metals, minerals, energy and real estate—are in near-infinite quantities in space. As space transportation and operations become more affordable, what was once seen as a wasteland will become the next gold rush. Alaska serves as an excellent analogy. Once thought of as “Seward’s Folly” (Secretary of State William Seward was criticized for overpaying the sum of $7.2 million to the Russians for the territory in 1867), Alaska has since become a billion-dollar economy.

The same will hold true for space. For example, there are millions of asteroids of different sizes and composition flying throughout space. One category, known as S-type, is composed of iron, magnesium silicates and a variety of other metals, including cobalt and platinum. An average half-kilometer S-type asteroid is worth more than $20 trillion.

Technology is reaching a critical point. Moore’s Law has given us exponential growth in computing technology, which has led to exponential growth in nearly every other technological industry. Breakthroughs in rocket propulsion will allow us to go farther, faster and more safely into space…

The Progressive Automotive X PRIZE seeks “to inspire a new generation of viable, safe, affordable and super fuel efficient vehicles that people want to buy”.  $10 million in prizes will be awarded in September 2010 to the teams that win a rigorous stage competition for clean, production-capable vehicles that exceed 100 MPG energy equivalent (MPGe).  Over 40 teams from 11 countries are currently entered in the competition.

Forthcoming new X PRIZEs

The best may still be to come.

It now appears that a series of new X PRIZEs are about to be announced.  CNET News writer Daniel Terdiman reports a fascinating interview with Peter Diamandis, in his article “X Prize group sets sights on next challenges (Q&A)“.

The article is well worth reading in its entirety.  Here are just a few highlights:

On May 15, at a gala fundraising event to be held at George Lucas’ Letterman Digital Arts Center in San Francisco, X Prize Foundation Chairman and CEO Peter Diamandis, along with Google founders Larry Page and Sergey Brin, and “Avatar” director James Cameron, will unveil their five-year vision for the famous awards…

The foundation …  is focusing on several potential new prizes that could change the world of medicine, oceanic exploration, and human transport.

The first is the so-called AI Physician X Prize, which will go to a team that designs an artificial intelligence system capable of providing a diagnosis equal to or better than 10 board-certified doctors.

The second is the Autonomous Automobile X Prize, which will go to the first team to design a car that can beat a top-seeded driver in a Grand Prix race.

The third would go to a team that can generate an organ from a terminal patient’s stem cells, transplant the organ [a lung, liver, or heart] into the patient, and have them live for a year.

And the fourth would reward a team that can design a deep-sea submersible capable of allowing scientists to gather complex data on the ocean floor

Diamandis  explains the potential outcome of the AI Physician Prize:

The implications of that are that by the end of 2013, 80 percent of the world’s populace will have a cell phone, and anyone with a cell phone can call this AI and the AI can speak Mandarin, Spanish, Swahili, any language, and anyone with a cell phone then has medical advice at the level of a board certified doctor, and it’s a game change.

Even more new X PRIZEs

Details of the process of developing new X PRIZEs are described on the foundation’s website.  New X PRIZEs are guided by the following principles:

  • We create prizes that result in innovation that makes a lasting impact. Although a technological breakthrough can meet this criterion, so do prizes which inspire teams to use existing technologies, knowledge or systems in more effective ways.
  • Prizes are designed to generate popular interest through the prize lifecycle: enrollment, competition, attempts (both successful and unsuccessful) and post-completion…
  • Prizes result in financial leverage. For a prize to be successful, it should generate outside investment from competitors at least 5-10 times the prize purse size. The greater the leverage, the better return on investment for our prize donors and partners.
  • Prizes incorporate both elements of technological innovation as well as successful “real world” deployment. An invention which is too costly or too inconvenient to deploy widely will not win a prize.
  • Prizes engage multidisciplinary innovators which would otherwise be unlikely to tackle the problems that the prize is designed to address.

The backing provided to the foundation by the Google founders and by James Cameron provides added momentum to what is already an inspirational initiative and a great catalyst for innovation.

24 February 2010

Grants available for online social entrepreneurs

Filed under: grants, innovation, sustainability — David Wood @ 10:44 am

Are you a UK-based online social entrepreneur?

That is – to quote from the website of UnLtd – are you someone with vision, drive, commitment and passion, who wants to use the Internet to change the world for the better?

If so, you could be eligible for one of more than 80 grants which UnLtd plan to distribute this year, as part of the “Better Net Awards” programme being managed by UnLtd and funded by the Nominet Trust.

I recently met with Analia Lemmo from UnLtd, who explained to me how the programme works.

Throughout 2010, the programme will be making awards, at two levels:

  • Level 1 is for awards of between £500 and £5,000 (expected average of £2,000) to startups;
  • Level 2 is for awards of up to £15,000, for people who have already put their idea into practice, and who now want to expand it.

The key criteria for people to receive one of these awards are that:

  • You must have a project in mind to use the Internet for social impact;
  • Your project must be sustainable, that is, the grant should enable you to move the project to a level where it won’t need additional grants to keep it running;
  • Your project should be run from the UK, and should have an impact on a community of people in the UK.

The process to apply for a grant is explained on the UnLtd website:

  • UnLtd run regular information sessions, at various locations around the UK;
  • After taking part in one of these sessions, you should decide whether to proceed to fill in an application form;
  • Candidates may then be interviewed to check details of the proposal;
  • The final decision is made by a board of trustees.

Projects should fall within the following range of areas:

  • Digital inclusion – encouraging and assisting more people to acquire an online presence;
  • Education about the Internet;
  • Improving the environment;
  • Improving healthcare;
  • Online safety for children.

In a press release, Nominet explained their goals in providing this funding:

Teaming up with UnLtd allows Nominet Trust to source often hard to reach entrepreneurial individuals and community groups around the UK, and support their efforts to create, develop and implement Internet-based projects that benefit society.

UnLtd will provide hands-on support and resources alongside awards of funding to individuals and small groups who are creating new projects that reflect the objectives of Nominet Trust. The projects will focus on the safe use of the Internet for social benefit purposes such as education and inclusion. All awards in the partnership programme will be jointly approved by Nominet Trust and UnLtd.

Cliff Prior, chief executive at UnLtd, says: “UnLtd has a history of finding fantastic people with talent and a passion to transform the world in which they live, and supporting them to become successful social entrepreneurs – over 16,000 people to date. The Nominet Trust awards programme will enable UnLtd to build on this success by helping a new wave of people create social benefit through the Internet.”

Examples of previous winners of UnLtd awards are highlighted on the UnLtd website, and include:

  • Action for Sustainable Living, which supports people to live more sustainably in the context of their local community, so that local sustainability issues and priorities are tackled and resolved locally;
  • MOTIV – which works with primary and secondary schools to improve attendance and raise children’s aspirations;
  • The Big Green Idea – a charity dedicated to showing people how sustainable living can be easy, healthy, inexpensive and fun;
  • BabyGROE – the first free UK-wide magazine to inform new parents on ways to raise children which save money and protect the environment;
  • The Calma project – which provides support and training to individuals, families and carers who are affected by the challenging behaviour associated with autism, Asperger syndrome, learning difficulties, Attention Deficit Hyperactivity Disorder and related conditions.

In contrast to the general awards available from UnLtd, the Better Net Awards programme will have a special emphasis on Internet-based solutions.  If you think you could qualify – or if you think you could be a useful partner who can help UnLtd to identify potential award winners – then please follow the contact links on UnLtd’s site.

22 February 2010

Global hotspots for mobile innovation

Filed under: innovation, startups — David Wood @ 1:42 pm

In which parts of the world can we find the most innovative developments in mobile products?

One way to answer that question is to look at the 20 finalists of the recent “Mobile Premier Awards” event.  Entries to this contest came from all over the world, nominated by local Mobile Monday chapter organisations.  An international group of judges whittled down the list of local nominations to a group of 20 finalists.  Here’s the geographical breakdown of the final 20:

  • 1 from South America (Bogota)
  • 2 from North America (New York and Silicon Valley)
  • 2 from India (Chennai and New Delhi)
  • 1 from the Middle East (Tel Aviv)
  • 5 from Nordic and Baltic countries (Copenhagen, Estonia, Lithuania, Oslo, and Stockholm)
  • 1 from Balkan countries (Slovenia)
  • 8 from Western Europe (Amsterdam, Austria, Barcelona, Berlin, Edinburgh, London, Milan, and Munich).

On the day of the final itself, out of these 20 companies, prizes were awarded to:

But another way to answer this question is to look at the view of the influential publication Fast Company.  They’ve just published a list of the world’s ten most innovative companies in the mobile industry.

What kind of geographical breakdown would you expect in Fast Company’s list?

90% of the list are companies headquartered in the USA:

  • Google, Apple, Amazon, Ford, Evernote, Qualcomm, Clearwire, Foursquare, Intermap.

Only one entry on the list is headquartered outside the USA:

  • HTC.

What should we make of this fact?  Here are three ways to think about it:

  1. Fast Company, immersed in activities in the US, is suffering from myopia (short-sightedness)
  2. It’s smart marketing by Fast Company.  As Matt Millar suggested: Fast Company is a US mag, read by US people. So tell them the US is great, they buy more 🙂
  3. The lion’s share of the greatest mobile innovation really is happening in the USA, and the rest of the world should wake up and recognise that fact.

If the third explanation is the right one, perhaps I should seek my next employment in the US.

I’m reminded of the marvellously thought-provoking picture produced a couple of years back by Rubicon Consulting, of how Internet companies view the mobile industry:

Footnote: Thanks to Petteri Muilu for drawing my attention to the Fast Company list.

21 January 2010

Selecting the most exciting mobile startups

Filed under: Barcelona, innovation, Mobile Monday, startups — David Wood @ 3:32 pm
  • Study the online details of each of 50 attractive mobile startup companies;
  • Identify, from this list, the 10 that are “the best of the best”.

That was the challenge posed to me earlier this week by Rudy de Waele, the Simon Cowell of the mobile industry.

As in previous years, the Monday of Mobile World Congress week – when the mobile industry congregates in Barcelona – will feature a Mobile Premier Awards event.  This event will feature a number of quickfire pitches by companies selected by Mobile Monday chapters worldwide.  These companies are competing for a number of awards, including the Mobile Premier Award in Innovation.

By this stage in the contest, there are 50 candidates.  Each has been selected as the result of a process in one of the Mobile Monday chapters.  We’re now at the stage of reducing this list to 20, to avoid the event in Barcelona stretching on too long.  Responsibility for this reduction falls to a group of people described as “an online jury of industry experts”.

I was honoured to be asked to take part in this jury, but at the same time I was apprehensive.  It’s a considerable responsibility to look at the information about each of 50 companies, and to find the most deserving 10 from that list.  (Each jury member picks 10.  The organisers aggregate the votes from all 25 jury members, and the top-scoring 20 companies are invited to make a pitch at the event in Barcelona.)
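The aggregation step described above – each of the 25 jurors approving 10 candidates, with the 20 most-approved companies going through – is in effect a simple approval-voting tally.  A minimal sketch of that mechanism (the company names, and the arbitrary tie-breaking, are illustrative only, not details of the actual process):

```python
from collections import Counter

def select_finalists(ballots, num_finalists=20):
    """Aggregate approval-style ballots: each juror submits a set of picks,
    and the companies named by the most jurors go through."""
    tally = Counter()
    for picks in ballots:
        tally.update(picks)
    # Note: most_common() breaks ties arbitrarily; a real jury process
    # would need an explicit tie-breaking rule.
    return [company for company, _ in tally.most_common(num_finalists)]

# Example: three jurors, each approving three candidates from a small pool
ballots = [
    {"Alpha", "Beta", "Gamma"},
    {"Beta", "Gamma", "Delta"},
    {"Gamma", "Delta", "Epsilon"},
]
print(select_finalists(ballots, num_finalists=2))
```

Here “Gamma” is approved by all three jurors, so it tops the tally; the second slot goes to one of the two-vote candidates (“Beta” or “Delta”), which is where a tie-breaking rule would matter in practice.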

The guidelines to jury members asked that we evaluate each candidate based on:

  • originality, creativity and innovation;
  • technical and operational feasibility;
  • economic and financial viability.

I created a spreadsheet for my own use and started following links Rudy provided me to entries for each of the companies on dotopen.com.  In turn these entries pointed to other info, such as the companies’ own websites.

As I anticipated, the selection process was far from easy!  I quickly found 10 companies that I thought definitely deserved to attend Barcelona – and I had only searched about one third of the way through the list of nominees…

Occasionally I thought that a particular entry looked comparatively uninteresting (for example, that it was a “Me too” offering).  But when I clicked through to the company’s own website and started looking in more detail at what they had done, I would think to myself: “Mmm… this startup has a strong proposition after all”.

However, by close of play yesterday I had made my selection.  It’s inappropriate for me to publicly mention any companies at this time.  But I will say that I expect the event in Barcelona will give strong evidence of some companies executing well on some very interesting business ideas.

28 December 2009

Alternative perspectives on 21C technology innovation and life

Filed under: futurist, innovation — David Wood @ 6:26 pm

While browsing through Andrew Maynard’s stimulating blog “2020 science: Providing a clear perspective on developing science and technology responsibly“, I’ve run across a fascinating series of articles, authored by 10 different guest bloggers, entitled

Technology innovation, life, and the 21st century – ten alternative perspectives

Andrew introduces the series as follows:

Life in the 21st century is going to depend increasingly on technology innovation.  Governments believe this.  Industry believes this.  Scientists believe this.  But is technology innovation really the solution to every challenge facing humanity, or have we got hooked on an innovation habit so deeply we don’t even see the addiction?  And even if it is important – essential even – who decides which innovations are nurtured and how they are used?

I must confess I’m a staunch believer in the importance of technology innovation.  But I was reminded recently that not everyone sees the world in the same way, and that there are very different but equally valid perspectives on how science and technology should be used within society.

As a result, I decided to commission ten guest blogs on technology innovation from people working for, associated with or generally reflecting the views of Civic Society groups.  The aim was twofold – to expose readers to perspectives on technology innovation that are sometimes drowned out in mainstream conversations, and to give a sense of the breadth of opinions and perspectives that are often lumped under the banners of “civic society” or “Non Government Organizations”…

Before I pull out some comments from these ten guest blogs, let me declare my own bias.

Briefly, it is my view that

  • Humanity in the 21st century is facing both enormous challenges and enormous opportunities;
  • Wise application of technology is the factor that will make the single biggest difference to successfully addressing these challenges and opportunities;
  • If we get things right, human experience all over the world in just a few decades’ time will be very substantially better than it is today;
  • Technology can be accelerated by commercial factors, such as the operation of free markets, but these forces need review, supervision, and regulation, to increase the likelihood that the outcomes are truly beneficial;
  • Technology can also be accelerated by supporting, educating, encouraging, inspiring, and enabling larger numbers of skilled engineers worldwide, to work in open and collaborative ways;
  • At the same time as we involve more people in the development of technology, we should be involving more people in informed open deep debates about the management of the development of technology.

In other words, I am a strong optimist about the joint potential for technology, markets, engineers, and open collaboration, but I deeply fear allowing any one of these factors to become overly dominant.

Any view, for example, that “markets are always right” or that “technology will always be beneficial”, is a dangerous simplification.  The application of technology will only be “wise” (to return to the word I used above) if the powerful engines of market economics and technology development are overseen and directed by suitable review bodies, acting on behalf of society as a whole.  The manner of operation of these review bodies, in turn, needs to be widely debated.  In these debates, there can be nothing sacrosanct or above question.

The essays in the guest series on the 2020 science blog make some good contributions to these debates.   I don’t agree with all the points made, but the points all deserve a hearing.

The series starts with an essay “Biopolitics for the 21st Century” written by Marcy Darnovsky, Associate Executive Director of the Center for Genetics and Society.  Here are some extracts:

One challenge we face … is a tendency toward over-enthusiasm about prospective technologies. Another is the entanglement of technology innovation and commercial dynamics. Neither of these is brand new.

Back in the last century, the 1933 Chicago World’s Fair took “technological innovation” as its theme and “A Century of Progress” as its formal name. Its official motto was “Science Finds, Industry Applies, Man Conforms.” The slogan shamelessly depicts “science” and “industry” as dictator – or at least drill sergeant – of humanity. It anoints industrial science as a rightful decision-maker about human ends, and an inevitable purveyor of societal uplift.

Today the 1933 World’s Fair slogan seems altogether crass. But have we earned our cringe? We’d like to think that we’re more realistic about science and technology innovations. We want to believe that, in some collective sense, we’re in control of their broad direction. But are we less giddy about the techno-future now than we were back then?  Does technology innovation now serve human needs rather than the imperatives of commerce? Have we devised social and cultural innovations for shaping new technologies – do we have robust democratic mechanisms that encourage citizens and communities to participate meaningfully in decisions about their development, use and regulation?

I’m afraid that the habits of exaggerating the benefits of new technologies and minimizing their unwanted down sides are with us still…

Technology innovation is increasingly dominated by large-scale commercial imperatives.  Over the past century, and ever more so since the 1980 Bayh-Dole Act (an attempt to spur innovation by allowing publicly funded researchers to profit from their work), innovators have become scientist-entrepreneurs, and universities something akin to corporate incubators.

Commercial dynamics have become particularly influential in the biosciences. It’s hard to imagine any scientist today responding as Jonas Salk did in 1955, when he said with a straight face that “the people” own the polio vaccine. “There is no patent,” he told legendary news broadcaster Edward R. Murrow. “Could you patent the sun?”

Of course, entrepreneurial activity in technology and science often delivers important benefits. It can bring new discoveries and techniques to fruition quickly, and make them available rapidly. Some recent commercial technologies, most notably in digital communication and computing, are stunning indeed.

But how far have we come from the slogan of the 1933 World’s Fair? Technology developers still routinely present their plans either as “inevitable” or as crucial for economic growth. As for the rest of us, we have few opportunities to deliberate – especially as citizens, but also as consumers – about the risks as well as the benefits of technology innovations. Twenty-first century societies and communities too often wind up conforming to new technologies rather than finding ways to shape their goals and direction…

Georgia Miller, who coordinates Friends of the Earth Australia’s Nanotechnology Project, wrote an essay “Beyond safety: some bigger questions about new technologies” for the series.  The essay includes the following:

The promise that a given new technology will deliver environmentally benign electricity too cheap to meter, end hunger and poverty, or cure disease is very seductive. That is why the claims are made with many emerging technologies – nuclear power, biotechnology and nanotechnology, to name a few.

However history shows that such optimistic predictions are never achieved in reality. In addition to benefits, new technologies come with social, economic and environmental costs, and sometimes significant political implications.

Still, when it comes to public communication or policy making about nanotechnology, we’re often presented with the limited notion of weighing up predicted ‘benefits’ versus ‘risks’…

This framing ignores the broader costs and transformative potential of new technologies. It suggests that if we can only make nanotechnology ‘safe’, its development will necessarily deliver wealth, health, social opportunities and even environmental gains.

Ensuring technology safety is clearly very important. But simply assuming that ‘safe’ technology will deliver nothing but benefits, and that these benefits will be available to everyone, is – to put it mildly – quite optimistic.

To evaluate whether or not new technologies will help or hinder efforts to address the great ecological and social challenges of our time, we need to dig a little deeper…

Our experience also teaches us that environmentally or socially promising technologies will not necessarily be adopted, especially if they challenge the status quo. The government of Australia, one of the sunniest countries on earth, has pledged billions of dollars to cushion the coal industry from the effects of a proposed carbon trading system, while offering scant support to the fledgling solar energy sector.

There is a tendency to focus on the potential of new technologies to address our most pressing problems, rather than to seek better deployment of existing technologies, better design of existing systems, or changes in production and consumption. This reflects a preference to avoid systemic change. It also reflects an unfounded optimism that the ‘solution’ lies just over the horizon.

But sometimes ensuring better deployment of existing technologies is the most effective way to deal with a problem. Just as wider accessibility of existing drugs and medical treatments could prevent a huge number of deaths world-wide, improving urban storm water harvesting and re-use, housing insulation and mass transit public transport could go a long way to reducing our ecological footprint – potentially at a lower cost and at lower risk than mooted high tech options.

If evaluating the implementation or performance failures of previous technologies reveals economic or social obstacles or constraints, it’s probably these factors that warrant our attention. There is no reason to believe they will magically disappear once new technologies arrive…

Geoff Tansey of the Food Ethics Council weighs into the debate with his essay, “Innovation for a well-fed world – what role for technology?

Andrew [Maynard] posed the question, “How should technology innovation contribute to life in the 21st century?”

For me, working on creating a well-fed world, the short answer is: in a way that supports a diverse, fair and sustainable food system in which everyone, everywhere can eat a healthy safe, culturally appropriate diet. For that to happen, we need a change of direction in which the key innovations needed are social, economic and political, not technological. And the question is:  what kind of technology, developed by whom, for whom, will help; who has what power to decide on what to do and to control it, who carries the risks and gets the benefits.

Take the debate on GM technology, for example. We in the Food Ethics Council … argue that instead of asking, ‘how can GM technology help secure global food supplies’, we need to ask ‘what can be done – by scientists but also by others – to help the world’s hungry?’…

Remember, too, that you do not have to have a correct scientific understanding of something to develop technologies that work, but sometimes we need a revolution in the history of science to conceive of new ways of engineering things – from Einstein’s insight that matter could be converted to energy, and Watson and Crick’s discovery of DNA and our understanding that life – and information – is digital and can be manipulated and re-engineered as such. That leads to new technological possibilities, as does nano-tech and synthetic biology – but all new technologies are generally over-hyped and invariably have unintended consequences. Indeed, global warming is the unintended consequence of a fossil-fuel driven industrial revolution…

In her essay, “Stop and Think: A Luddite Perspective“, Jennifer Sass, Senior Scientist at the Natural Resources Defense Council, makes some pained comments about technology and progress, before raising some specific concerns about nanotechnology:

Is there a role for technology in progressive social movements? Sure.

It wasn’t until the mechanization of cotton harvesting in the 1980’s that Missouri enacted compulsory education laws. New technology meant children were no longer needed in the field.

Lead wasn’t forced out of auto fuel when it was shown to destroy kid’s brains (known by the 1920s). It was removed when it was found to destroy catalytic converters introduced in the mid-1970’s. Technology not only saved future generations from leaded gasoline, but it reduced other harmful pollution from auto exhaust.

Nano-scale chemicals, intentionally designed to take advantage of unique properties at the small scale, are already offering social benefits, but at what costs?

Traditional treatment of hazardous waste sites is predominantly done with technologies such as carbon adsorption, chemical precipitation, filtration, steam, or bioremediation. Nanoremediation (can you believe there is already a new word for this?) can mean treatment with nanoscale metal oxides, carbon nanotubes, enzymes, or the already popular nanoscale zero-valent iron. The advantage is that the nanoparticles are more chemically reactive and so may be designed to be more effective with less material…

But, what happens to the nanoparticles in the treated groundwater once they’ve completed their intended task? Do they just go away? Poof?

Carbon nanotubes are 100 times stronger than steel and six times lighter. Research to weave them into protective clothing is already underway, although nothing is on the market yet. A nano-carbon vest could make our soldiers bullet-proof and stab-proof while remaining lightweight.

But, what happens when the nanotubes are freed from the material, such as during the manufacturing of the textiles, fabrication of the clothing, or when it is damaged or destroyed in an explosion? Breathable nanotubes can be like asbestos fibers, causing deadly lung diseases.

If nano-scale elements are used extensively in electronics and computers, does this mean that most of the hazardous exposures associated with manufacturing and end-of-life stripping will fall to workers in the global south, whereas most of the advantages of improved technology will be reaped by the global north?

I’m not against new technologies per se. In fact, as a scientist I favor innovation. I love cool new stuff. But, will it make jobs more hazardous? Will it contaminate the environment? Will it contribute to social and economic injustices by distributing the risks and benefits unequally?…

Richard Owen, Chair in Environmental Risk Assessment at the University of Westminster, and Co-ordinator of the UK Environmental Nanoscience Initiative, raises some dark worries in his essay “A new era of responsible innovation“:

In 1956 one of my favourite films hit the big screen: a classic piece of science fiction called Forbidden Planet. It tells the story of a mission in the 23rd century to a distant planet, to find out what has happened to an earlier scientific expedition. On arrival the crew encounter the sole survivors, Dr Morbius and his daughter: the rest of the expedition has mysteriously disappeared. Morbius lives in a world of dazzling technology, the like of which the crew have never seen.

He had discovered the remnants of a highly advanced civilisation, the Krell, and an astonishing machine they had developed, the Plastic Educator. This could radically enhance their intellect, allowing them to materialise any thought, to develop new and wondrous technologies. Morbius had done the same. But something terrible had happened to the Krell: not only did the Plastic Educator develop their intellect, it also unwittingly heightened the darker sides of their subconscious minds, ‘Monsters from the Id’. In one night of savage destruction they were taken over by their own dark forces, leaving their advanced society extinct.

Now I’m not going to tell you how it ends; you’ll have to watch the film yourself. And it would be fanciful to say that we are heading for the same fate as the Krell. But it is fair to say that our relationship with innovation can at times be troublesome, with consequences that can on occasion be global in nature.

You may have heard, for example, of a clever financial innovation called ‘securitisation’: you may also know that this has helped leave a legacy of toxic debt that all of us will play a part in cleaning up. This is dwarfed by the legacy that our relationship with fossil fuel burning technology will leave not only for our children, but also for their grandchildren. These examples show that it is important that we innovate, to drive our economy, to improve our lifestyles and wellbeing, to find solutions to the big issues we face – but it is critical that we innovate responsibly. And public demands to be responsible, to avoid excessive risks, go beyond banks: they also apply to research.

In his inaugural speech in January, Barack Obama called for a ‘new era of responsibility’. I want to know what this new era will look like. For a number of years I worked for a regulator, the Environment Agency. I discovered that regulation is an incredibly powerful tool to promote responsible innovation, and there is no doubt that it will continue to play an important role. Development of policies and regulation, for new technologies for example, tends to be ‘evidence based’ – that is, evidence is acquired to make the case for amending or bringing in new legislation, and here the research councils play an important role.

I’m fascinated by how this process works. Take for example nanotechnology, which has been described as the first industrial revolution of the 21st century. It’s small stuff, but big business, taking advantage of the fact that materials at the nanoscale (a billionth of a metre) can have fundamentally different properties compared to other (perhaps larger) forms of the same material. So while carbon nanotubes resemble tiny rolled-up sheets of graphite, they behave very differently – indeed, they have been called ‘the hottest thing in physics’.

Nanotechnology has a projected market value of many billions of pounds, potentially providing important solutions for renewable energy, healthcare, and the environment. But if these nanomaterials behave so differently, do they present greater risks, to the environment or to human health? If so, do they need to be regulated differently? How do we balance economic growth with preventing harm to people and the environment?…

I’m convinced there is a way to link innovation with responsibility more efficiently, to make it more anticipatory. And I’ve been struck by how willing and open the people I have worked with at NERC, EPSRC and ESRC have been to consider these approaches. Maybe there is a silver lining in the black cloud of the recent financial chaos; maybe we are learning that responsible innovation is sustainable innovation, that it’s a good thing, and that a commitment to it will help build resilient and responsible economies. Maybe Barack Obama was right, maybe we are about to enter a new era of responsibility. I hope so.

The final essay in the series is “21st Century Tech Governance? What would Ned Ludd do?“, by Jim Thomas of the ETC Group:

What if we could drag emerging technologies into a modern court of public deliberation and democratic oversight? What might that look like?

I’ve been turning over that question for about 15 years now while active in global debates on emerging technologies – particularly GM Crops, Nanotechnology, Synthetic Biology and Geo-engineering – debates in which I’ve encountered the term Luddite, meant as a slur, more times than I care to count. Language like this tumbles carelessly out of history… but I find the parallels striking. Once again we are in the early phases of a new industrial revolution. Once again powerful technologies (Converging Technologies) are physically remaking and sometimes disintegrating our societies. Those of us in civil society carrying out bit-part campaigns, issuing press releases and launching legal challenges are in a sense attempting to drag technology governance away from the darkness of narrow expert committees and into the sunny court of public deliberation for a broader hearing… It seems a perfectly reasonable and democratic urge. But surely there’s a better and more systematic way to do that?

So far I’ve found three sets of proposals that might begin to put technology oversight into the open and back in the hands of a wider public:

1.) Public Engagement: Citizens Juries, Knowledge exchanges, People’s Commissions…

2.) Global Oversight: ICENT.

ICENT stands for the International Convention for the Evaluation of New Technologies – a UN-level body for foresighting emerging technology trends and then applying a wide-ranging assessment process that would consider the social, environmental and justice implications of the innovation being scrutinised. It doesn’t exist yet, and maybe it never will, but at ETC Group we have dedicated a lot of time to imagining what such a body could look like (we even have some nifty organograms – see pages 36-40 of this). For example, there would be bodies scanning the technological horizon and others making a rough reckoning of whether a new technology needed a strong oversight framework or not…

3.) Popular assessment : Technopedia?

The only governance and regulations that work are those where somebody is paying attention – so rather than hide technology assessment in rarefied committees, why not hand it to the wisdom of the crowd? Wikipedia may not be the most perfectly accurate source of all knowledge, but it is comprehensive, up to date and flexible, and it provides an interesting model. Indeed, Wikipedia entries are often not a bad place to start if you want to suss out the societal and environmental issues raised by new technologies. How about a dedicated wiki site for collaborative monitoring and judging of emerging technologies? Such a site could be structured so that, unlike the halls of power, marginal voices have a space and are welcome…

It’s good to see this range of spirited and thoughtful contributions to the debate about the future of technological innovation. Of course, this is just the tip of a very large iceberg of discussion, happening all over the Internet. The really hard question, perhaps, is: what is the optimal method and location for this debate? Jim Thomas’s suggestion of a new wiki has some merit – provided it could become an authoritative and definitive wiki on emerging technologies, one that rises above the vast crowd of existing websites. Is there already such a wiki in existence?
