dw2

21 June 2016

5G World Futurist Summit

Filed under: disruption, Events, futurist — David Wood @ 11:30 pm

Intro slide

On Wednesday next week, 29th June, it will be my pleasure to chair the Futurist Summit which is one of the free-to-attend streams happening as part of the 5G World event taking place at London’s Olympia.

You can read more about the summit here, and more about the 5G World event here.

The schedule for the summit is as follows:

11:00 Introduction to the Futurist Summit
David Wood – Chair, London Futurists & Principal, Delta Wisdom

11:30 Education 2022 – MOOCs in full use, augmented by AIs doing marking and assessment-setting
Julia Begbie – Deputy Director of Studies – KLC School of Design

12:00 Healthcare 2022 – Digital healthcare systems finally fulfilling the promise that has long been expected of them
Avi Roy – Biomedical Scientist & Research Fellow at the Centre for Advancing Sustainable Medical Innovation (CASMI) – Oxford University

12:30 Finance 2022 – Anticipating a world without physical cash, and in many cases operating without centralised banks
Jeffrey Bower, Digital Finance Specialist, United Nations

13:00 Networking Lunch

14:00 Reinventing urban mobility for new business strategies…self-driving cars and beyond
Stephane Barbier – CEO – Transpolis

14:30 The Future of Smart Cities
Paul Copping – Smart City Advisor – Digital Greenwich, Royal Borough of Greenwich

15:00 The Future of Computer Security and ‘Cybercrime’
Craig Heath, Director, Franklin Heath 

15:30 What happens when virtual reality experiences become more engaging than those in the real world?
Steve Dann, Founder & CEO, Amplified Robot 

16:00 End of Futurist Summit

Speakers slide

Each of the 30-minute slots in the Summit will include a presentation from the speaker, followed by audience Q&A.

If you’re in or near London that day, I hope to see many of you at the Summit!

Note that, although the Futurist Summit is free to attend, you need to register in advance for a Free Expo Pass, via the 5G World conference registration page. You’ll probably see other streams at the event that you would also like to attend.

Stop press: Members of London Futurists can obtain a 50% discount on the price of a full pass to 5G World – useful if you wish to attend other aspects of the event – by using the Priority Code Partner50 on the registration webpage.


10 June 2016

Lessons from Underground Airlines

In the grand sweep of history, how much difference can one person make?

For example, consider the influence of Abraham Lincoln, 16th President of the United States. What alternative history might have arisen if that great statesman had been felled by an assassin’s bullet, not (as in actual history) in 1865, after the conclusion of the American Civil War, but much earlier in his presidency?

That alternative scenario provides the backdrop to “Underground Airlines” by Ben H. Winters – a novel that speculates, masterfully, about the trajectory of an alternative history.

Underground Airlines

Imagine if the early martyrdom of Lincoln, before any civil war could start, had precipitated a colossal long-standing compromise in the United States, with northern anti-slavery states warily coexisting with southern pro-slavery states, not just for a few more years, but for long decades – indeed, right up until the present day. Imagine if the “underground railroad” rescue mechanism of safe houses and secret routes to transport fugitive escaped slaves, which existed in actual history from the 17th to the 19th century, persisted in modified, modernised form right up until the twenty-first century – now known as “underground airlines”, the words which form the title of Winters’ book.

Imagine if the latest features of modern society – such as GPS tracking and ubiquitous mobile computers – coexisted with industrial-scale slavery in the “Hard Four” recalcitrant states of the deep south. And, worst of all, imagine an extension, right up until today, of the massive double think (self-deception) in which good people persuade themselves that the whole system is acceptable. Imagine the double think with which these bystanders view fugitive slaves on the run as fair game, to be hunted by trackers from the south acting on behalf of massive slave-holding conglomerates.

Winters’ book features double think writ large. Characters who, to outward appearances, seek to help runaway slaves are secretly assisting the trackers – and allow themselves to feel comfortable with that double think. They accept the brute facts of slavery, and make peace (of a sort) with their personal accommodation to that worldview.

Personalities from actual history intrude, under the skilful choreography of the writer, into the alternative Underground Airlines history. Shunned by much of the rest of the industrialised world, the alternative America occupies a relative backwater on the global stage. The FDR and LBJ mentioned in quiet moments in the narrative wielded an impact far more local, in Underground Airlines history, than in actual history. A reference to a recent “gulf war” turns out to have nothing to do with the Middle East.

More than clever plotting

Winters’ book deserves praise for its clever plotting. Revelations of character motivations come as surprises, but not as jolts: the reader is gradually made aware of a bigger picture with its own horrible logic. It all adds up to gripping reading.

But more than that: Underground Airlines deserves praise for its astuteness in recognising that there was nothing inevitable about the abolition of slavery. The circumstances that we nowadays find overwhelmingly objectionable – the “Inhuman Bondage” described at length by real-world historian David Brion Davis in his epic account of the rise and fall of New World slavery – could be seen by otherwise admirable men and women as necessary, inevitable parts of a way of life that had many redeeming positive aspects. These apologists were wrapped in a set of perceptions – their “accepting slavery” paradigm – which prevented them from acknowledging the full awfulness of bound servitude. Despite their intelligence, their thinking was constrained. Despite the kindness that lay in their hearts, there were marked limits to their compassion.

Inhuman Bondage

I came across the work of David Brion Davis in the course of researching my own recently published book, The Abolition of Aging. Here’s an extract from near the end of my book:

The analysis by Davis makes it clear that:

  • The abolition of slavery was by no means inevitable or predetermined
  • There were strong arguments against the abolition of slavery – arguments raised by clever, devout people in both the United States and the United Kingdom – arguments concerning economic well-being, among many other factors
  • The arguments of the abolitionists were rooted in a conception of a better way of being a human – a way that avoided the harsh bondage and subjugation of the slave trade, and which would in due course enable many millions of people to fulfil a much greater potential
  • The cause of the abolition of slavery was significantly advanced by public activism – including pamphlets, lectures, petitions, and municipal meetings.

With its roots in the eighteenth century, and growing in momentum as the nineteenth century proceeded, the abolition of slavery eventually became an idea whose time had come – thanks to brave, smart, persistent activism by men and women with profound conviction.

With a different set of roots in the late twentieth century, and growing in momentum as the twenty-first century proceeds, the abolition of aging can, likewise, become an idea whose time has come. It’s an idea about an overwhelmingly better future for humanity – a future that will allow billions of people to fulfil a much greater potential. But as well as excellent engineering – the creation of reliable, accessible rejuvenation therapies – this project will also require brave, smart, persistent activism, to change the public landscape from one hostile (or apathetic) to rejuveneering into one that deeply supports it.

My claim in The Abolition of Aging is that most of us accept a terrible double think. We avidly support research against diseases such as cancer, dementia, and heart failure. We are aware of the destructive nature of all these diseases. But we shy away from research into the main underlying escalator of these diseases – the factor that makes these diseases more likely and (when they occur) more serious. This factor is biological aging – namely, the gradual deterioration of our molecular, cellular, and organic systems. We’re too ready to accept biological aging as a given.

We say it would be good if people could avoid being afflicted by cancer, dementia, or heart failure. We advocate people taking steps to decrease the chances of these diseases – for example, not to spend too much time under the direct sun, unprotected. But we tell ourselves that it’s somehow natural (and therefore somehow admirable) that biological aging accelerates in our bodies. So we acquiesce. We accept a deadly compromise.

The Abolition of Aging seeks to overturn that double think. It argues that rejuvenation is a noble, highly desirable, eminently practical destiny for our species – a “Humanity+” destiny that could, with sufficient focus and organisation, be achieved within just one human generation from now. Rejuvenation – the periodic reversal of the accumulation of significant damage at our molecular, cellular, and organic levels – can lead to a rapid decline in deaths from diseases of old age, such as cancer, dementia, heart failure, and lots more. Despite the implications of this change for our economic and social systems, this is an overwhelming good, which we should welcome wholeheartedly.

I’m happy to report that The Abolition of Aging has already featured as the #1 bestseller in the Gerontology section of Amazon UK.

Gerontology bestsellers UK

Next steps

Let’s return to the question from the start of this blogpost: In the grand sweep of history, how much difference can one person make?

We can’t all be Abraham Lincoln. But as I review in the final sections of my book, there’s a lot that each one of us can do, to tilt upwards the probability that successful rejuvenation therapies will be widely available by 2040. This includes steps to:

  1. Strengthen communities that are working on at least parts of the rejuveneering project
  2. Improve our personal understanding of aspects of rejuveneering – the science, roadmaps, history, philosophy, theories, personalities, platforms, open questions, and so on – and help to document aspects of that better understanding, by creating or editing knowledgebases or wikis
  3. Become involved with marketing of one sort or another
  4. Undertake original research into any of the unknowns of rejuveneering; this could be part of formal educational courses, or it could be a commercial R&D undertaking; it could also be part of a decentralised activity, in the style of “citizen science”
  5. Provide funding to projects that we judge to be particularly worthwhile.

Our contributions are likely to be more significant when they connect into positive efforts that others are already making. For example, I’m impressed by the activities of the Major Mouse Testing Program (MMTP), which you can read about here. I’ve just made a contribution to their crowdfunding campaign, and I encourage you to consider doing the same.

25 May 2016

The Abolition of Aging – epublished

“The Abolition of Aging” cover

I’m happy to report that my new book was epublished today, for Amazon Kindle. It’s “The Abolition of Aging: The forthcoming radical extension of healthy human longevity”.

You can find it on Amazon US, Amazon UK, …

It’s not a book about reprogramming our (silicon-based) devices – the kind of thing that used to be on my mind in my smartphone industry days. Instead, it’s about reprogramming our biology.

My reasons for writing this book are contained in its foreword. For convenience, I append a copy of the foreword at the end of this blogpost.

Physical copies of the book should be available from some time next month, for readers who prefer atoms to bits. I am planning to create an audio version too.

You can find more details about the book on its own website:

  • Advance praise, from people who have read pre-publication copies
  • The book’s description and dedication
  • An expanded table of contents
  • A community page, for further information about topics covered in the book.

If anyone has comments or queries about anything they read in the book, they’re welcome to raise them as responses to this blogpost.

Foreword

(This content is part of the introductory material of the book “The Abolition of Aging”.)

Within our collective grasp dwells the remarkable possibility of the abolition of biological aging.

It’s a big “if”, but if we decide as a species to make this project a priority, there’s around a 50% chance that practical rejuvenation therapies resulting in the comprehensive reversal of aging will be widely available as early as 2040.

People everywhere, on the application of these treatments, will, if they wish, stop becoming biologically older. Instead, again if they wish, they’ll start to become biologically younger, in both body and mind, as rejuvenation therapies take hold. In short, everyone will have the option to become ageless.

Two objections

The viewpoint I’ve just described is a position I’ve reached following extensive research, carried out over more than ten years. My research has led me to become a strong supporter of what can be called “the rejuveneering project”: a multi-decade cross-disciplinary endeavour to engineer human rejuvenation and thereby enable the choice to abolish aging.

But when I mention this viewpoint to people that I meet – as part of my activity as a futurist, or when I catch up with my former colleagues from the smartphone industry – I frequently encounter one of two adverse reactions.

First, people tell me that it’s not possible that such treatments are going to exist in any meaningful timescale any time soon. In other words, they insist that human rejuvenation can’t be done. It’s wishful thinking to suppose otherwise, they say. It’s bad science. It’s naively over-optimistic. It’s ignorant of the long history of failures in this field. The technical challenges remain overwhelmingly difficult.

Second, people tell me that any such treatments would be socially destructive and morally indefensible. In other words, they insist that human rejuvenation shouldn’t be done. It’s essentially a selfish idea, they say – an idea with all kinds of undesirable consequences for societal harmony or planetary well-being. It’s an arrogant idea, from immature minds. It’s an idea that deserves to be strangled.

Can’t be done; shouldn’t be done – in this book, I will argue that both these objections are profoundly wrong. I’ll argue instead that rejuvenation is a noble, highly desirable, eminently practical destiny for our species – a “Humanity+” destiny that could be achieved within just one human generation from now. As I see it, the abolition of aging is set to take its place on the upward arc of human social progress, echoing developments such as the abolition of slavery, the abolition of racism, and the abolition of poverty.

It turns out that the can’t/shouldn’t objections are interlinked. They reinforce each other. It’s often because someone thinks an effort is technically impossible that they object to any time or finance being applied to it. It would be much better, they say, to apply these resources to other philanthropic causes where real progress is possible. That, allegedly, would be the moral, mature thing to do. Conversely, when someone’s moral stance predisposes them to accept personal bodily decline and death, they become eager to find technical reasons that back up their decision. After all, it’s human nature to tend to cherry pick evidence that supports what we want to believe.

Two paradigms

A set of mutually reinforcing interlinked beliefs is sometimes called a “paradigm”. Our paradigms guide us, both consciously and unconsciously, in how we see the world, and in the kinds of projects we deem to be worthwhile. Our paradigms filter our perceptions and constrain our imaginations.

Changing paradigms is hard work. Just ask anyone who has tried to alter the opinion of others on contentious matters such as climate change, gun control, regulating the free market, or progressive taxation. Mere reason alone cannot unseat opinions on such topics. What to some observers is clear and compelling evidence for one position is hardly even noticed by someone entrenched in a competing paradigm. The inconvenient evidence is swatted away with little conscious thought.

The paradigm that accepts human bodily decline and aging as somehow desirable has even deeper roots than the vexatious political topics mentioned in the previous paragraph. It’s not going to be easy to dislodge that accepting-aging paradigm. However, in the chapters ahead, I will marshal a wide range of considerations in favour of a different paradigm – the paradigm that heartily anticipates and endorses rejuvenation. I’ll try to encourage readers to see things from that anticipating-rejuvenation paradigm.

Two abolitions

Accepting aging can be compared to accepting slavery.

For millennia, people from all social classes took slavery for granted. Thoughtful participants may have seen drawbacks with the system, but they assumed that there was no alternative to the basic fact of slavery. They could not conceive how society would function properly without slaves. Even the Bible takes slavery as a given. There is no Mosaic commandment which says “Thou shalt not keep slaves”. Nor is there anything in the New Testament that tells slave owners to set their slaves free.

But in recent times, thank goodness, the public mind changed. The accepting-slavery paradigm wilted in the face of a crescendo of opposing arguments. As with slavery, so also with aging: the time will come for its abolition. The public will cease to take aging for granted. They’ll stop believing in spurious justifications for its inevitability. They’ll demand better. They’ll see how rejuvenation is ready to be embraced.

One reason why slavery is so objectionable is the extent of its curtailment of human opportunity – the denial of free choice to the people enslaved. Another reason is that the life expectancy of slaves frequently fell far short of the life expectancy of people not enslaved. As such, slavery can be counted as a major killer: it accelerated death.

From the anticipating-rejuvenation perspective, aging should be seen as the biggest killer of all. Compared to “standard” killers of the present day, such as drunken driving, terrorism, lead fumes, or other carcinogens – killers which rouse us to action to constrain them – aging destroys many more people. Globally, aging is the cause of at least two thirds of human deaths. Aging is the awful elephant in the room, which we have collectively learned to ignore, but which we must learn to recognise and challenge anew.

Every single week the rejuveneering project is delayed, hundreds of thousands more people suffer and die worldwide due to aging-related diseases. Advocates of rejuveneering see this ongoing situation as a needless slaughter. It’s an intolerable offence against human potential. We ought, therefore, to be powerfully motivated to raise the probability of 50% which I offered at the start of this foreword. A 50% chance of success with the rejuveneering project means, equally, a 50% chance of that project failing. That’s a 50% chance of the human slaughter continuing.

Motivation

In the same way as we have become fervently motivated in recent decades to deal with the other killers mentioned above – vigorously campaigning against, for example, drunk drivers and emitters of noxious chemical pollutants – we ought to be even more motivated to deal with aging. The anger that society has directed against tobacco companies, which for so long obscured the links between smoking and lung cancer, ought to find a resonance in a new social passion to uncover and address links between biological aging and numerous diseases. If it’s right to seek to change behaviours and metabolism to cut down bad cholesterol (a precursor of heart disease) and concentrated glucose (a precursor of diabetes), it should be equally right to change behaviours and metabolism to cut down something that’s a precursor of even more diseases, namely, biological aging.

This is a discussion with enormous consequences. Changes in the public mood regarding the desirability of rejuveneering could trigger large reallocations of both public and private research expenditure. In turn, these reallocations are likely to have major implications in many areas of public well-being. Clearly, these decisions need to be taken wisely – with decisions being guided by a better understanding of the rich landscape of rejuveneering possibilities.

An ongoing surge of motivation, wisely coordinated, is one of the factors which can assist the rejuveneering project to overcome the weighty challenges it faces – challenges in science, technology, engineering, and human collaboration. Stubborn “unknown unknowns” surely lie ahead too. Due to these complexities and unknowns, no one can be sure of the outcome of this project. Despite what some rejuvenation enthusiasts may suggest, there’s nothing inevitable about the pace of future medical progress. That’s why I give the probability of success as only around 50%.

Although the end outcome remains unclear, the sense of discovery is increasing. The underlying scientific context is changing rapidly. Every day brings its own fresh firehose of news of potential breakthrough medical approaches. In the midst of so much innovation, it behoves us to seek clarity on the bigger picture.

To the extent that my book can provide that bigger picture, it will have met at least some of its goals. Armed with that bigger picture, readers of this book will, hopefully, be better placed to find the aspect of the overall rejuveneering project where they can make their best contributions. Together, we can tilt that 50% success probability upwards. The sooner, the better.

(If you found this interesting, you may like to read “The discussion ahead” next.)


25 October 2015

Getting better at anticipating the future

History is replete with failed predictions. Sometimes pundits predict too much change. Sometimes they predict too little. Frequently they predict the wrong kinds of change.

Even those forecasters who claim a good track record for themselves sometimes turn out, on closer inspection, to have included lots of wiggle room in their predictions – lots of scope for creative reinterpretation of their earlier words.

Of course, forecasts are often made for purposes other than anticipating the events that will actually unfold. Forecasts can serve many other goals:

  • Raising the profile of the forecaster and potentially boosting book sales or keynote invites – especially if the forecast is memorable, and is delivered in a confident style
  • Changing the likelihood that an event predicted will occur – either making it more likely (if the prediction is enthusiastic), or making it less likely (if the prediction is fearful)
  • Helping businesses and organisations to think through some options for their future strategy, via “scenario analysis”.

Given these alternative reasons why forecasters make predictions, it perhaps becomes more understandable that little effort is made to evaluate the accuracy of past forecasts. As reported by Alex Mayyasi,

Organizations spend staggering amounts of time and money trying to predict the future, but no time or money measuring their accuracy or improving on their ability to do it.

This bizarre state of affairs may be understandable, but it’s highly irresponsible, none the less. We can, and should, do better. In a highly uncertain, volatile world, our collective future depends on improving our ability to anticipate forthcoming developments.

Philip Tetlock

Mayyasi was referring to research by Philip Tetlock, a professor at the University of Pennsylvania. Over three decades, Tetlock has accumulated huge amounts of evidence about forecasting. His most recent book, co-authored with journalist Dan Gardner, is a highly readable summary of his research.

The book is entitled “Superforecasting: The Art and Science of Prediction”. I wholeheartedly recommend it.

Superforecasting

The book carries an endorsement by Nobel laureate Daniel Kahneman:

A manual for thinking clearly in an uncertain world. Read it.

Having just finished this book, I echo the praise it has gathered. The book is grounded in the field of geopolitical forecasting, but its content ranges far beyond that starting point. For example, the book can be viewed as one of the best descriptions of the scientific method – with its elevation of systematic, thoughtful doubt, and its search for ways to reduce uncertainty and eliminate bias. The book also provides a handy summary of all kinds of recent findings about human thinking methods.

“Superforecasting” also covers the improvements in the field of medicine that followed from the adoption of evidence-based medicine (in the face, it should be remembered, of initial fierce hostility from the medical profession). Indeed, the book seeks to accelerate a similar evidence-based revolution in the fields of economic and political analysis. It even has hopes to reduce the level of hostility and rancour that tends to characterise political discussion.

As such, I see the book as making an important contribution to the creation of a better sort of politics.

Summary of “Superforecasting”

The book draws on:

  • Results from four years of online competitions for forecasters held under the Aggregative Contingent Estimation project of IARPA (Intelligence Advanced Research Projects Activity)
  • Reflections from contest participants who persistently scored highly in the competition – people who became known as ‘superforecasters’
  • Insight from the Good Judgment Project co-created by Tetlock
  • Reviews of the accuracy of predictions made publicly by politicians, political analysts, and media figures
  • Other research into decision-making, cognitive biases, and group dynamics.

Forecasters and superforecasters from the Good Judgment Project submitted more than 10,000 predictions over four years, in response to questions about the likelihood of specified outcomes happening within given timescales (typically 3-12 months ahead). Forecasts addressed the fields of geopolitics and economics.
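How were all these predictions scored? The measure Tetlock uses throughout the book is the Brier score – the squared error between the probability a forecaster gave and what actually happened, which in the two-outcome formulation runs from 0 (perfection) to 2 (perfect wrongness). Here’s a minimal sketch of that calculation in Python; the example forecasts are invented for illustration.

```python
def brier_score(forecast_prob: float, outcome: bool) -> float:
    """Two-outcome Brier score: sum of squared errors over both outcomes.

    Ranges from 0.0 (perfect) to 2.0 (maximally wrong), matching the
    formulation discussed in "Superforecasting".
    """
    p_yes = forecast_prob
    p_no = 1.0 - forecast_prob
    actual_yes = 1.0 if outcome else 0.0
    actual_no = 1.0 - actual_yes
    return (p_yes - actual_yes) ** 2 + (p_no - actual_no) ** 2

# Invented example: one forecaster's calls on three yes/no questions.
forecasts = [(0.9, True), (0.7, False), (0.2, False)]
scores = [brier_score(p, outcome) for p, outcome in forecasts]
print(f"Mean Brier score: {sum(scores) / len(scores):.3f}")
# The confident hit (0.9, True) scores 0.02; the confident miss
# (0.7, False) scores 0.98 - overconfidence is punished quadratically.
```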

The book highlights the following characteristics as underpinning the success of superforecasters:

  • Avoidance of taking an ideological approach, which restricts the set of information that the forecaster considers
  • Pursuit of an evidence-based approach
  • Willingness to search out potential sources of disconfirming evidence
  • Willingness to incrementally adjust forecasts in the light of new evidence
  • The ability to break down estimates into a series of constituent questions that can, individually, be more easily calculated
  • The desire to obtain several different perspectives on a question, which can then be combined into an aggregate viewpoint (see the sketch just after this list)
  • Comfort with mathematical and probabilistic reasoning
  • Adoption of careful, precise language, rather than vague terms (such as “might”) whose apparent meaning can change with hindsight
  • Acceptance of contingency rather than ‘fate’ or ‘inevitability’ as being the factor responsible for outcomes
  • Avoidance of ‘groupthink’ in which undue respect among team members prevents sufficient consideration of alternative viewpoints
  • Willingness to learn from past forecasting experiences – including both successes and failures
  • A growth mindset, in which personal characteristics and skill are seen as capable of improvement, rather than being fixed.
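On the aggregation point above: one technique reported from the Good Judgment Project research is to average individual forecasts in log-odds space and then “extremize” the pooled result, on the grounds that each forecaster holds only a fraction of the available evidence. The sketch below is my own reconstruction, not the project’s code, and the extremizing factor is purely illustrative.

```python
import math

def pooled_probability(probs: list[float], extremize: float = 2.5) -> float:
    """Average forecasts in log-odds space, then extremize.

    extremize > 1 pushes the pooled estimate away from 0.5; the value
    used here is illustrative, not a calibrated constant.
    """
    log_odds = [math.log(p / (1.0 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    return 1.0 / (1.0 + math.exp(-extremize * mean_log_odds))

# Invented example: five forecasters lean the same way, but cautiously.
print(round(pooled_probability([0.6, 0.65, 0.7, 0.6, 0.75]), 3))
# The pooled estimate (~0.84) lands above any individual forecast.
```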

(This section draws on material I’ve added to H+Pedia earlier today. See that article for some links to further reading.)

Human pictures

Throughout “Superforecasting”, the authors provide the human backgrounds of the forecasters whose results and methods feature in the book. The superforecasters have a wide variety of backgrounds and professional experience. What they have in common, however – and where they differ from the other contest participants, whose predictions were less stellar – is the set of characteristics given above.

The book also discusses a number of well-known forecasters, and dissects the causes of their forecasting failures. Examples include 9/11, the wars in Iraq, the Bay of Pigs fiasco in Cuba, and many more. There’s much to learn from all these examples.

Aside: Other ways to evaluate futurists

Australian futurist Ross Dawson has recently created a very different method to evaluate the success of futurists. As Ross explains at http://rossdawson.com/futurist-rankings/:

We have created this widget to provide a rough view of how influential futurists are on the web and social media. It is not intended to be rigorous but it provides a fun and interesting insight into the online influence of leading futurists.

The score is computed from the number of Twitter followers, the Alexa score of websites, and the general Klout metric.
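Any composite score of that kind is, in effect, a weighted blend of normalised metrics. The function below is only a guess at the general shape – Dawson’s actual weights and scaling aren’t published in the excerpt above, so every constant here is invented for illustration:

```python
import math

def influence_score(twitter_followers: int, alexa_rank: int, klout: float) -> float:
    """Illustrative composite influence score.

    Every weight and scaling choice here is invented - Dawson's actual
    formula is not described beyond the three inputs it uses.
    """
    follower_term = math.log10(max(twitter_followers, 1)) / 8  # log-scaled, roughly 0..1
    alexa_term = 1.0 / math.log10(max(alexa_rank, 10))         # lower rank = more popular site
    klout_term = klout / 100.0                                 # Klout runs 0..100
    return 40 * follower_term + 30 * alexa_term + 30 * klout_term

# Invented figures for a hypothetical futurist:
print(round(influence_score(twitter_followers=20_000, alexa_rank=500_000, klout=55), 1))
```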

The widget currently lists 152 futurists. I was happy to find my name at #53 on the list. If I finish writing the two books I have in mind to publish over the next 12 months, I expect my personal ranking to climb 🙂

Yet another approach is to take a look at http://future.meetup.com/, the listing (by size) of the Meetup groups around the world that list “futurism” (or similar) as one of their interests. London Futurists, which I’ve been running (directly and indirectly) over the last seven years, features in third place on that list.

Of course, we futurists vary in the kinds of topics we are ready (and willing) to talk to audiences about. In my own case, I wish to encourage audiences away from “slow-paced” futurism, towards serious consideration of the possibilities of radical changes happening within just a few decades. These changes include not just the ongoing transformation of nature, but the possible transformation of human nature. As such, I’m ready to introduce the topic of transhumanism, so that audiences become more aware of the arguments both for and against this philosophy.

Within that particular subgrouping of futurist meetups, London Futurists ranks as a clear #1, as can be seen from http://transhumanism.meetup.com/.

Footnote

Edge has published a series of videos of five “master-classes” taught by Philip Tetlock on the subject of superforecasting:

  1. Forecasting Tournaments: What We Discover When We Start Scoring Accuracy
  2. Tournaments: Prying Open Closed Minds in Unnecessarily Polarized Debates
  3. Counterfactual History: The Elusive Control Groups in Policy Debates
  4. Skillful Backward and Forward Reasoning in Time: Superforecasting Requires “Counterfactualizing”
  5. Condensing it All Into Four Big Problems and a Killer App Solution

I haven’t had the time to view them yet, but if they’re anything like as good as the book “Superforecasting”, they’ll be well worth watching.

10 October 2015

Technological unemployment – Why it’s different this time

On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants became embroiled in questions such as:

  • To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
  • To what extent should humans be anxious about this process?

The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:

Channel 4 “Humans” advertising hoarding

“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed at the way their roles are being usurped by these new-fangled entities.

Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.

I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms that are developed with one set of skills in mind turn out to apply, reasonably straightforwardly, to other sets of skills as well.

The master algorithm

That argument is spelt out in the recent book “The Master Algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.

“The Master Algorithm” book cover

The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:

  • The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
  • The universal learning machine accepts a set of corresponding input and output data, and does the best possible job of inferring the algorithm that would obtain the outputs from the inputs.

For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
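As a toy illustration of that second bullet – inferring a rule from input/output pairs rather than being handed the rule – here’s a minimal sketch, assuming scikit-learn is installed. The data is entirely invented, and the “algorithm” learned is trivial, but the shape of the task is the same: examples in, inferred mapping out.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented training data: inputs are (age, tumour_size) pairs; outputs are
# treatment outcomes (1 = responded, 0 = did not). In Domingos' framing,
# the learner's job is to infer the mapping from the examples alone.
X = [[45, 1.2], [60, 3.5], [50, 2.0], [70, 4.1], [40, 0.8], [65, 3.9]]
y = [1, 0, 1, 0, 1, 0]

learner = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The inferred "algorithm" can now be applied to cases it has never seen.
print(learner.predict([[55, 1.5], [68, 4.0]]))  # expected: [1 0]
```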

As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:

  • “Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
  • “Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
  • “Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
  • “Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
  • “Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.

(See slide 6 of this Slideshare presentation. Indeed, take the time to view the full presentation. Better still, read Domingos’ entire book.)
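To make one of those tribes concrete: the connectionists’ “back-propagation” is, at root, gradient descent on a network’s prediction error. Here’s a deliberately tiny sketch of my own – a single neuron learning the OR function, which is the one-layer special case of back-propagation – rather than anything taken from Domingos’ book:

```python
import math, random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]  # OR function

for _ in range(5000):
    x, target = random.choice(data)
    pred = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))  # sigmoid
    grad = pred - target            # dLoss/dPre-activation for log loss
    w[0] -= 0.5 * grad * x[0]       # propagate the error back to each weight
    w[1] -= 0.5 * grad * x[1]
    b -= 0.5 * grad

for x, target in data:
    pred = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
    print(x, target, round(pred, 2))  # learned outputs approach 0/1 targets
```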

What’s likely to happen over the next decade, or two, is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made by physics as the fundamental forces of nature have gradually been unified into a single theory.

And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.

Incidentally, Domingos has set out some ambitious goals for what his book will accomplish:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.

Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.

People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.

Rise of the robots

There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation.

That’s one of the arguments made by perhaps the best writer so far on technological unemployment, Martin Ford. Ford’s recent book “Rise of the Robots: Technology and the Threat of a Jobless Future” builds ably on what previous writers have said.

“Rise of the Robots” book cover

Here’s a sample of review comments about Ford’s book:

Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument.
Los Angeles Times

If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent.
Financial Times, Summer books 2015, Business, Andrew Hill

[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right.
Daily Beast

Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines.
Fast Company

Well-researched and disturbingly persuasive.
Financial Times

Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future.
—Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think

Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read.
—Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes

If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.

I summarise the takeaway message in this picture, taken from one of my Delta Wisdom workshop presentations:

Tech unemployment curves

  • Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
  • However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
  • Coupled with the general purpose nature of these capabilities, this means that, conceivably, from some time around 2040, very few humans will be able to find paid work (a stylised version of these crossing curves is sketched below).
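The essential content of that picture can be reproduced with two stylised curves – human capability improving roughly linearly, machine capability compounding exponentially – which cross a few decades out. Every constant below is invented purely to make the shape of the argument visible; this is a sketch, not a forecast:

```python
# Stylised model: human capability grows linearly, machine capability
# exponentially. All constants are invented to illustrate the shape of
# the argument, not to predict anything.
human = lambda year: 100 + 1.0 * (year - 2015)         # slow, steady gains
machine = lambda year: 20 * (1.08 ** (year - 2015))    # compounding gains

for year in range(2015, 2061):
    if machine(year) >= human(year):
        print(f"Curves cross around {year}")  # ~2039 with these constants
        break
```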

A worked example: a site carpenter

During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?

One suggestion that came back was “site carpenter”. In this thinking, unfinished buildings are too complex, and too difficult, for robots to navigate. Robots that try to make their way through these buildings, to tackle carpentry tasks, will likely fall down. Or assuming they don’t fall down, how will they cope with finding out that the reality in the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be. Or have different tolerances. Or alternatives have been used. Etc. Such environments are too messy for robots to compute.

My answer is as follows. Yes, present-day robots often do fall down. Critics seem to find this hilarious. But this is pretty similar to the fact that young children often fall down while learning to walk. Or that novice skateboarders often fall down when unfamiliar with this mode of transport. However, robots will learn fast. One example is shown in this video of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):

As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically, the whole time. When software encounters information at variance from what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses, and, when needed, checking back with management for confirmation.
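That point – software revising its plan when evidence contradicts its expectations – is routine in probabilistic robotics. Here’s a minimal sketch of the underlying arithmetic, a single Bayesian update; the scenario and all the numbers are my own toy illustration, not any specific robot’s code:

```python
def bayes_update(prior: float, likelihood_if_true: float, likelihood_if_false: float) -> float:
    """Posterior probability of a hypothesis after one observation."""
    numerator = likelihood_if_true * prior
    return numerator / (numerator + likelihood_if_false * (1.0 - prior))

# Toy scenario: a carpentry robot is 90% sure a joist is where the plans
# say. Its camera then fails to detect the joist - a reading far more
# likely if the joist has been moved. One update, and the plan changes.
belief = 0.9
belief = bayes_update(belief, likelihood_if_true=0.1, likelihood_if_false=0.8)
print(round(belief, 2))  # ~0.53: time to re-survey rather than drill
```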

The question is a reminder to me that the software and AI community needs to do a much better job of communicating the current capabilities of its field, and the likely improvements ahead.

What does it mean to be human?

For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:

  • Should these changes be welcomed, rather than feared?
  • What will these forthcoming changes imply for our conception of what it means to be human?

To my mind, technological unemployment will force us to rethink some of the fundamentals of the “Protestant work ethic” that permeates society. That ethic has played a decisive positive role for the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.

If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.

Some of these topics were aired at a conference in New York City on 29th September: “The World Summit on Technological Unemployment”, which was run by Jim Clark’s World Technology Network.

Robotic Steel Workers

One of the many speakers at that conference, Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:

We are not facing a future without work. We are facing a future without jobs.

There is a huge difference between the two, and we must start seeing the difference, and making the difference more clear to each other.

In his blogpost “Jobs, Work, and Universal Basic Income”, Santens continues the argument as follows:

When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.

Put another way, extrinsically motivated work is work done before or after an expected payment. It’s an exchange. Intrinsically motivated work is work only made possible by sufficient access to money. It’s a gift.

The difference between these two forms of work cannot be overstated…

Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…

What would enable this transformation is some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.

What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy, who chaired the event:

The RSA will be advocating support for Basic Income… in response to Technological Unemployment.

(This comment comes about 2/3 of the way through the podcast.)

To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But the problems of transition are far better problems to have than the consequences of vastly increased unemployment and social alienation.

Life is being redefined

Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.

Janna Anderson keynote

You can find Anderson’s article here. It starts as follows:

The Robot Takeover is Already Here

The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.

It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots —’black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.

The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend a lot longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.

By way of summary, I’ll pull out a few sentences from the middle of the article:

One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…

It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…

[Another] thing is certain: Life is being redefined.

Who controls the robots?

Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action.

That’s a theme I’ll be addressing next Sunday, 18th October, at a lunchtime session of the 2015 Battle of Ideas. The session is entitled “Man vs machine: Who controls the robots”.

robots

Here’s how the session is described:

From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.

On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…

Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?

As you can imagine, I’ll be taking serious issue with the above claim, from the session description, that progress with robots will “remain relatively modest”. However, I’ll be arguing for strong focus on questions of control.

It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?

In line with my recent essay “Four political futures: which will you choose?”, I’ll be arguing for a technoprogressive approach to control, rather than a technolibertarian one.

Four futures

I wait with interest to find out how much this viewpoint will be shared by the other speakers at this session.

15 September 2015

A wiser journey to a better Tomorrowland

Peter Drucker quote

Three fine books that I’ve recently had the pleasure to finish reading all underscore, in their own ways, the profound insight expressed in 1970 by management consultant Peter Drucker:

The major questions regarding technology are not technical but human questions.

That insight sits alongside the observation that technology has been an immensely important driver of change in human history. The technologies of agriculture, steam, electricity, medicine, and information, to name only a few, have led to dramatic changes in the key metrics of human civilisation – metrics such as population, travel, consumption, and knowledge.

But the best results of technology typically depend upon changes happening in parallel in human practice. Indeed, new general purpose technology sometimes initially results, not in an increase of productivity, but in an apparent decline.

The productivity paradox

Writing in Forbes earlier this year, in an article about the “current productivity paradox in healthcare”, Roy Smythe makes the following points:

There were two previous slowdowns in productivity that were not anticipated, and caused great consternation – the adoption of electricity and the computer. The issues at hand with both were the protracted time it took to diffuse the technology, the problem of trying to utilize the new technology alongside the pre-existing technology, and the misconception that the new technology should be used in the same context as the older one.

Although the technology needed to electrify manufacturing was available in the early 1890s, it was not fully adopted for about thirty years. Many tried to use the technology alongside or in conjunction with steam-driven engines – creating all manner of work-flow challenges, and it took some time to understand that it was more efficient to use electrical wires and peripheral, smaller electrical motors (dynamos) than to connect centrally-located large dynamos to the drive shafts and pulleys necessary to disperse steam-generated power. The sum of these activities resulted in a significant, and unanticipated lag in productivity in industry between 1890 and 1920…

However, in time, these new GPTs (general purpose technologies) did result in major productivity gains:

The good news, however, is substantial. In the two decades following the adoption of both electricity and the computer, significant acceleration of productivity was enjoyed. The secret was in the ability to change the context (in the case of the dynamo, taking pulleys down for example) assisting in a complete overhaul of the business process and environment, and the spawning of the new processes, tools and adjuncts that capitalized on the GPT.

In other words, the new general purpose technologies yielded the best results, not when humans were trying to follow the same processes as before, but when new processes, organisational models, and culture were adopted. These changes took time to conceive and adopt. Indeed, the changes took not only time but wisdom.

Book covers: Wachter, Kotler, Naam

The Digital Doctor

Robert Wachter’s excellent book “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age” provides a dazzling analysis of the ways in which the computerisation of health records – creating so-called EHRs (Electronic Health Records) – is passing through a similar phase of disappointing accomplishment. EHRs are often associated with new kinds of errors, with additional workload burdens, and with interfering in the all-important human relationship between doctor and patient. They’re far from popular with healthcare professionals.

Wachter believes these problems to be temporary: EHRs will live up to their promise in due course – but only once people set the hype aside. What’s needed is for the designers of healthcare tech products and systems to:

  • Put a much higher priority on ease of use, on simplifying usage patterns, and on redesigning the overall flow of activity
  • Recognise and deal with the multiple complexities of the world of medicine.

For a good flavour of Wachter’s viewpoint, consider this extract from a New York Times opinion article he wrote in March, “Why Health Care Tech Is Still So Bad”:

Last year, I saw an ad recruiting physicians to a Phoenix-area hospital. It promoted state-of-the-art operating rooms, dazzling radiology equipment and a lovely suburban location. But only one line was printed in bold: “No E.H.R.”

In today’s digital era, a modern hospital deemed the absence of an electronic medical record system to be a premier selling point.

That hospital is not alone…

I interviewed Boeing’s top cockpit designers, who wouldn’t dream of green-lighting a new plane until they had spent thousands of hours watching pilots in simulators and on test flights. This principle of user-centered design is part of aviation’s DNA, yet has been woefully lacking in health care software design.

Our iPhones and their digital brethren have made computerization look easy, which makes our experience with health care technology doubly disappointing. An important step is admitting that there is a problem, toning down the hype, and welcoming thoughtful criticism, rather than branding critics as Luddites.

In my research, I found humility in a surprising place: the headquarters of I.B.M.’s Watson team, the people who built the computer that trounced the “Jeopardy!” champions. I asked the lead engineer of Watson’s health team, Eric Brown, what the equivalent of the “Jeopardy!” victory would be in medicine. I expected him to describe some kind of holographic physician, like the doctor on “Star Trek Voyager,” with Watson serving as the cognitive engine. His answer, however, reflected his deep respect for the unique challenges of health care. “It’ll be when we have a technology that physicians suddenly can’t live without,” he said.

I’m reminded of a principle I included in a long-ago presentation, “Enabling simply great mobile phones” (PDF), from 2004:

It’s easy to make something hard;
It’s hard to make something easy…

Smartphones will sell very well provided they allow users to build on, and do more of, the things that caused users to buy phones in the first place (communication and messaging, fashion and fun, and safety and connection) – and provided they allow users to do these things simply, even though the phones themselves are increasingly complex.

As for smartphones, so also for healthcare technology: the interfaces need to protect users from the innumerable complications that lurk under the surface. The greater the underlying complexity, the greater the importance of smart interfaces.

Again as for smartphones, once good human interfaces have been put in place, the results of new healthcare technology can be enormous. The New York Times article by Wachter contains a reminder of vexed issues within healthcare – issues that technology has the power to solve:

Health care, our most information-intensive industry, is plagued by demonstrably spotty quality, millions of errors and backbreaking costs. We will never make fundamental improvements in our system without the thoughtful use of technology.

Tomorrowland

In a different way, Steven Kotler’s new book also brings human considerations to the forefront. The title of the book is “Tomorrowland: Our Journey from Science Fiction to Science Fact”. It’s full of remarkable human-interest stories that go far beyond simple cheer-leading for the potential of technological progress.

I had the pleasure of helping to introduce Steven at a recent event at Campus London, which was co-organised by London Futurists and FutureSelf. Steven appeared by Skype.

AtCampusLondon

(photos by Kirsten Zverina)

Ahead of the event, I had hoped to be able to finish reading his book, but because of other commitments I had only managed to read the first 25%. That was already enough to convince me that the book departed from any simple formula of techno-optimism.

In the days after the event, I was drawn back to Kotler’s book time and again, as I kept discovering new depth in its stories. Kotler brings a journalist’s perspective to the hopes, fears, struggles, and (yes) remarkable accomplishments of many technology pioneers. For most of these stories, the eventual outcome is still far from clear. Topics covered included:

  • The difficulties in trying to save the Florida Everglades from environmental collapse
  • Highlights from the long saga of people trying to invent flying cars (you can read that excerpt online here)
  • Difficulties and opportunities with different kinds of nuclear energy
  • The potential for technology to provide quick access to the profound feelings of transcendence reported from so-called “out-of-body” and “near-death” experiences
  • Some unexpected issues with the business of sperm donation
  • Different ways to enable blind people to see
  • Some missed turnings in the possibilities to use psychedelic drugs more widely
  • Options to prevent bio-terrorists from developing pathogens that are targeted at particular individuals.

There’s a video preview for the book:

The preview is a bit breathless for my liking, but the book as a whole provides some wonderfully rounded explorations. The marvellous potential of new technology should, indeed, inspire awe. But that potential won’t be attained without some very clear thinking.

Apex

The third of the disparate trio of books I want to mention is, itself, the final part of a continuous trilogy of fast-paced futurist fiction by Ramez Naam.

In “Apex: Connect”, Naam brings to a climactic culmination the myriad chains of human and transhuman drama that started in “Nexus: Install” and ratcheted up in “Crux: Upgrade”.

RamezNaamTrilogy

Having been enthralled by the first two books in this trilogy, I was nervous about starting to listen to the third, since I realised it would likely absorb me for most of the next few days. I was right – but the absorption was worth it.

There’s plenty of technology in this trilogy, which is set several decades in the future: enhanced bodies, enhanced minds, enhanced communications, enhanced artificial intelligence. Critically, there is plenty of human frailty too: people with cognitive biases, painful past experiences, unbalanced perspectives, undue loyalty to doubtful causes. More powerful technology doesn’t, by itself, make people kinder as well as stronger, or wiser as well as smarter.

Another reason I like Apex so much is because it embraces radical uncertainty. Will superintelligence be a force that enhances humanity, or destroys it? Are regulations for new technology an instrument of oppression, or a means to guide people to more trustworthy outcomes? Should backdoors be built into security mechanisms? How should humanity treat artificial general intelligence, to avoid that AGI reaching unpleasant conclusions?

To my mind, too many commentators (in the real world) have pat answers to these questions. They’re too ready to assert that the facts of the matter are clear, and that the path to a better Tomorrowland is evident. But the drama that unfolds in Apex highlights rich ambiguities. These ambiguities require careful thought and wide appreciation. They also require human focus.

Postscript: H+Pedia

In between my other projects, I’m trying to assemble some of the best thinking on the pros and cons of key futurist questions. My idea is to use the new site H+Pedia for that purpose.

hpluspedia

As a starter, see the page on Transhumanism, where I’ve tried to assemble the most important lines of argument for and against taking a transhumanist stance towards the future. The page includes some common lines of criticism of transhumanism, and points out:

  • Where these criticisms miss the mark
  • Where these criticisms have substance – so that transhumanists ought to pay attention.

In some cases, I offer clear-cut conclusions. But in other cases, the balance of the argument is ambiguous. The future is far from being set in stone.

I’ll welcome constructive contributions to H+Pedia from anyone interested in the future of humanity.

Second postscript:

It’s now less than three weeks to the Anticipating 2040 event, where many speakers will be touching on the themes outlined above. Here’s a 90 second preview of what attendees can expect.

7 August 2015

Brave new world – bold new adaptation

Filed under: futurist, happiness, irrationality, theatre — Tags: , — David Wood @ 9:05 am

Q: What do the following cities have in common: Northampton, Edinburgh, Oxford, Nottingham, Cheltenham, Wolverhampton, Darlington, Blackpool, and Bradford?

A: They’re the locations of the theatres featuring in the forthcoming tour of a bold new production of Aldous Huxley’s Brave New World.

Brave-New-World

“Brave New World” is a phrase that frequently enters discussions about the future. Even people who have never read Huxley’s book – or people who have long forgotten the precise contents – recognise the phrase as a warning about the future misuse of technology. In Brave New World, people lead lives that are… comfortable, even blissful, but which lack authentic emotional experience. As a result, technology leads to a curtailment of human potential. Overall, humanity is diminished in that Brave New World, despite the gadgetry and convenience of that future society.

The version of Brave New World that’s about to go on tour has a script by Dawn King, is directed by James Dacre, features original music from These New Puritans, and is produced by Touring Consortium Theatre Company. The cast includes Sophie Ward, Abigail McKern, William Postlethwaite, Gruffudd Glyn, Olivia Morgan and Scott Karim.

bnw-casthead

I found out about this forthcoming tour a couple of months ago, when I was asked to come to speak, as a futurist, to representatives of the different theatres which would be hosting the play. Could I provide some perspective on the play, and why it is particularly relevant today?

I took the chance to read the script, and was struck by its depth. There are many layers to it. And despite Huxley having written the novel as long ago as 1931, it has many highly contemporary themes. So I was happy to become involved.

The team at Touring Consortium Theatre Company filmed what I had to say. Here are some short extracts:

Are we nearer to a Brave New World than we think? The pace of change is accelerating.

Factors that will shape the next 10-20 years.

Technologies from Brave New World that are almost within our grasp.

Some of the social changes we’ve seen that are eerily close to what Aldous Huxley predicted in 1931.

What questions does Brave New World pose for today’s society?

Note: for the touring schedule – from 4 Sept to 5 Dec 2015 – see this listing.

To read more about Brave New World from a transhumanist perspective, see this website by philosopher David Pearce.

30 June 2015

Securing software updates

Software frequently goes wrong. That’s a fact of life whose importance is growing – becoming, so to speak, a larger fact of life. That’s for three reasons:

  1. Complex software is spreading more widely into items where, previously, it was present (if at all) only in simpler form. This includes clothing (“wearable computing”), healthcare accessories, “connected home” consumer goods, automobiles (“connected vehicles”), and numerous “Internet of Things” sensors and actuators. More software means a greater likelihood of software error – and a greater likelihood of being hacked (compromised).
  2. Software in these items is increasingly networked together, so that defects in one piece of software can have effects that ricochet unexpectedly. For example, a hacked thermostat can end up reporting industrial secrets to eavesdroppers on the other side of the planet.
  3. By design, modern-day software is frequently open – meaning that its functionality can be configured and extended by other pieces of software that plug into it. Openness provides the possibility for positive innovation, in the way that apps enhance smartphones, or new themes enhance a webpage design. But that same openness enables negative innovation, in which plug-ins subvert the core product. This type of problem arises due to flaws in the set of permissions that expose software functionality from one module to another.

All three of these factors – the intrinsic defects in software, defects in its network connectivity, and defects in permission systems – can be exploited by writers of malware. Worryingly, there’s a mushrooming cybercrime industry that creates, modifies, and deploys increasingly sophisticated malware. There can be rich pickings in this industry. The denizens of Cybercrime Inc. can turn the principles of software and automation to their advantage, resulting in mass-scale deployment of their latest schemes for deception, intrusion, subterfuge, and extortion.

I recently raised these issues in my article “Eating the world: the growing importance of software security”. In that article, I predicted an imminent sea-change in the attitude which users tend to display towards the possibility of software security vulnerabilities. The attitude will change from complacency into purposeful alarm. Companies which are slow to respond to this change in attitude will find their products discarded by users – regardless of how many “cool” features they contain. Security is going to trump functionality, in a way it hasn’t done previously.

One company that has long been aware of this trend is Redbend (which was acquired by HARMAN in summer 2015). They’ve been thinking hard for more than a dozen years about the dynamics of OTA (over the air, i.e. wireless) software updates. Software updates are as much of a fact of life as software bugs – in fact, more so. Updates deliver fixes to bugs in previous versions; they also roll out new functionality. A good architecture for efficient, robust, secure software updates is, therefore, a key differentiator:

  • The efficiency of an update means that it happens quickly, with minimal data costs, and minimal time inconvenience to users
  • The robustness of an update means that, even if the update were to be interrupted partway through, the device will remain in a usable state
  • The security of an update means that it will reliably deliver software that is valid and authentic, rather than some “Trojan horse” malware masquerading as bona fide software (see the sketch after this list).
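To make the first and third of these properties concrete, here is a minimal sketch of a device-side update routine, in Python. It is purely illustrative – not Redbend’s implementation – and it assumes the expected SHA-256 digest has already arrived via a separately authenticated manifest:

    import hashlib
    import os
    import tempfile

    def apply_update(package: bytes, expected_sha256: str, slot_path: str) -> None:
        """Install an update only if it verifies, and switch over atomically."""
        # Security: reject any package whose digest differs from the value
        # given in the (separately authenticated) update manifest.
        if hashlib.sha256(package).hexdigest() != expected_sha256:
            raise ValueError("update rejected: digest mismatch")

        # Robustness: write to a temporary file first, so that a power cut or
        # dropped connection mid-write leaves the installed image untouched.
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(slot_path) or ".")
        with os.fdopen(fd, "wb") as f:
            f.write(package)
            f.flush()
            os.fsync(f.fileno())

        # os.replace is atomic on POSIX filesystems: the device sees either
        # the old image or the new one, never a half-written hybrid.
        os.replace(tmp_path, slot_path)

Efficiency, the remaining property, comes mostly from what is sent over the air in the first place – which is where delta updates, discussed below, earn their keep.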

According to my email archives, my first meeting with representatives of Redbend was as long ago as December 2002. At that time, I was Executive VP at Symbian with responsibility for Partnering. Since Redbend was one of the new “Platinum Partners” of Symbian, I took the time to learn more about their capabilities.

One person I met in these initial meetings was Gil Cordova, at that time Director of Strategic Marketing at Redbend. Gil wrote to me afterwards, confirming our common view of what lay ahead:

Redbend deals with an enabling technology and solution for OTA updating of mobile devices.

Our solution enables device manufacturers and operators to update any part of the device software including OS, middleware systems and applications.

The solution is based on our patented technology for creating delta-updates which minimize the update package size ensuring it can be cost-effectively sent and stored on the device with little bandwidth and memory consumption. In addition we enable the update to occur within the device memory constraints ensuring no cost-prohibitive memory needs to be added…

OTA updates can help answer the needs of remote software repair and fixing to the device software, as well as streamline logistics when deploying devices…
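To make the delta-update idea concrete, here is a toy sketch. Real delta engines – including Redbend’s patented technology – use far more compact binary encodings, but the principle is the same: ship only the changed bytes, and rebuild the new image from the old one on the device:

    from difflib import SequenceMatcher

    def make_delta(old: bytes, new: bytes) -> list:
        """Encode 'new' as copy-from-old ranges plus literal insertions."""
        ops = []
        for tag, i1, i2, j1, j2 in SequenceMatcher(None, old, new).get_opcodes():
            if tag == "equal":
                ops.append(("copy", i1, i2))        # bytes already on the device
            elif j2 > j1:
                ops.append(("insert", new[j1:j2]))  # only changed bytes travel OTA
        return ops

    def apply_delta(old: bytes, ops: list) -> bytes:
        """Rebuild the new image on-device from the old image plus the delta."""
        out = bytearray()
        for op in ops:
            if op[0] == "copy":
                out += old[op[1]:op[2]]
            else:
                out += op[1]
        return bytes(out)

    # The delta is typically a tiny fraction of the full image:
    old = b"version 1.0 of some firmware image " * 100
    new = old.replace(b"1.0", b"1.1", 1)
    assert apply_delta(old, make_delta(old, new)) == new

The update package carries only the “insert” payloads plus some bookkeeping, which is why, as Gil noted, it can be sent and stored with little bandwidth and memory consumption.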

At that time, some dozen years ago, the idea that mobile phones would have more and more software in them was still relatively new – and was far from being widely accepted as a good thing. But Redbend and Symbian foresaw the consequences, as in the final paragraph of Gil’s email to me:

All the above points to the fact that if software is a new paradigm in the industry then OTA updating is a very crucial and strategic issue that must be taken into account.

OTA has, indeed, been an important issue since that time. But it’s my view that the full significance is only now becoming apparent. As security is poised to “eat the world”, efficient and reliable OTA capabilities will grow yet further in importance. It will be something that more and more companies will need to include at the heart of their own product offerings. The world will insist on it.

A few days ago, I took a closer look at recent news from HARMAN connected services – in particular at its architecture for cybersecurity. I saw a great deal that I liked:

Secure Car

  • Domain isolation – to provide a strict separation between different subsystems (e.g. parts of the overall software system on a car), with the subsystems potentially running different operating systems
  • Type-1 hypervisor – to isolate different subsystems from hardware resources, except where such access is explicitly designed in
  • Driver virtualization – to allow additional peripherals (such as Wi-Fi, cameras, Bluetooth, and GPS) to be added quickly into an existing device with the same secure architecture
  • Software update systems – to enable separate remote software management for the head (dashboard) unit, telematics (black-box) unit, and numerous ECUs (electronic control units) – with a 100% success record in deploying updates on more than one million vehicles
  • State of the art FIPS (Federal Information Processing Standard) encryption – applied to the entirety of the update process
  • Intrusion Detection and Prevention systems – to identify and report any malicious or erroneous network activity, and to handle the risks arising before the car or any of its components suffers any ill-effect.

I know from my own background in designing software systems that this kind of all-points-considered security cannot be tacked onto an existing system. Provision for it needs to be designed in from the beginning. That’s where Redbend’s long heritage in this space shows its value.

The full benefit of taking an architectural approach to secure software updates – as opposed to trying to fashion security on top of fundamentally insecure components – is that the same architecture is capable of re-use in different domains. It’s therefore no surprise that Redbend software management solutions are available, not only for connected cars, but also for wearable computers, connected homes, and machine-to-machine (M2M) devices.

Of course, despite all these precautions, I expect the security arms race to continue. Software will continue to have bugs, and the cybercrime industry will continue to find ingenious ways to exploit these bugs. The weakest part of any security system, indeed, is frequently the humans involved, who can fall victim to social engineering. In turn, providers of security software are seeking to improve the usability of their systems, to reduce both the likelihood and the impact of human operator error.

This race probably has many laps to run, with new surprises ahead on each lap. To keep ahead, we need allies and partners who constantly look ahead, straining to discern the forthcoming new battlegrounds, and to prepare new defences in sufficient time. But we also need to avail ourselves of the best present tools, so that our businesses have the best chance of avoiding being eaten in the meantime. Figuring out which security tools really are best in class is fast becoming a vital core competency for people in ever-growing numbers of industries.

Footnote: I was inspired to write this post after discussions with some industry colleagues involved in HARMAN’s Engineering a Connected Life program. The views and opinions expressed in this post are my own and don’t necessarily represent HARMAN’s positions, strategies or opinions.

11 June 2015

Eating the world – the growing importance of software security

Security is eating the world

In August 2011, Marc Andreessen famously remarked that “software is eating the world”. Writing in the Wall Street Journal, Andreessen set out his view that society was “in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy”.

With his background as pioneering web software architect at Netscape, and with a string of successful investments under his belt at venture capital firm Andreessen Horowitz, Andreessen was well placed to comment on the potency of software. As he observed,

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defence. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures.

He then made the following prediction:

Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Industries to be impacted in this way, Andreessen suggested, would include entertainment, communications, recruitment, automotive, retail, energy, agriculture, finance, healthcare, education, and defence.

In the four years since the phrase was coined, “software is eating the world” has shown every sign of being a profound truth. In more and more sectors of industry, companies that lack deep expertise in software have found themselves increasingly by-passed by competitors. Software skills are no longer a “nice-to-have” optional extra. They’re core to numerous aspects of product development.

But it’s time to propose a variant to the original phrase. A new set of deep skills are going to prove themselves as indispensable for ever larger numbers of industries. This time, the skills are in security. Before long, security will be eating the world. Companies whose software systems fall short on security will be driven out of business.

Dancing pigs

My claim about the growing importance of security may appear to fly in the face of a general principle of user behaviour. This principle was described by renowned security writer Bruce Schneier in his 2000 book “Secrets and Lies”:

If J. Random Websurfer clicks on a button that promises dancing pigs on his computer monitor, and instead gets a hortatory message describing the potential dangers of the applet — he’s going to choose dancing pigs over computer security any day. If the computer prompts him with a warning screen like: “The applet DANCING PIGS could contain malicious code that might do permanent damage to your computer, steal your life’s savings, and impair your ability to have children,” he’ll click OK without even reading it. Thirty seconds later he won’t even remember that the warning screen even existed.

In other words, despite whatever users may say about the importance of security when directly asked (“yes, of course I take security seriously”), in practice they put a higher priority on watching animated graphics (of dancing pigs, cute kittens, celebrity wardrobe malfunctions, or whatever), and readily accept security risks in pursuit of that goal.

A review paper (PDF) published in 2009 by Cormac Herley of Microsoft Research shared findings that supported this view. Herley reports that, for example, users still typically choose the weakest passwords they can get away with, rather than making greater efforts to keep their passwords unguessable. Users also frequently ignore the advice against re-using the same passwords on different sites (so that, if there’s a security problem with any one of these sites, the user’s data on all other sites becomes vulnerable too).

Herley comments:

There are several ways of viewing this. A traditional view is that users are hopelessly lazy: in the face of dire descriptions of the threat landscape and repeated warnings, they do the minimum possible…

But by the end of his review, he offers a more sympathetic assessment:

“Given a choice between dancing pigs and security, users will pick dancing pigs every time.” While amusing, this is unfair: users are never offered security, either on its own or as an alternative to anything else. They are offered long, complex and growing sets of advice, mandates, policy updates and tips… We have shown that much of this advice does nothing to make users more secure, and some of it is harmful in its own right. Security is not something users are offered and turn down. What they are offered and do turn down is crushingly complex security advice that promises little and delivers less.

Herley’s paper concludes:

How can we help users avoid harm? This begins with a clear understanding of the actual harms they face, and a realistic understanding of their constraints. Without these we are proceeding blindly.

Exponential change

What are the “actual harms” that users face, as a result of insecure software systems or poor personal security habits?

We live in a time of rapid technology change. As software eats the world, it leaves more and more aspects of the world vulnerable to problems in the software – and vulnerable to problems in how that software is used, deployed, and updated.

As a result, the potential harm to users from poor security is constantly increasing. Users are vulnerable in new ways that they had never considered before.

Hacking embedded medical devices

For example, consider one possible unexpected side-effect of being fitted with one of the marvels of modern technology, an implantable heart pacemaker. Security researcher Barnaby Jack of IOActive gave a devastating demo at the Breakpoint conference in October 2012 of how easy it was for an outsider to interfere with the system whereby a pacemaker can be wirelessly recalibrated. The result is summed up in this Computerworld headline, “Pacemaker hack can deliver deadly 830-volt jolt”:

The flaw lies with the programming of the wireless transmitters used to give instructions to pacemakers and implantable cardioverter-defibrillators (ICDs), which detect irregular heart contractions and deliver an electric shock to avert a heart attack.

A successful attack using the flaw “could definitely result in fatalities,” said Jack…

In a video demonstration, Jack showed how he could remotely cause a pacemaker to suddenly deliver an 830-volt shock, which could be heard with a crisp audible pop.

Hacking vehicle control systems

Consider also the predicament that many car owners in Austin, Texas experienced, as a result of the actions of a disgruntled former employee of used car retail firm Texas Auto Center. As Wired reported,

More than 100 drivers in Austin, Texas found their cars disabled or the horns honking out of control, after an intruder ran amok in a web-based vehicle-immobilization system normally used to get the attention of consumers delinquent in their auto payments.

Police with Austin’s High Tech Crime Unit on Wednesday arrested 20-year-old Omar Ramos-Lopez, a former Texas Auto Center employee who was laid off last month, and allegedly sought revenge by bricking the cars sold from the dealership’s four Austin-area lots.

Texas Auto Center had included some innovative new technology in the cars they sold:

The dealership used a system called Webtech Plus as an alternative to repossessing vehicles that haven’t been paid for. Operated by Cleveland-based Pay Technologies, the system lets car dealers install a small black box under vehicle dashboards that responds to commands issued through a central website, and relayed over a wireless pager network. The dealer can disable a car’s ignition system, or trigger the horn to begin honking, as a reminder that a payment is due.

The beauty of the system is that it allows a greater number of customers to purchase cars, even when their credit history looks poor. Rather than extensive up-front tests of the credit-worthiness of a potential purchaser, the system takes advantage of the ability to immobilise a car if repayments should cease. However, as Wired reports,

Texas Auto Center began fielding complaints from baffled customers the last week in February, many of whom wound up missing work, calling tow trucks or disconnecting their batteries to stop the honking. The troubles stopped five days later, when Texas Auto Center reset the Webtech Plus passwords for all its employee accounts… Then police obtained access logs from Pay Technologies, and traced the saboteur’s IP address to Ramos-Lopez’s AT&T internet service, according to a police affidavit filed in the case.

Omar Ramos-Lopez had lost his position at Texas Auto Center the previous month. Following good security practice, his own account on the Webtech Plus system had been disabled. However, it seems he gained access by using an account assigned to a different employee.

At first, the intruder targeted vehicles by searching on the names of specific customers. Then he discovered he could pull up a database of all 1,100 Auto Center customers whose cars were equipped with the device. He started going down the list in alphabetical order, vandalizing the records, disabling the cars and setting off the horns.

His manager ruefully remarked, “Omar was pretty good with computers”.

Hacking thermostats and lightbulbs

Finally, consider a surprise side-effect of attaching a new thermostat to a building. Modern thermostats exchange data with increasingly sophisticated systems that control heating, ventilation, and air conditioning. In turn, these systems can connect into corporate networks, which contain email archives and other confidential documents.

The US Chamber of Commerce discovered in 2011 that a thermostat in a townhouse it used was surreptitiously communicating with an Internet address somewhere in China. All the careful precautions of the Chamber’s IT department – including supervision of the computers and memory sticks used by employees, to guard against the possibility of such data seepage – were undone by this unexpected security vulnerability in what seemed to be an ordinary household object. Information that leaked from the Chamber potentially included sensitive information about US policy for trade with China, as well as other key IP (intellectual property).

It’s not only thermostats that have much greater network connectivity these days. Toasters, washing machines, and even energy-efficient lightbulbs contain surprising amounts of software, as part of the implementation of the vision of “smart homes”. And in each case, it opens the potential for various forms of espionage and/or extortion. Former CIA Director David Petraeus openly rejoiced in that possibility, in remarks noted in a March 2012 Wired article “We’ll spy on you through your dishwasher”:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as RFID, sensor networks, tiny embedded servers, and energy harvesters — all connected to the next-generation internet using abundant, low-cost, and high-power computing…

Transformational is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft.

To summarise: smart healthcare, smart cars, and smart homes, all bring new vulnerabilities as well as new benefits. The same is true for other fields of exponentially improving technology, such as 3D printing, unmanned aerial vehicles (“drones”), smart toys, and household robots.

The rise of robots

Sadly, malfunctioning robots have already been involved in a number of tragic fatalities. In May 2009, an Oerlikon MK5 anti-aircraft system was part of the equipment used by 5,000 South African troops in a large-scale military training exercise. On that morning, the controlling software suffered what a subsequent enquiry would call a “glitch”. Writing in the Daily Mail, Gavin Knight recounted what happened:

The MK5 anti-aircraft system, with two huge 35mm cannons, is essentially a vast robotic weapon, controlled by a computer.

While it’s one thing when your laptop freezes up, it’s quite another when it is controlling an auto-loading magazine containing 500 high-explosive rounds…

“There was nowhere to hide,” one witness stated in a report. “The rogue gun began firing wildly, spraying high explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose.”

By the time the robot has emptied its magazine, nine soldiers lie dead. Another 14 are seriously injured.

Deaths due to accidents involving robots have also occurred throughout the United States. A New York Times article in June 2014 gives the figure of “at least 33 workplace deaths and injuries in the United States in the last 30 years.” For example, in a car factory in December 2001,

An employee was cleaning at the end of his shift and entered a robot’s unlocked cage. The robot grabbed his neck and pinned the employee under a wheel rim. He was asphyxiated.

And in an aluminium factory in February 1996,

Three workers were watching a robot pour molten aluminium when the pouring unexpectedly stopped. One of them left to flip a switch to start the pouring again. The other two were still standing near the pouring operation, and when the robot restarted, its 150-pound ladle pinned one of them against the wall. He was killed.

To be clear, in none of these cases is there any suggestion of foul play. But to the extent that robots can be remotely controlled, the possibility arises for industrial vandalism.

Indeed, one of the most infamous cases of industrial vandalism (if that is the right description in this case) is the way in which the Stuxnet computer worm targeted the operation of fast-spinning centrifuges inside the Iranian programme to enrich uranium. Stuxnet took advantage of at least four so-called “zero-day security vulnerabilities” in Microsoft Windows software – vulnerabilities that Microsoft did not know about, and for which no patches were available. When the worm found itself installed on computers with particular programmable logic controllers (PLCs), it initiated a complex set of monitoring and alteration of the performance of the equipment attached to the PLC. The end result was that the centrifuges tore themselves apart, reportedly setting back the Iranian nuclear programme by a number of years.

Chillingly, what Stuxnet could do to centrifuges, variant software configurations could have similar effects on other industrial infrastructure – including energy and communication grids.

Therefore, whereas there is much to celebrate about the growing connectivity of “the Internet of Things”, there is also much to fear about it.

The scariest book

Many of the examples I’ve briefly covered above – the hacking of embedded medical devices, vehicle control systems, and thermostats and lightbulbs – as well as the upsides and downsides of “the rise of robots” – are covered in greater detail in a book I recently finished reading. The book is “Future Crimes”, by former LAPD police officer Marc Goodman. Goodman has spent the last twenty years working on cyber security risks with organisations such as Interpol, NATO, and the United Nations.

The full title of Goodman’s book is worth savouring: “Future Crimes: Everything is connected, everything is vulnerable, and what we can do about it.” Nikola Danaylov, host of the Singularity 1on1 podcast, recently described Future Crimes as “the scariest book I have ever read in my life”. That’s a sentiment I fully understand. The book has a panoply of “Oh my god” moments.

What the book covers is not only the exponentially growing set of vulnerabilities that our exponentially connected technology brings in its wake, but also the large set of people who may well be motivated to exploit these vulnerabilities. This includes home and overseas government departments, industrial competitors, disgruntled former employees, angry former friends and spouses, ideology-fuelled terrorists, suicidal depressives, and a large subset of big business known as “Crime Inc”. Criminals have regularly been among the very first to adopt new technology – and it will be the same with the exploitation of new generations of security vulnerabilities.

There’s much in Future Crimes that is genuinely frightening. It’s not alone in the valuable task of raising public awareness of increasing security vulnerabilities. I also recommend Kim Zetter’s fine investigative work “Countdown To Zero Day: Stuxnet and the launch of the world’s first digital weapon”. Some of the same examples appear in both books, providing added perspective. In both cases the message is clear – the threats from cybersecurity are likely to mushroom.

On the positive front, technology can devise countermeasures as well as malware. There has long been an arms race between software virus writers and software antivirus writers. This arms race is now expanding into many new areas.

If the race is lost, it means that security will eat the world in a bad way: the horror stories that are told throughout both Future Crimes and Countdown To Zero Day will magnify in both number and scope. In that future scenario, people will look back fondly on the present day as a kind of innocent paradise, in which computers and computer-based systems generally worked reliably (despite occasional glitches). Safe, clean computer technology might become as rare as bottled oxygen in an environment where smog and pollution dominate – something that is only available in small quantities, to the rich and powerful.

If the race is won, there will still be losers. I’m not just referring to Crime Inc, and other would-be exploiters of security vulnerabilities, whose ambitions will be thwarted. I’m referring to all the companies whose software will fall short of the security standards of the new market leaders. These are companies who pay lip service to the importance of robust, secure software, but whose products in practice disappoint customers. By that time, indeed, customers will long have moved on from preferring dancing pigs to good security. The prevalence of bad news stories – in their daily social media traffic – will transform their appreciation of the steps they need to take to remain as safe as possible. Their priorities will have changed. They’ll be eagerly scouring reports as to which companies have world-class software security, and which companies, on the other hand, have products that should be avoided. Companies in the former camp will eat those in the latter camp.

Complications with software updates

As I mentioned above, there can be security vulnerabilities, not only intrinsic in a given piece of software, but also in how that software is used, deployed, and updated. I’ll finish this article by digging more deeply into the question of software updates. These updates have a particularly important role in the arms race between security vulnerabilities and security improvements.

Software updates are a key part of modern technological life. These updates deliver new functionality to users – such as a new version of a favourite app, or an improved user interface for an operating system. They also deliver security fixes, along with other bug fixes. In principle, as soon as possible after a major security vulnerability has been identified and analysed, the vendor will make available a fix to that programming error.

However, updates are something that many users dislike. On the one hand, users like receiving improved functionality. On the other hand, they fear that:

  • The upgrade will be time-consuming, locking them out of their computer systems at a time when they need to press on with urgent work
  • The upgrade will itself introduce new bugs, and break familiar patterns of how they use the software
  • Some of their applications will stop working, or will work in strange ways, after the upgrade.

The principle of “once bitten, twice shy” applies here. One bad experience with a software upgrade – such as favourite add-on applications getting lost in the process – may prejudice users against accepting any new upgrades.

My own laptop recently popped up an invitation for me to reserve a free upgrade from its current operating system – Windows 7 – to the forthcoming Windows 10. I confess that I have yet to click the “yes, please reserve this upgrade” button. I fear, indeed, that some of the legacy software on my laptop (including apps that are more than ten years old, and whose vendors no longer exist) will become dysfunctional.

The Android operating system for smartphones faces a similar problem. New versions of the operating system, which include fixes to known security problems, often fail to make their way onto users’ Android phones. In some cases, this is because the phones are running a reconfigured version of Android, which includes modifications introduced by a phone manufacturer and/or network operator. Any update has to wait until similar reconfigurations have been applied to the new version of the operating system – and that can take a long time, due to reluctance on the part of the phone manufacturer or network operator. In other cases, it’s simply because users decline to accept an Android upgrade when it is offered to them. Once bitten, twice shy.

Accordingly, there’s competitive advantage available, to any company that makes software upgrades as smooth and reliable as possible. This will become even more significant, as users grow in their awareness of the need to have security vulnerabilities in their computer systems fixed speedily.

But there’s a very awkward problem lurking around the upgrade process. Computer systems can sometimes be tricked into installing malicious software, whilst thinking it is a positive upgrade. In other words, the upgrade process can itself be hacked. For example, at the Black Hat conference in July 2009, IOActive security researcher Mike Davis demonstrated a nasty vulnerability in the software update mechanism in the smart electricity meters that were to be installed in homes throughout the Pacific Northwest of the United States.

For a riveting behind-the-scenes account of this particular research, see the book Countdown To Zero Day. In brief, Davis found a way to persuade a smart meter that it was being offered a software upgrade by a neighbouring, trusted smart meter, whereas it was in fact receiving software from an external source. This subterfuge was accomplished by extracting the same network encryption key that was hard-wired into every smart meter in the collection, and then presenting that encryption key as apparent (but bogus) evidence that the communication could be trusted. Once the meter had installed the upgrade, the new software could disable the meter from responding to any further upgrades. It could also switch off any electricity supply to the home. As a result, the electricity supplier would be obliged to send engineers to visit every single house that had been affected by the malware. In the simulated demo shown by Davis, this was as many as 20,000 separate houses within just a 24 hour period.
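The underlying design flaw was a single symmetric key shared by every meter: extracting it from any one device allowed an attacker to impersonate the entire network. The standard remedy is asymmetric signing, sketched below using Python’s cryptography package (the function names are illustrative, not taken from any real smart-meter codebase): the vendor signs each update with a private key that never leaves its premises, while devices hold only the corresponding public key, which is useless for forging updates.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Vendor side, done once in a secure facility: the private key never
    # ships; every meter is manufactured holding only the public half.
    vendor_key = Ed25519PrivateKey.generate()
    meter_trusted_pubkey = vendor_key.public_key()

    def sign_update(firmware: bytes) -> bytes:
        """Run by the vendor when publishing an update."""
        return vendor_key.sign(firmware)

    def meter_accepts(firmware: bytes, signature: bytes) -> bool:
        """Run by each meter before installing anything."""
        try:
            meter_trusted_pubkey.verify(signature, firmware)
            return True
        except InvalidSignature:
            return False

    # Tampering with even one byte of the payload invalidates the signature:
    fw = b"smart meter firmware v2"
    sig = sign_update(fw)
    assert meter_accepts(fw, sig)
    assert not meter_accepts(fw + b"!", sig)

With this split, compromising one meter reveals nothing that helps an attacker push malicious updates to its neighbours.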

Uncharitably, we might think to ourselves that an electricity supplier is probably the kind of company to make mistakes with its software upgrade mechanism. As Mike Davis put it, “the guys that built this meter had a short-term view of how it would work”. We would expect, in contrast, that a company whose core business was software (and which had been one of the world’s leading software companies for several decades) would have no such glitches in its system for software upgrades.

Unexpectedly, one of the exploits utilised by the Stuxnet team was a weakness in part of the Microsoft Update system – a part that had remained unchanged for many years. The exploit was actually used by a piece of malware known as Flame, which shared many characteristics with Stuxnet. Mikko Hyppönen, Chief Research Officer of Finnish antivirus firm F-Secure, reported the shocking news as follows in a corporate blogpost tellingly entitled “Microsoft Update and The Nightmare Scenario”:

About 900 million Windows computers get their updates from Microsoft Update. In addition to the DNS root servers, this update system has always been considered one of the weak points of the net. Antivirus people have nightmares about a variant of malware spoofing the update mechanism and replicating via it.

Turns out, it looks like this has now been done. And not by just any malware, but by Flame…

Flame has a module which appears to attempt to do a man-in-the-middle attack on the Microsoft Update or Windows Server Update Services system. If successful, the attack drops a file called WUSETUPV.EXE to the target computer.

This file is signed by Microsoft with a certificate that is chained up to Microsoft root.

Except it isn’t signed really by Microsoft.

Turns out the attackers figured out a way to misuse a mechanism that Microsoft uses to create Terminal Services activation licenses for enterprise customers. Surprisingly, these keys could be used to also sign binaries…

Having a Microsoft code signing certificate is the Holy Grail of malware writers. This has now happened.

Hyppönen’s article ends with some “good news in the bad news” which nevertheless sounds a strong alarm about similar things going wrong (with worse consequences) in the future:

I guess the good news is that this wasn’t done by cyber criminals interested in financial benefit. They could have infected millions of computers. Instead, this technique has been used in targeted attacks, most likely launched by a Western intelligence agency.

How not to be eaten

Despite the threats that I’ve covered above, I’m optimistic that software security – and the security of software updates in particular – can be significantly improved in the months and years ahead.

One reason for this optimism is that I know that smart people have been thinking hard about these topics for many years. Good solutions are already available, ready for wider deployment, in response to stronger market readiness for such solutions.

But it will take more than technology to win this arms race. It will take political resolve. For too long, software companies have been able to ship software that has woefully substandard security. For too long, companies have prioritised dancing pigs over rock-hard security. They’ve written into their software licences that they accept no liability for problems arising from bugs in their software. They’ve followed, sometimes passionately, and sometimes half-heartedly, the motto from Facebook’s Mark Zuckerberg that software developers should “move fast and break things”.

That kind of behaviour may have been appropriate in the infancy of software. No longer.

Move fast and break things

21 May 2015

Anticipating 2040: The triple A, triple h+ vision

Abundance Access Action

The following vision arises from discussions with colleagues in the Transhumanist Party.

TPUK_LOGO3_400px

Abundance

Abundance – sustainable abundance – is just around the corner – provided we humans collectively get our act together.

We have within our grasp a sustainable abundance of renewable energy, material goods, health, longevity, intelligence, creativity, freedom, and positive experience.

This can be attained within one human generation, by wisely accelerating the green technology revolution – including stem cell therapies, 3D printing, prosthetics, robotics, nanotechnology, genetic engineering, synthetic biology, neuro-enhancement, artificial intelligence, and supercomputing.

TPUK_LOGO2_400px

Access

The rich fruits of technology – abundance – can and should be provided for all, not just for those who manage to rise to the top of the present-day social struggle.

A bold reorganisation of society can and should take place in parallel with the green technology revolution – so that everyone can freely access the education, healthcare, and everything else needed to flourish as a full member of society.

Action

TPUK_LOGO1_400px

To channel the energies of industry, business, finance, universities, and the media, for a richly positive outcome within the next generation, swift action is needed:

  • Widespread education on the opportunities – and risks – of new technology
  • Regulations and checks to counter short-termist action by incumbent vested interests
  • The celebration and enablement of proactive innovation for the common good
  • The promotion of scientific, rational, evidence-based methods for taking decisions, rather than ideologies
  • Transformation of our democracy so that governance benefits from the wisdom of all of society, and serves the genuine needs of everyone, rather than perpetuating the existing establishment.

Transhumanism 2040

2040

Within one generation – 25 years, that is, by 2040 – human society can and should be radically transformed.

This next step of conscious evolution is called transhumanism. Transhumanists see, and welcome, the opportunity to intelligently redesign humanity, drawing wisely on the best resources of existing humanity.

The transhumanist party is the party of abundance, access, and action. It is the party with a programme to transcend (overcome) our ingrained human limitations – limitations of animal biology, primate psychology, antiquated philosophy, and 20th century social structures.

Transhumanism 2020

2020

As education spreads about the potential for a transhumanist future of abundance, access, and action – and as tangible transhumanist projects are seen to be having an increasingly positive political impact – more and more people will start to identify themselves as transhumanists.

This growing movement will have consequences around the world. For example, in the general election in 2020 in the UK, there may well be, in every constituency, either a candidate from the Transhumanist Party, or a candidate from one of the other parties who openly and proudly identifies as a transhumanist.

The political landscape will never be the same again.

Call to action

To offer support to the Transhumanist Party in the UK (regardless of where you are based in the world), you can join the party by clicking the following PayPal button:

Join now

Membership costs £25 per annum. Members will be invited to participate in internal party discussions of our roadmap.

For information about the Transhumanist Party in other parts of the world, see http://transhumanistpartyglobal.org/.

For a worldwide transhumanist network without an overt political angle, consider joining Humanity+.

To discuss the politics of the future, without any exclusive link to the Transhumanist Party, consider participating in one of the Transpolitica projects – for example, the project to publish the book “Politics 2.0”.

Anticipating the Transhumanist Party roadmap to 2040

Footnote: Look out for more news of a conference to be held in London during Autumn (*), entitled “Anticipating 2040: The Transhumanist Party roadmap”, featuring speakers, debates, open plenaries, and closed party sessions.

If anyone would like to speak at this event, please get in touch.

Anticipating 2040
(*) Possible date is 3-4 October 2015, though planning is presently at a preliminary stage.

 
