dw2

25 October 2015

Getting better at anticipating the future

History is replete with failed predictions. Sometimes pundits predict too much change. Sometimes they predict too little. Frequently they predict the wrong kinds of change.

Even those forecasters who claim a good track record for themselves sometimes turn out, on closer inspection, to have included lots of wiggle room in their predictions – lots of scope for creative reinterpretation of their earlier words.

Of course, forecasts are often made for purposes other than anticipating the events that will actually unfold. Forecasts can serve many other goals:

  • Raising the profile of the forecaster and potentially boosting book sales or keynote invites – especially if the forecast is memorable, and is delivered in a confident style
  • Changing the likelihood that an event predicted will occur – either making it more likely (if the prediction is enthusiastic), or making it less likely (if the prediction is fearful)
  • Helping businesses and organisations to think through some options for their future strategy, via “scenario analysis”.

Given these alternative reasons why forecasters make predictions, it perhaps becomes more understandable that little effort is made to evaluate the accuracy of past forecasts. As reported by Alex Mayyasi,

Organizations spend staggering amounts of time and money trying to predict the future, but no time or money measuring their accuracy or improving on their ability to do it.

This bizarre state of affairs may be understandable, but it is nonetheless highly irresponsible. We can, and should, do better. In a highly uncertain, volatile world, our collective future depends on improving our ability to anticipate forthcoming developments.

Philip Tetlock

Mayyasi was referring to research by Philip Tetlock, a professor at the University of Pennsylvania. Over three decades, Tetlock has accumulated huge amounts of evidence about forecasting. His most recent book, co-authored with journalist Dan Gardner, is a highly readable summary of his research.

The book is entitled “Superforecasting: The Art and Science of Prediction”. I wholeheartedly recommend it.

Superforecasting

The book carries an endorsement by Nobel laureate Daniel Kahneman:

A manual for thinking clearly in an uncertain world. Read it.

Having just finished this book, I echo the praise it has gathered. The book is grounded in the field of geopolitical forecasting, but its content ranges far beyond that starting point. For example, the book can be viewed as one of the best descriptions of the scientific method – with its elevation of systematic, thoughtful doubt, and its search for ways to reduce uncertainty and eliminate bias. The book also provides a handy summary of all kinds of recent findings about human thinking methods.

“Superforecasting” also covers the improvements in the field of medicine that followed from the adoption of evidence-based medicine (in the face, it should be remembered, of initial fierce hostility from the medical profession). Indeed, the book seeks to accelerate a similar evidence-based revolution in the fields of economic and political analysis. It even has hopes to reduce the level of hostility and rancour that tends to characterise political discussion.

As such, I see the book as making an important contribution to the creation of a better sort of politics.

Summary of “Superforecasting”

The book draws on:

  • Results from four years of online competitions for forecasters held under the Aggregative Contingent Estimation project of IARPA (Intelligence Advanced Research Projects Activity)
  • Reflections from contest participants who persistently scored highly in the competition – people who became known as ‘superforecasters’
  • Insight from the Good Judgement Project co-created by Tetlock
  • Reviews of the accuracy of predictions made publicly by politicians, political analysts, and media figures
  • Other research into decision-making, cognitive biases, and group dynamics.

Forecasters and superforecasters from the Good Judgement Project submitted more than 10,000 predictions over four years in response to questions about the likelihood of specified outcomes happening within given timescales over the following 3-12 months. Forecasts addressed the fields of geopolitics and economics.

The book highlights the following characteristics as underlying the success of the superforecasters:

  • Avoidance of taking an ideological approach, which restricts the set of information that the forecaster considers
  • Pursuit of an evidence-based approach
  • Willingness to search out potential sources of disconfirming evidence
  • Willingness to incrementally adjust forecasts in the light of new evidence
  • The ability to break down estimates into a series of constituent questions that can, individually, be more easily calculated
  • The desire to obtain several different perspectives on a question, which can then be combined into an aggregate viewpoint
  • Comfort with mathematical and probabilistic reasoning (see the short scoring sketch after this list)
  • Adoption of careful, precise language, rather than vague terms (such as “might”) whose apparent meaning can change with hindsight
  • Acceptance of contingency rather than ‘fate’ or ‘inevitability’ as being the factor responsible for outcomes
  • Avoidance of ‘groupthink’ in which undue respect among team members prevents sufficient consideration of alternative viewpoints
  • Willingness to learn from past forecasting experiences – including both successes and failures
  • A growth mindset, in which personal characteristics and skill are seen as capable of improvement, rather than being fixed.
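
The probabilistic, incremental style captured in this list can be made concrete. Tetlock’s tournaments scored forecasts with the Brier score – the squared difference between the stated probability and the 0-or-1 outcome – and aggregated many individual forecasts into a combined estimate. Below is a minimal Python sketch of that idea; the question, the forecast figures, and the simple averaging are illustrative assumptions, not details taken from the book.

    # Illustrative only: Brier scoring of probabilistic forecasts, plus a
    # naive aggregation of several forecasters' estimates into one view.

    def brier_score(forecast_prob, outcome):
        """Squared error between a probability forecast and the 0/1 outcome.
        Lower is better: 0.0 is perfect, 0.25 is an uninformative 50% forecast."""
        return (forecast_prob - outcome) ** 2

    # Hypothetical question: "Will event X happen within 12 months?" (it did: outcome = 1)
    outcome = 1
    individual_forecasts = [0.60, 0.75, 0.40, 0.85]  # four forecasters' probabilities

    # Aggregate by simple averaging (real tournaments used more sophisticated weighting)
    aggregate = sum(individual_forecasts) / len(individual_forecasts)

    for p in individual_forecasts:
        print(f"forecast {p:.2f} -> Brier {brier_score(p, outcome):.3f}")
    print(f"aggregate {aggregate:.2f} -> Brier {brier_score(aggregate, outcome):.3f}")

A forecaster who keeps nudging estimates towards the correct probability as new evidence arrives, and a team that pools diverse estimates, both end up with lower (better) Brier scores over many questions – which is exactly the behaviour the list above describes.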

(This section draws on material I’ve added to H+Pedia earlier today. See that article for some links to further reading.)

Human pictures

Throughout “Superforecasting”, the authors provide the human backgrounds of the forecasters whose results and methods feature in the book. The superforecasters have a wide variety of backgrounds and professional experience. What they have in common, however – and where they differ from the other contest participants, whose predictions were less stellar – is the set of characteristics given above.

The book also discusses a number of well-known forecasters, and dissects the causes of their forecasting failures. These include 9/11, the wars in Iraq, the Bay of Pigs fiasco, and many more. There’s much to learn from all these examples.

Aside: Other ways to evaluate futurists

Australian futurist Ross Dawson has recently created a very different method to evaluate the success of futurists. As Ross explains at http://rossdawson.com/futurist-rankings/:

We have created this widget to provide a rough view of how influential futurists are on the web and social media. It is not intended to be rigorous but it provides a fun and interesting insight into the online influence of leading futurists.

The score is computed from the number of Twitter followers, the Alexa score of websites, and the general Klout metric.
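
Ross doesn’t publish the exact formula, so the following is no more than an illustrative sketch of how a composite “influence score” might be assembled from those three inputs. The weights, the log scaling, and the sample figures are all invented for the example.

    # Purely illustrative: combine social/web metrics into one composite influence score.
    # The real widget's weighting is not published; these weights are invented.
    import math

    def influence_score(twitter_followers, alexa_rank, klout):
        followers_component = math.log10(max(twitter_followers, 1))  # diminishing returns
        alexa_component = 1.0 / math.log10(max(alexa_rank, 10))      # lower rank = better
        klout_component = klout / 100.0                              # Klout runs 0-100
        return 40 * followers_component + 30 * alexa_component + 30 * klout_component

    print(influence_score(twitter_followers=25000, alexa_rank=350000, klout=62))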

The widget currently lists 152 futurists. I was happy to find my name at #53 on the list. If I finish writing the two books I have in mind to publish over the next 12 months, I expect my personal ranking to climb 🙂

Yet another approach is to take a look at http://future.meetup.com/, the listing (by size) of the Meetup groups around the world that list “futurism” (or similar) as one of their interests. London Futurists, which I’ve been running (directly and indirectly) over the last seven years, features in third place on that list.

Of course, we futurists vary in the kinds of topics we are ready (and willing) to talk to audiences about. In my own case, I wish to encourage audiences away from “slow-paced” futurism, towards serious consideration of the possibilities of radical changes happening within just a few decades. These changes include not just the ongoing transformation of nature, but the possible transformation of human nature. As such, I’m ready to introduce the topic of transhumanism, so that audiences become more aware of the arguments both for and against this philosophy.

Within that particular subgrouping of futurist meetups, London Futurists ranks as a clear #1, as can be seen from http://transhumanism.meetup.com/.

Footnote

Edge has published a series of videos of five “master-classes” taught by Philip Tetlock on the subject of superforecasting:

  1. Forecasting Tournaments: What We Discover When We Start Scoring Accuracy
  2. Tournaments: Prying Open Closed Minds in Unnecessarily Polarized Debates
  3. Counterfactual History: The Elusive Control Groups in Policy Debates
  4. Skillful Backward and Forward Reasoning in Time: Superforecasting Requires “Counterfactualizing”
  5. Condensing it All Into Four Big Problems and a Killer App Solution

I haven’t had the time to view them yet, but if they’re anything like as good as the book “Superforecasting”, they’ll be well worth watching.

10 October 2015

Technological unemployment – Why it’s different this time

On Tuesday last week I joined members of “The Big Potatoes” for a spirited discussion entitled “Automation Anxiety”. Participants became embroiled in questions such as:

  • To what extent will increasingly capable automation (robots, software, and AI) displace humans from the workforce?
  • To what extent should humans be anxious about this process?

The Big Potatoes website chose an image from the marvellously provocative Channel 4 drama series “Humans” to set the scene for the discussion:

Channel 4 “Humans” advertising hoarding

“Closer to humans” than ever before, the fictional advertisement says, referring to humanoid robots with multiple capabilities. In the TV series, many humans become deeply distressed at the way their roles are being usurped by these new-fangled entities.

Back in the real world, many critics reject these worries. “We’ve heard it all before”, they assert. Every new wave of technological automation has caused employment disruption, yes, but it has also led to new types of employment. The new jobs created will compensate for the old ones destroyed, the critics say.

I see these critics as, most likely, profoundly mistaken. This time things are different. That’s because of the general purpose nature of ongoing improvements in the algorithms for automation. Machine learning algorithms developed with one set of skills in mind turn out to transfer, reasonably straightforwardly, to other sets of skills as well.

The master algorithm

That argument is spelt out in the recent book “The master algorithm” by University of Washington professor of computer science and engineering Pedro Domingos.

TheMasterAlgorithm

The subtitle of that book refers to a “quest for the ultimate learning machine”. This ultimate learning machine can be contrasted with another universal machine, namely the universal Turing machine:

  • The universal Turing machine accepts inputs and applies a given algorithm to compute corresponding outputs
  • The universal learning machine accepts a set of corresponding input and output data, and makes the best possible attempt at inferring the algorithm that would obtain the outputs from the inputs.

For example, given sets of texts written in English, and matching texts written in French, the universal learning machine would infer an algorithm that will convert English into French. Given sets of biochemical reactions of various drugs on different cancers, the universal learning machine would infer an algorithm to suggest the best treatment for any given cancer.
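
To make that contrast concrete, here is a toy Python sketch of the “learning machine” direction: given example input/output pairs, infer the underlying rule, then apply it to unseen inputs. The data and the straight-line learner are stand-in assumptions; Domingos’ point is that a master algorithm would do this across far richer domains (translation, drug response, and so on).

    # Toy illustration of learning a mapping from examples rather than being given it.
    import numpy as np

    # Example input/output pairs, generated by some unknown rule (here: y = 3x + 2)
    inputs  = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    outputs = np.array([2.0, 5.0, 8.0, 11.0, 14.0])

    # A (very) simple learner: fit the best straight line to the examples
    slope, intercept = np.polyfit(inputs, outputs, deg=1)
    print(f"inferred rule: y = {slope:.1f}x + {intercept:.1f}")

    # Apply the inferred rule to an input the learner has never seen
    new_input = 10.0
    print(f"prediction for {new_input}: {slope * new_input + intercept:.1f}")  # ~32.0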

As Domingos explains, there are currently five different “tribes” within the overall machine learning community. Each tribe has its separate origin, and also its own idea for the starting point of the (future) master algorithm:

  • “Symbolists” have their origin in logic and philosophy; their core algorithm is “inverse deduction”
  • “Connectionists” have their origin in neuroscience; their core algorithm is “back-propagation”
  • “Evolutionaries” have their origin in evolutionary biology; their core algorithm is “genetic programming”
  • “Bayesians” have their origin in statistics; their core algorithm is “probabilistic inference”
  • “Analogizers” have their origin in psychology; their core algorithm is “kernel machines”.

(See slide 6 of this Slideshare presentation. Indeed, take the time to view the full presentation. Better again, read Domingos’ entire book.)

What’s likely to happen over the next decade or two is that a single master algorithm will emerge that unifies all the above approaches – and, thereby, delivers great power. It will be similar to the progress made by physics as the fundamental forces of nature have gradually been unified into a single theory.

And as that unification progresses, more and more occupations will be transformed, more quickly than people generally expect. Technological unemployment will rise and rise, as software embodying the master algorithm handles tasks previously thought outside the scope of automation.

Incidentally, Domingos has set out some ambitious goals for what his book will accomplish:

The goal is to do for data science what “Chaos” [by James Gleick] did for complexity theory, or “The Selfish Gene” [by Richard Dawkins] for evolutionary game theory: introduce the essential ideas to a broader audience, in an entertaining and accessible way, and outline the field’s rich history, connections to other fields, and implications.

Now that everyone is using machine learning and big data, and they’re in the media every day, I think there’s a crying need for a book like this. Data science is too important to be left just to us experts! Everyone – citizens, consumers, managers, policymakers – should have a basic understanding of what goes on inside the magic black box that turns data into predictions.

People who comment about the likely impact of automation on employment would do particularly well to educate themselves about the ideas covered by Domingos.

Rise of the robots

There’s a second reason why “this time it’s different” as regards the impact of new waves of automation on the employment market. This factor is the accelerating pace of technological change. As more areas of industry become subject to digitisation, they become, at the same time, subject to automation.

That’s one of the arguments made by perhaps the best writer so far on technological unemployment, Martin Ford. Ford’s recent book “Rise of the Robots: Technology and the Threat of a Jobless Future” builds ably on what previous writers have said.

RiseofRobots

Here’s a sample of review comments about Ford’s book:

Lucid, comprehensive and unafraid to grapple fairly with those who dispute Ford’s basic thesis, Rise of the Robots is an indispensable contribution to a long-running argument.
Los Angeles Times

If The Second Machine Age was last year’s tech-economy title of choice, this book may be 2015’s equivalent.
Financial Times, Summer books 2015, Business, Andrew Hill

[Ford’s] a careful and thoughtful writer who relies on ample evidence, clear reasoning, and lucid economic analysis. In other words, it’s entirely possible that he’s right.
Daily Beast

Surveying all the fields now being affected by automation, Ford makes a compelling case that this is an historic disruption—a fundamental shift from most tasks being performed by humans to one where most tasks are done by machines.
Fast Company

Well-researched and disturbingly persuasive.
Financial Times

Martin Ford has thrust himself into the center of the debate over AI, big data, and the future of the economy with a shrewd look at the forces shaping our lives and work. As an entrepreneur pioneering many of the trends he uncovers, he speaks with special credibility, insight, and verve. Business people, policy makers, and professionals of all sorts should read this book right away—before the ‘bots steal their jobs. Ford gives us a roadmap to the future.
—Kenneth Cukier, Data Editor for the Economist and co-author of Big Data: A Revolution That Will Transform How We Live, Work, and Think

Ever since the Luddites, pessimists have believed that technology would destroy jobs. So far they have been wrong. Martin Ford shows with great clarity why today’s automated technology will be much more destructive of jobs than previous technological innovation. This is a book that everyone concerned with the future of work must read.
—Lord Robert Skidelsky, Emeritus Professor of Political Economy at the University of Warwick, co-author of How Much Is Enough?: Money and the Good Life and author of the three-volume biography of John Maynard Keynes

If you’re still not convinced, I recommend that you listen to this audio podcast of a recent event at London’s RSA, addressed by Ford.

I summarise the takeaway message in this picture, taken from one of my Delta Wisdom workshop presentations:

Tech unemployment curves

  • Yes, humans can retrain over time, to learn new skills, in readiness for new occupations when their former employment has been displaced by automation
  • However, the speed of improvement of the capabilities of automation will increasingly exceed that of humans
  • Coupled with the general purpose nature of these capabilities, it means that, conceivably, from some time around 2040, very few humans will be able to find paid work.

A worked example: a site carpenter

During the Big Potatoes debate on Tuesday, I pressed the participants to name an occupation that would definitely be safe from incursion by robots and automation. What jobs, if any, will robots never be able to do?

One suggestion that came back was “site carpenter”. In this thinking, unfinished buildings are too complex, and too difficult for robots to navigate. Robots that try to make their way through these buildings, to tackle carpentry tasks, will likely fall down. Or, assuming they don’t fall down, how will they cope with finding out that the reality in the building often varies sharply from the official specification? These poor robots will try to perform some carpentry task, but will get stymied when items are in different places from where they’re supposed to be. Or have different tolerances. Or alternatives have been used. Etc. Such environments are too messy for robots to handle.

My answer is as follows. Yes, present-day robots often do fall down. Critics seem to find this hilarious. But this is pretty similar to the fact that young children often fall down while learning to walk, and novice skateboarders often fall down when unfamiliar with their new mode of transport. However, robots will learn fast. One example is shown in this video, of the “Atlas” humanoid robot from Boston Dynamics (now part of Google):

As for robots being able to deal with uncertainty and surprises, I’m frankly struck by the naivety of this question. Of course software can deal with uncertainty. Software calculates courses of action statistically and probabilistically, the whole time. When software encounters information at variance from what it previously expected, it can adjust its planned course of action. Indeed, it can take the same kinds of steps that a human would consider – forming new hypotheses, and, when needed, checking back with management for confirmation.
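
As a concrete (and entirely hypothetical) illustration of that probabilistic adjustment, here is a minimal Bayesian-update sketch in Python: a robot revises its confidence that a joist is where the building plans claim, after a sensor reading disagrees. All the probabilities are invented for the example.

    # Minimal Bayesian update: revise a belief when an observation conflicts with the plan.
    # All numbers are invented for illustration.

    prior = 0.90                 # P(joist is where the drawings say) before scanning
    p_obs_given_true  = 0.20     # P(sensor misses the joist | it really is there)
    p_obs_given_false = 0.95     # P(sensor misses the joist | it is not there)

    # Bayes' rule: P(true | observation) = P(obs | true) * P(true) / P(obs)
    p_obs = p_obs_given_true * prior + p_obs_given_false * (1 - prior)
    posterior = p_obs_given_true * prior / p_obs

    print(f"belief before the sensor reading: {prior:.2f}")
    print(f"belief after the sensor reading:  {posterior:.2f}")  # drops to ~0.65

    if posterior < 0.75:
        print("plan: re-scan, form a new hypothesis, or check back with the site manager")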

The question is a reminder to me that the software and AI community needs to do a much better job of communicating the current capabilities of the field, and the likely improvements ahead.

What does it mean to be human?

For me, the most interesting part of Tuesday’s discussion was when it turned to the following questions:

  • Should these changes be welcomed, rather than feared?
  • What will these forthcoming changes imply for our conception of what it means to be human?

To my mind, technological unemployment will force us to rethink some of the fundamentals of the “Protestant work ethic” that permeates society. That ethic has played a decisive positive role for the last few centuries, but that doesn’t mean we should remain under its spell indefinitely.

If we can change our conceptions, and if we can manage the resulting social transition, the outcome could be extremely positive.

Some of these topics were aired at a conference in New York City on 29th September: “The World Summit on Technological Unemployment”, that was run by Jim Clark’s World Technology Network.

Robotic Steel Workers

One of the many speakers at that conference, Scott Santens, has kindly made his slides available, here. Alongside many graphs on the increasing “winner takes all” nature of modern employment (in which productivity increases but median income declines), Santens offers a different way of thinking about how humans should be spending their time:

We are not facing a future without work. We are facing a future without jobs.

There is a huge difference between the two, and we must start seeing the difference, and making the difference more clear to each other.

In his blogpost “Jobs, Work, and Universal Basic Income”, Santens continues the argument as follows:

When you hate what you do as a job, you are definitely getting paid in return for doing it. But when you love what you do as a job or as unpaid work, you’re only able to do it because of somehow earning sufficient income to enable you to do it.

Put another way, extrinsically motivated work is work done before or after an expected payment. It’s an exchange. Intrinsically motivated work is work only made possible by sufficient access to money. It’s a gift.

The difference between these two forms of work cannot be overstated…

Traditionally speaking, most of the work going on around us is only considered work, if one gets paid to do it. Are you a parent? Sorry, that’s not work. Are you in paid childcare? Congratulations, that’s work. Are you an open source programmer? Sorry, that’s not work. Are you a paid software engineer? Congratulations, that’s work…

What enables this transformation would be some variant of a “basic income guarantee” – a concept that is introduced in the slides by Santens, and also in the above-mentioned book by Martin Ford. You can hear Ford discuss this option in his RSA podcast, where he ably handles a large number of questions from the audience.

What I found particularly interesting from that podcast was a comment made by Anthony Painter, the RSA’s Director of Policy and Strategy who chaired the event:

The RSA will be advocating support for Basic Income… in response to Technological Unemployment.

(This comment comes about 2/3 of the way through the podcast.)

To be clear, I recognise that there will be many difficulties in any transition from the present economic situation to one in which a universal basic income applies. That transition is going to be highly challenging to manage. But these problems of transition are far better problems to have than dealing with the consequences of vastly increased unemployment and social alienation.

Life is being redefined

Just in case you’re still tempted to dismiss the above scenarios as some kind of irresponsible fantasy, there’s one more resource you might like to consult. It’s by Janna Q. Anderson, Professor of Communications at Elon University, and is an extended write-up of a presentation I heard her deliver at the World Future 2015 conference in San Francisco this July.

Janna Anderson keynote

You can find Anderson’s article here. It starts as follows:

The Robot Takeover is Already Here

The machines that replace us do not have to have superintelligence to execute a takeover with overwhelming impacts. They must merely extend as they have been, rapidly becoming more and more instrumental in our essential systems.

It’s the Algorithm Age. In the next few years humans in most positions in the world of work will be nearly 100 percent replaced by or partnered with smart software and robots —’black box’ invisible algorithm-driven tools. It is that which we cannot see that we should question, challenge and even fear the most. Algorithms are driving the world. We are information. Everything is code. We are becoming dependent upon and even merging with our machines. Advancing the rights of the individual in this vast, complex network is difficult and crucial.

The article is described as being a “45 minute read”. In turn, it contains numerous links, so you could spend lots longer following the resulting ideas. In view of the momentous consequences of the trends being discussed, that could prove to be a good use of your time.

By way of summary, I’ll pull out a few sentences from the middle of the article:

One thing is certain: Employment, as it is currently defined, is already extremely unstable and today many of the people who live a life of abundance are not making nearly enough of an effort yet to fully share what they could with those who do not…

It’s not just education that is in need of an overhaul. A primary concern in this future is the reinvention of humans’ own perceptions of human value…

[Another] thing is certain: Life is being redefined.

Who controls the robots?

Despite the occasional certainty in this field (as just listed above, extracted from the article by Janna Anderson), there remains a great deal of uncertainty. I share with my Big Potatoes colleagues the viewpoint that technology does not determine social responses. The question of which future scenario will unfold isn’t just a question of cheer-leading (if you’re an optimist) or cowering (if you’re a pessimist). It’s a question of choice and action.

That’s a theme I’ll be addressing next Sunday, 18th October, at a lunchtime session of the 2015 Battle of Ideas. The session is entitled “Man vs machine: Who controls the robots”.

robots

Here’s how the session is described:

From Metropolis through to recent hit film Ex Machina, concerns about intelligent robots enslaving humanity are a sci-fi staple. Yet recent headlines suggest the reality is catching up with the cultural imagination. The World Economic Forum in Davos earlier this year hosted a serious debate around the Campaign to Stop Killer Robots, organised by the NGO Human Rights Watch to oppose the rise of drones and other examples of lethal autonomous warfare. Moreover, those expressing the most vocal concerns around the march of the robots can hardly be dismissed as Luddites: the Elon-Musk funded and MIT-backed Future of Life Institute sparked significant debate on artificial intelligence (AI) by publishing an open letter signed by many of the world’s leading technologists and calling for robust guidelines on AI research to ‘avoid potential pitfalls’. Stephen Hawking, one of the signatories, has even warned that advancing robotics could ‘spell the end of the human race’.

On the other hand, few technophiles doubt the enormous potential benefits of intelligent robotics: from robot nurses capable of tending to the elderly and sick through to the labour-saving benefits of smart machines performing complex and repetitive tasks. Indeed, radical ‘transhumanists’ openly welcome the possibility of technological singularity, where AI will become so advanced that it can far exceed the limitations of human intelligence and imagination. Yet, despite regular (and invariably overstated) claims that a computer has managed to pass the Turing Test, many remain sceptical about the prospect of a significant power shift between man and machine in the near future…

Why has this aspect of robotic development seemingly caught the imagination of even experts in the field, when even the most remarkable developments still remain relatively modest? Are these concerns about the rise of the robots simply a high-tech twist on Frankenstein’s monster, or do recent breakthroughs in artificial intelligence pose new ethical questions? Is the question more about who builds robots and why, rather than what they can actually do? Does the debate reflect the sheer ambition of technologists in creating smart machines or a deeper philosophical crisis in what it means to be human?

As you can imagine, I’ll be taking serious issue with the above claim, from the session description, that progress with robots will “remain relatively modest”. However, I’ll be arguing for strong focus on questions of control.

It’s not just a question of whether it’s humans or robots that end up in control of the planet. There’s a critical preliminary question as to which groupings and systems of humans end up controlling the evolution of robots, software, and automation. Should we leave this control to market mechanisms, aided by investment from the military? Or should we exert a more general human control of this process?

In line with my recent essay “Four political futures: which will you choose?”, I’ll be arguing for a technoprogressive approach to control, rather than a technolibertarian one.

Four futures

I wait with interest to find out how much this viewpoint will be shared by the other speakers at the session.

15 September 2015

A wiser journey to a better Tomorrowland

Peter Drucker quote

Three fine books that I’ve recently had the pleasure to finish reading all underscore, in their own ways, the profound insight expressed in 1970 by management consultant Peter Drucker:

The major questions regarding technology are not technical but human questions.

That insight sits alongside the observation that technology has been an immensely important driver of change in human history. The technologies of agriculture, steam, electricity, medicine, and information, to name only a few, have led to dramatic changes in the key metrics in human civilisation – metrics such as population, travel, consumption, and knowledge.

But the best results of technology typically depend upon changes happening in parallel in human practice. Indeed, new general purpose technology sometimes initially results, not in an increase of productivity, but in an apparent decline.

The productivity paradox

Writing in Forbes earlier this year, in an article about the “current productivity paradox in healthcare”, Roy Smythe makes the following points:

There were two previous slowdowns in productivity that were not anticipated, and caused great consternation – the adoption of electricity and the computer. The issues at hand with both were the protracted time it took to diffuse the technology, the problem of trying to utilize the new technology alongside the pre-existing technology, and the misconception that the new technology should be used in the same context as the older one.

Although the technology needed to electrify manufacturing was available in the early 1890s, it was not fully adopted for about thirty years. Many tried to use the technology alongside or in conjunction with steam-driven engines – creating all manner of work-flow challenges, and it took some time to understand that it was more efficient to use electrical wires and peripheral, smaller electrical motors (dynamos) than to connect centrally-located large dynamos to the drive shafts and pulleys necessary to disperse steam-generated power. The sum of these activities resulted in a significant, and unanticipated lag in productivity in industry between 1890 and 1920…

However, in time, these new GPTs (general purpose technologies) did result in major productivity gains:

The good news, however, is substantial. In the two decades following the adoption of both electricity and the computer, significant acceleration of productivity was enjoyed. The secret was in the ability to change the context (in the case of the dynamo, taking pulleys down for example) assisting in a complete overhaul of the business process and environment, and the spawning of the new processes, tools and adjuncts that capitalized on the GPT.

In other words, the new general purpose technologies yielded the best results, not when humans were trying to follow the same processes as before, but when new processes, organisational models, and culture were adopted. These changes took time to conceive and adopt. Indeed, the changes took not only time but wisdom.

Wachter Kotler Naam

The Digital Doctor

Robert Wachter’s excellent book “The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age” provides a dazzling analysis of the ways in which the computerisation of health records – creating so-called EHRs (Electronic Health Records) – is passing through a similar phase of disappointing accomplishment. EHRs are often associated with new kinds of errors, with additional workload burdens, and with interfering in the all-important human relationship between doctor and patient. They’re far from popular with healthcare professionals.

Wachter believes these problems to be temporary: EHRs will live up to their promise in due course – but only once people set the hype aside. What’s needed is for designers of healthcare tech products and systems to:

  • Put a much higher priority on ease of use, simplifying usage patterns, and on redesigning the overall flow of activity
  • Recognise and deal with the multiple complexities of the world of medicine.

For a good flavour of Wachter’s viewpoint, consider this extract from a New York Times opinion article he wrote in March, “Why Health Care Tech Is Still So Bad”:

Last year, I saw an ad recruiting physicians to a Phoenix-area hospital. It promoted state-of-the-art operating rooms, dazzling radiology equipment and a lovely suburban location. But only one line was printed in bold: “No E.H.R.”

In today’s digital era, a modern hospital deemed the absence of an electronic medical record system to be a premier selling point.

That hospital is not alone…

I interviewed Boeing’s top cockpit designers, who wouldn’t dream of green-lighting a new plane until they had spent thousands of hours watching pilots in simulators and on test flights. This principle of user-centered design is part of aviation’s DNA, yet has been woefully lacking in health care software design.

Our iPhones and their digital brethren have made computerization look easy, which makes our experience with health care technology doubly disappointing. An important step is admitting that there is a problem, toning down the hype, and welcoming thoughtful criticism, rather than branding critics as Luddites.

In my research, I found humility in a surprising place: the headquarters of I.B.M.’s Watson team, the people who built the computer that trounced the “Jeopardy!” champions. I asked the lead engineer of Watson’s health team, Eric Brown, what the equivalent of the “Jeopardy!” victory would be in medicine. I expected him to describe some kind of holographic physician, like the doctor on “Star Trek Voyager,” with Watson serving as the cognitive engine. His answer, however, reflected his deep respect for the unique challenges of health care. “It’ll be when we have a technology that physicians suddenly can’t live without,” he said.

I’m reminded of a principle I included in a long-ago presentation, “Enabling simply great mobile phones” (PDF), from 2004:

It’s easy to make something hard;
It’s hard to make something easy…

Smartphones will sell very well provided they allow users to build on, and do more of, the things that caused users to buy phones in the first place (communication and messaging, fashion and fun, and safety and connection) – and provided they allow users to do these things simply, even though the phones themselves are increasingly complex.

As for smartphones, so also for healthcare technology: the interfaces need to protect users from the innumerable complications that lurk under the surface. The greater the underlying complexity, the greater the importance of smart interfaces.

Again as for smartphones, once good human interfaces have been put in place, the results of new healthcare technology can be enormous. The New York Times article by Wachter contains a reminder of vexed issues within healthcare – issues that technology has the power to solve:

Health care, our most information-intensive industry, is plagued by demonstrably spotty quality, millions of errors and backbreaking costs. We will never make fundamental improvements in our system without the thoughtful use of technology.

Tomorrowland

In a different way, Steven Kotler’s new book also brings human considerations to the forefront. The title of the book is “Tomorrowland: Our Journey from Science Fiction to Science Fact”. It’s full of remarkable human interest stories that go far beyond simple cheer-leading for the potential of technological progress.

I had the pleasure to help introduce Steven at a recent event in Campus London, which was co-organised by London Futurists and FutureSelf. Steven appeared by Skype.

AtCampusLondon

(photos by Kirsten Zverina)

Ahead of the event, I had hoped to be able to finish reading his book, but because of other commitments I had only managed to read the first 25%. That was already enough to convince me that the book departed from any simple formula of techno-optimism.

In the days after the event, I was drawn back to Kotler’s book time and again, as I kept discovering new depth in its stories. Kotler brings a journalist’s perspective to the hopes, fears, struggles, and (yes) remarkable accomplishments of many technology pioneers. For most of these stories, the eventual outcome is still far from clear. Topics covered included:

  • The difficulties in trying to save the Florida Everglades from environmental collapse
  • Highlights from the long saga of people trying to invent flying cars (you can read that excerpt online here)
  • Difficulties and opportunities with different kinds of nuclear energy
  • The potential for technology to provide quick access to the profound feelings of transcendence reported from so-called “out of the body” and “near death experiences”
  • Some unexpected issues with the business of sperm donation
  • Different ways to enable blind people to see
  • Some missed turnings in the possibilities to use psychedelic drugs more widely
  • Options to prevent bio-terrorists from developing pathogens that are targeted at particular individuals.

There’s a video preview for the book:

The preview is a bit breathless for my liking, but the book as a whole provides some wonderfully rounded explorations. The marvellous potential of new technology should, indeed, inspire awe. But that potential won’t be attained without some very clear thinking.

Apex

The third of the disparate trio of books I want to mention is, itself, the final instalment in a continuous trilogy of fast-paced futurist fiction by Ramez Naam.

In “Apex: Connect”, Naam brings to a climax the myriad chains of human and transhuman drama that started in “Nexus: Install” and ratcheted up in “Crux: Upgrade”.

RamezNaamTrilogy

Having been enthralled by the first two books in this trilogy, I was nervous about starting to listen to the third, since I realised it would likely absorb me for most of the next few days. I was right – but the absorption was worth it.

There’s plenty of technology in this trilogy, which is set several decades in the future: enhanced bodies, enhanced minds, enhanced communications, enhanced artificial intelligence. Critically, there is plenty of human  frailty too: people with cognitive biases, painful past experiences, unbalanced perspectives, undue loyalty to doubtful causes. Merely the fact of more powerful technology doesn’t automatically make people kinder as well as stronger, or wiser as well as smarter.

Another reason I like Apex so much is because it embraces radical uncertainty. Will superintelligence be a force that enhances humanity, or destroys it? Are regulations for new technology an instrument of oppression, or a means to guide people to more trustworthy outcomes? Should backdoors be built into security mechanisms? How should humanity treat artificial general intelligence, to avoid that AGI reaching unpleasant conclusions?

To my mind, too many commentators (in the real world) have pat answers to these questions. They’re too ready to assert that the facts of the matter are clear, and that the path to a better Tomorrowland is evident. But the drama that unfolds in Apex highlights rich ambiguities. These ambiguities require careful thought and wide appreciation. They also require human focus.

Postscript: H+Pedia

In between my other projects, I’m trying to assemble some of the best thinking on the pros and cons of key futurist questions. My idea is to use the new site H+Pedia for that purpose.

hpluspedia

As a starter, see the page on Transhumanism, where I’ve tried to assemble the most important lines of argument for and against taking a transhumanist stance towards the future. The page includes some common lines of criticism of transhumanism, and points out:

  • Where these criticisms miss the mark
  • Where these criticisms have substance – so that transhumanists ought to pay attention.

In some cases, I offer clear-cut conclusions. But in other cases, the balance of the argument is ambiguous. The future is far from being set in stone.

I’ll welcome constructive contributions to H+Pedia from anyone interested in the future of humanity.

Second postscript:

It’s now less than three weeks to the Anticipating 2040 event, where many speakers will be touching on the themes outlined above. Here’s a 90 second preview of what attendees can expect.

11 June 2015

Eating the world – the growing importance of software security

Security is eating the world

In August 2011, Marc Andreessen famously remarked that “software is eating the world”. Writing in the Wall Street Journal, Andreessen set out his view that society was “in the middle of a dramatic and broad technological and economic shift in which software companies are poised to take over large swathes of the economy”.

With his background as pioneering web software architect at Netscape, and with a string of successful investments under his belt at venture capital firm Andreessen-Horowitz, Andreessen was well placed to comment on the potency of software. As he observed,

More and more major businesses and industries are being run on software and delivered as online services—from movies to agriculture to national defence. Many of the winners are Silicon Valley-style entrepreneurial technology companies that are invading and overturning established industry structures.

He then made the following prediction:

Over the next 10 years, I expect many more industries to be disrupted by software, with new world-beating Silicon Valley companies doing the disruption in more cases than not.

Industries to be impacted in this way, Andreessen suggested, would include entertainment, communications, recruitment, automotive, retail, energy, agriculture, finance, healthcare, education, and defence.

In the four years since the phrase was coined, “software is eating the world” has shown every sign of being a profound truth. In more and more sectors of industry, companies that lack deep expertise in software have found themselves increasingly by-passed by competitors. Software skills are no longer a “nice-to-have” optional extra. They’re core to numerous aspects of product development.

But it’s time to propose a variant to the original phrase. A new set of deep skills are going to prove themselves as indispensable for ever larger numbers of industries. This time, the skills are in security. Before long, security will be eating the world. Companies whose software systems fall short on security will be driven out of business.

Dancing pigs

My claim about the growing importance of security may appear to fly in the face of a general principle of user behaviour. This principle was described by renowned security writer Bruce Schneier in his 2000 book “Secrets and Lies”:

If J. Random Websurfer clicks on a button that promises dancing pigs on his computer monitor, and instead gets a hortatory message describing the potential dangers of the applet — he’s going to choose dancing pigs over computer security any day. If the computer prompts him with a warning screen like: “The applet DANCING PIGS could contain malicious code that might do permanent damage to your computer, steal your life’s savings, and impair your ability to have children,” he’ll click OK without even reading it. Thirty seconds later he won’t even remember that the warning screen even existed.

In other words, whatever users may say about the importance of security when asked directly (“yes, of course I take security seriously”), in practice they put a higher priority on watching animated graphics (of dancing pigs, cute kittens, celebrity wardrobe malfunctions, or whatever), and readily accept security risks in pursuit of that goal.

A review paper (PDF) published in 2009 by Cormac Herley of Microsoft Research shared findings that supported this view. Herley reports that, for example, users still typically choose the weakest passwords they can get away with, rather than making greater efforts to keep their passwords unguessable. Users also frequently ignore the advice against re-using the same passwords on different sites (so that, if there’s a security problem with any one of these sites, the user’s data on all other sites becomes vulnerable too).

Herley comments:

There are several ways of viewing this. A traditional view is that users are hopelessly lazy: in the face of dire descriptions of the threat landscape and repeated warnings, they do the minimum possible…

But by the end of his review, he offers a more sympathetic assessment:

“Given a choice between dancing pigs and security, users will pick dancing pigs every time.” While amusing, this is unfair: users are never offered security, either on its own or as an alternative to anything else. They are offered long, complex and growing sets of advice, mandates, policy updates and tips… We have shown that much of this advice does nothing to make users more secure, and some of it is harmful in its own right. Security is not something users are offered and turn down. What they are offered and do turn down is crushingly complex security advice that promises little and delivers less.

Herley’s paper concludes:

How can we help users avoid harm? This begins with a clear understanding of the actual harms they face, and a realistic understanding of their constraints. Without these we are proceeding blindly.

Exponential change

What are the “actual harms” that users face, as a result of insecure software systems or poor personal security habits?

We live in a time of rapid technology change. As software eats the world, it leaves more and more aspects of the world vulnerable to problems in the software – and vulnerable to problems in how that software is used, deployed, and updated.

As a result, the potential harm to users from poor security is constantly increasing. Users are vulnerable in new ways that they had never considered before.

Hacking embedded medical devices

For example, consider one possible unexpected side-effect of being fitted with one of the marvels of modern technology, an implantable heart pacemaker. Security researcher Barnaby Jack of IOActive gave a devastating demo at the Breakpoint conference in October 2012 of how easy it was for an outsider to interfere with the system whereby a pacemaker can be wirelessly recalibrated. The result is summed up in this Computerworld headline, “Pacemaker hack can deliver deadly 830-volt jolt”:

The flaw lies with the programming of the wireless transmitters used to give instructions to pacemakers and implantable cardioverter-defibrillators (ICDs), which detect irregular heart contractions and deliver an electric shock to avert a heart attack.

A successful attack using the flaw “could definitely result in fatalities,” said Jack…

In a video demonstration, Jack showed how he could remotely cause a pacemaker to suddenly deliver an 830-volt shock, which could be heard with a crisp audible pop.

Hacking vehicle control systems

Consider also the predicament that many car owners in Austin, Texas experienced, as a result of the actions of a disgruntled former employee of used car retail firm Texas Auto Center. As Wired reported,

More than 100 drivers in Austin, Texas found their cars disabled or the horns honking out of control, after an intruder ran amok in a web-based vehicle-immobilization system normally used to get the attention of consumers delinquent in their auto payments.

Police with Austin’s High Tech Crime Unit on Wednesday arrested 20-year-old Omar Ramos-Lopez, a former Texas Auto Center employee who was laid off last month, and allegedly sought revenge by bricking the cars sold from the dealership’s four Austin-area lots.

Texas Auto Center had included some innovative new technology in the cars they sold:

The dealership used a system called Webtech Plus as an alternative to repossessing vehicles that haven’t been paid for. Operated by Cleveland-based Pay Technologies, the system lets car dealers install a small black box under vehicle dashboards that responds to commands issued through a central website, and relayed over a wireless pager network. The dealer can disable a car’s ignition system, or trigger the horn to begin honking, as a reminder that a payment is due.

The beauty of the system is that it allows a greater number of customers to purchase cars, even when their credit history looks poor. Rather than extensive up-front tests of the credit-worthiness of a potential purchaser, the system takes advantage of the ability to immobilise a car if repayments should cease. However, as Wired reports,

Texas Auto Center began fielding complaints from baffled customers the last week in February, many of whom wound up missing work, calling tow trucks or disconnecting their batteries to stop the honking. The troubles stopped five days later, when Texas Auto Center reset the Webtech Plus passwords for all its employee accounts… Then police obtained access logs from Pay Technologies, and traced the saboteur’s IP address to Ramos-Lopez’s AT&T internet service, according to a police affidavit filed in the case.

Omar Ramos-Lopez had lost his position at Texas Auto Center the previous month. Following good security practice, his own account on the Webtech Plus system had been disabled. However, it seems he gained access by using an account assigned to a different employee.

At first, the intruder targeted vehicles by searching on the names of specific customers. Then he discovered he could pull up a database of all 1,100 Auto Center customers whose cars were equipped with the device. He started going down the list in alphabetical order, vandalizing the records, disabling the cars and setting off the horns.

His manager ruefully remarked, “Omar was pretty good with computers”.

Hacking thermostats and lightbulbs

Finally, consider a surprise side-effect of attaching a new thermostat to a building. Modern thermostats exchange data with increasingly sophisticated systems that control heating, ventilation, and air conditioning. In turn, these systems can connect into corporate networks, which contain email archives and other confidential documents.

The US Chamber of Commerce, based in Washington, discovered in 2011 that a thermostat in a townhouse it used was surreptitiously communicating with an Internet address somewhere in China. All the careful precautions of the Chamber’s IT department to guard against the possibility of such data seepage, including supervision of the computers and memory sticks used by employees, were undone by this unexpected security vulnerability in what seemed to be an ordinary household object. Information that leaked from the Chamber potentially included sensitive details about US policy for trade with China, as well as other key IP (Intellectual Property).

It’s not only thermostats that have much greater network connectivity these days. Toasters, washing machines, and even energy-efficient lightbulbs contain surprising amounts of software, as part of the implementation of the vision of “smart homes”. And in each case, it opens the potential for various forms of espionage and/or extortion. Former CIA Director David Petraeus openly rejoiced in that possibility, in remarks noted in a March 2012 Wired article “We’ll spy on you through your dishwasher”:

Items of interest will be located, identified, monitored, and remotely controlled through technologies such as RFID, sensor networks, tiny embedded servers, and energy harvesters — all connected to the next-generation internet using abundant, low-cost, and high-power computing…

Transformational is an overused word, but I do believe it properly applies to these technologies, particularly to their effect on clandestine tradecraft.

To summarise: smart healthcare, smart cars, and smart homes, all bring new vulnerabilities as well as new benefits. The same is true for other fields of exponentially improving technology, such as 3D printing, unmanned aerial vehicles (“drones”), smart toys, and household robots.

The rise of robots

Sadly, malfunctioning robots have already been involved in a number of tragic fatalities. In May 2009, an Oerlikon MK5 anti-aircraft system was part of the equipment used by 5,000 South African troops in a large-scale military training exercise. On that morning, the controlling software suffered what a subsequent enquiry would call a “glitch”. Writing in the Daily Mail, Gavin Knight recounted what happened:

The MK5 anti-aircraft system, with two huge 35mm cannons, is essentially a vast robotic weapon, controlled by a computer.

While it’s one thing when your laptop freezes up, it’s quite another when it is controlling an auto-loading magazine containing 500 high-explosive rounds…

“There was nowhere to hide,” one witness stated in a report. “The rogue gun began firing wildly, spraying high explosive shells at a rate of 550 a minute, swinging around through 360 degrees like a high-pressure hose.”

By the time the robot has emptied its magazine, nine soldiers lie dead. Another 14 are seriously injured.

Deaths due to accidents involving robots have also occurred throughout the United States. A New York Times article in June 2014 gives the figure of “at least 33 workplace deaths and injuries in the United States in the last 30 years.” For example, in a car factory in December 2001,

An employee was cleaning at the end of his shift and entered a robot’s unlocked cage. The robot grabbed his neck and pinned the employee under a wheel rim. He was asphyxiated.

And in an aluminium factory in February 1996,

Three workers were watching a robot pour molten aluminium when the pouring unexpectedly stopped. One of them left to flip a switch to start the pouring again. The other two were still standing near the pouring operation, and when the robot restarted, its 150-pound ladle pinned one of them against the wall. He was killed.

To be clear, in none of these cases is there any suggestion of foul play. But to the extent that robots can be remotely controlled, the possibility arises for industrial vandalism.

Indeed, one of the most infamous cases of industrial vandalism (if that is the right description in this case) is the way in which the Stuxnet computer worm targeted the operation of fast-spinning centrifuges inside the Iranian programme to enrich uranium. Stuxnet took advantage of at least four so-called “zero-day security vulnerabilities” in Microsoft Windows software – vulnerabilities that Microsoft did not know about, and for which no patches were available. When the worm found itself installed on computers with particular programmable logic controllers (PLCs), it initiated a complex set of monitoring and alteration of the performance of the equipment attached to the PLC. The end result was that the centrifuges tore themselves apart, reportedly setting back the Iranian nuclear programme by a number of years.

Chillingly, what Stuxnet could do to centrifuges, variant software configurations could have similar effects on other industrial infrastructure – including energy and communication grids.

Therefore, whereas there is much to celebrate about the growing connectivity of “the Internet of Things”, there is also much to fear about it.

The scariest book

Many of the examples I’ve briefly covered above – the hacking of embedded medical devices, vehicle control systems, and thermostats and lightbulbs – as well as the upsides and downsides of “the rise of robots” – are covered in greater detail in a book I recently finished reading. The book is “Future Crimes”, by former LAPD police officer Marc Goodman. Goodman has spent the last twenty years working on cyber security risks with organisations such as Interpol, NATO, and the United Nations.

The full title of Goodman’s book is worth savouring: “Future Crimes: Everything is connected, everything is vulnerable, and what we can do about it.” Nikola Danaylov, host of the Singularity 1on1 podcast, recently described Future Crimes as “the scariest book I have ever read in my life”. That’s a sentiment I fully understand. The book has a panoply of “Oh my god” moments.

What the book covers is not only the exponentially growing set of vulnerabilities that our exponentially connected technology brings in its wake, but also the large set of people who may well be motivated to exploit these vulnerabilities. This includes home and overseas government departments, industrial competitors, disgruntled former employees, angry former friends and spouses, ideology-fuelled terrorists, suicidal depressives, and a large subset of big business known as “Crime Inc”. Criminals have regularly been among the very first to adopt new technology – and it will be the same with the exploitation of new generations of security vulnerabilities.

There’s much in Future Crimes that is genuinely frightening. It’s not alone in the valuable task of raising public awareness of increasing security vulnerabilities. I also recommend Kim Zetter’s fine investigative work “Countdown To Zero Day: Stuxnet and the launch of the world’s first digital weapon”. Some of the same examples appear in both books, providing added perspective. In both cases the message is clear – cybersecurity threats are likely to mushroom.

On the positive front, technology can devise countermeasures as well as malware. There has long been an arms race between software virus writers and software antivirus writers. This arms race is now expanding into many new areas.

If the race is lost, it means that security will eat the world in a bad way: the horror stories that are told throughout both Future Crimes and Countdown To Zero Day will magnify in both number and scope. In that future scenario, people will look back fondly on the present day as a kind of innocent paradise, in which computers and computer-based systems generally worked reliably (despite occasional glitches). Safe, clean computer technology might become as rare as bottled oxygen in an environment where smog and pollution dominate – something that is only available in small quantities, to the rich and powerful.

If the race is won, there will still be losers. I’m not just referring to Crime Inc, and other would-be exploiters of security vulnerabilities, whose ambitions will be thwarted. I’m referring to all the companies whose software will fall short of the security standards of the new market leaders. These are companies who pay lip service to the importance of robust, secure software, but whose products in practice disappoint customers. By that time, indeed, customers will long have moved on from preferring dancing pigs to good security. The prevalence of bad news stories – in their daily social media traffic – will transform their appreciation of the steps they need to take to remain as safe as possible. Their priorities will have changed. They’ll be eagerly scouring reports as to which companies have world-class software security, and which companies, on the other hand, have products that should be avoided. Companies in the former camp will eat those in the latter camp.

Complications with software updates

As I mentioned above, there can be security vulnerabilities not only intrinsic to a given piece of software, but also in how that software is used, deployed, and updated. I’ll finish this article by digging more deeply into the question of software updates. These updates have a particularly important role in the arms race between security vulnerabilities and security improvements.

Software updates are a key part of modern technological life. These updates deliver new functionality to users – such as a new version of a favourite app, or an improved user interface for an operating system. They also deliver security fixes, along with other bug fixes. In principle, as soon as possible after a major security vulnerability has been identified and analysed, the vendor will make available a fix to that programming error.

However, updates are something that many users dislike. On the one hand, they like receiving improved functionality. On the other hand, they fear that:

  • The upgrade will be time-consuming, locking them out of their computer systems at a time when they need to press on with urgent work
  • The upgrade will itself introduce new bugs, and break familiar patterns of how they use the software
  • Some of their applications will stop working, or will work in strange ways, after the upgrade.

The principle of “once bitten, twice shy” applies here. One bad experience with a software upgrade – such as favourite add-on applications getting lost in the process – may prejudice users against accepting any new upgrades.

My own laptop recently popped up an invitation for me to reserve a free upgrade from its current operating system – Windows 7.1 – to the forthcoming Windows 10. I confess that I have yet to click the “yes, please reserve this upgrade” button. I fear, indeed, that some of the legacy software on my laptop (including apps that are more than ten years old, and whose vendors no longer exist) will become dysfunctional.

The Android operating system for smartphones faces a similar problem. New versions of the operating system, which include fixes to known security problems, often fail to make their way onto users’ Android phones. In some cases, this is because the phones are running a reconfigured version of Android, which includes modifications introduced by a phone manufacturer and/or network operator. Any update has to wait until similar reconfigurations have been applied to the new version of the operating system – and that can take a long time, due to reluctance on the part of the phone manufacturer or network operator. In other cases, it’s simply because users decline to accept an Android upgrade when it is offered to them. Once bitten, twice shy.

Accordingly, there’s competitive advantage available to any company that makes software upgrades as smooth and reliable as possible. This will become even more significant as users grow in their awareness of the need to have security vulnerabilities in their computer systems fixed speedily.

But there’s a very awkward problem lurking around the upgrade process. Computer systems can sometimes be tricked into installing malicious software, whilst thinking it is a positive upgrade. In other words, the upgrade process can itself be hacked. For example, at the Black Hat conference in July 2009, IOActive security researcher Mike Davis demonstrated a nasty vulnerability in the software update mechanism in the smart electricity meters that were to be installed in homes throughout the Pacific North West of the United States.

For a riveting behind-the-scenes account of this particular research, see the book Countdown To Zero Day. In brief, Davis found a way to persuade a smart meter that it was being offered a software upgrade by a neighbouring, trusted smart meter, whereas it was in fact receiving software from an external source. This subterfuge was accomplished by extracting the same network encryption key that was hard-wired into every smart meter in the collection, and then presenting that encryption key as apparent (but bogus) evidence that the communication could be trusted. Once the meter had installed the upgrade, the new software could prevent the meter from responding to any further upgrades. It could also switch off the electricity supply to the home. As a result, the electricity supplier would be obliged to send engineers to visit every single house that had been affected by the malware. In the simulated demo shown by Davis, this was as many as 20,000 separate houses within just a 24-hour period.
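The underlying design flaw – a single secret key baked into every device, where knowing the key is enough to author a trusted update – is worth spelling out. A more robust approach is for each device to hold only a vendor public key and to verify a digital signature over every update, so that no device contains the secret needed to forge one. Here is a minimal sketch of that idea in Python, using the cryptography library; the surrounding update flow and parameter choices are my own illustrative assumptions, not a description of any actual meter.

```python
# Minimal sketch of asymmetric verification of a firmware update.
# Assumption: the vendor signs each update offline with an RSA private key,
# and every device ships with only the corresponding public key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

def update_is_genuine(firmware: bytes, signature: bytes, vendor_public_key_pem: bytes) -> bool:
    """Return True only if the signature over the firmware verifies against the vendor key."""
    public_key = serialization.load_pem_public_key(vendor_public_key_pem)
    try:
        public_key.verify(
            signature,
            firmware,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: a device refuses to install anything that fails this check,
# and a compromised neighbour holds no secret that lets it forge a passing signature.
```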

Uncharitably, we might think to ourselves that an electricity supplier is probably the kind of company to make mistakes with its software upgrade mechanism. As Mike Davis put it, “the guys that built this meter had a short-term view of how it would work”. We would expect, in contrast, that a company whose core business was software (and which had been one of the world’s leading software companies for several decades) would have no such glitches in its system for software upgrades.

Unexpectedly, one of the exploits utilised by the Stuxnet team was a weakness in part of the Microsoft Update system – a part that had remained unchanged for many years. The exploit was actually used by a piece of malware known as Flame, which shared many characteristics with Stuxnet. Mikko Hyppönen, Chief Research Officer of the Finnish antivirus firm F-Secure, reported the shocking news as follows in a corporate blogpost tellingly entitled “Microsoft Update and The Nightmare Scenario”:

About 900 million Windows computers get their updates from Microsoft Update. In addition to the DNS root servers, this update system has always been considered one of the weak points of the net. Antivirus people have nightmares about a variant of malware spoofing the update mechanism and replicating via it.

Turns out, it looks like this has now been done. And not by just any malware, but by Flame…

Flame has a module which appears to attempt to do a man-in-the-middle attack on the Microsoft Update or Windows Server Update Services system. If successful, the attack drops a file called WUSETUPV.EXE to the target computer.

This file is signed by Microsoft with a certificate that is chained up to Microsoft root.

Except it isn’t signed really by Microsoft.

Turns out the attackers figured out a way to misuse a mechanism that Microsoft uses to create Terminal Services activation licenses for enterprise customers. Surprisingly, these keys could be used to also sign binaries…

Having a Microsoft code signing certificate is the Holy Grail of malware writers. This has now happened.

Hyppönen’s article ends with some “good news in the bad news” which nevertheless sounds a strong alarm about similar things going wrong (with worse consequences) in the future:

I guess the good news is that this wasn’t done by cyber criminals interested in financial benefit. They could have infected millions of computers. Instead, this technique has been used in targeted attacks, most likely launched by a Western intelligence agency.
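One broad lesson from the Flame episode is that a certificate should only be trusted for the purposes it was genuinely issued for. As a purely illustrative sketch – not Microsoft’s actual validation logic, and omitting the chain building, revocation checking, and other steps a real verifier performs – here is how a verifier might confirm that a certificate’s Extended Key Usage actually permits code signing before accepting a signed binary:

```python
# Illustrative check: before trusting a certificate for code signing, confirm that
# its Extended Key Usage includes code signing. This is only one of the checks a
# real verifier performs; chain validation and revocation are omitted here.
from cryptography import x509
from cryptography.x509.oid import ExtendedKeyUsageOID

def allows_code_signing(cert_pem: bytes) -> bool:
    cert = x509.load_pem_x509_certificate(cert_pem)
    try:
        eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
    except x509.ExtensionNotFound:
        return False  # no EKU extension: refuse to treat it as a code-signing cert
    return ExtendedKeyUsageOID.CODE_SIGNING in eku
```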

How not to be eaten

Despite the threats that I’ve covered above, I’m optimistic that software security and software update mechanisms can be significantly improved in the months and years ahead – there’s plenty of scope for improvement.

One reason for this optimism is that I know that smart people have been thinking hard about these topics for many years. Good solutions are already available, ready for wider deployment, in response to stronger market readiness for such solutions.

But it will take more than technology to win this arms race. It will take political resolve. For too long, software companies have been able to ship software that has woefully substandard security. For too long, companies have prioritised dancing pigs over rock-hard security. They’ve written into their software licences that they accept no liability for problems arising from bugs in their software. They’ve followed, sometimes passionately, and sometimes half-heartedly, the motto from Facebook’s Mark Zuckerberg that software developers should “move fast and break things”.

That kind of behaviour may have been appropriate in the infancy of software. No longer.


11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” tracks down and shoots the AGI researcher played by Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000-word essay, Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic licence may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential of stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack their own expertise in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI. AGI, in this view, still lies far in the future. As stated by Demis Hassabis, founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter. They’re no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all three of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people should read that book. It manages to bring a great many serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. It’s the same with Stephen Hawking and with Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider, as just one example, Stuart Russell, a computer science professor at the University of California, Berkeley, and co-author of the 1,152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry:

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the comment about timescales. The first is to point out that Demis Hassabis himself sees no reason for complacency, despite the potential for AGI to require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

  • N <= 5: No way
  • 5 < N <= 10: Small possibility
  • 10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under Chatham House rules). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” causing society to redouble its support for AGI research (especially when AGI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down – believing instead the figures for the larger circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) have four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

15 February 2015

Ten years of quantified self


Ten years. Actually 539 weeks. I’ve been recording my weight every morning since 23 October 2004, and adding a new data point to my chart every weekend.

10 years of Quantified Self

I’ve been recording my weight ever since I read that people who monitor their weight on a regular basis are more likely to avoid it ballooning upwards. There’s instant feedback, which allows me to make adjustments to my personal health regime. With ten years of experience under my (varyingly-sized) belt, I’m strongly inclined to continue the experiment.

The above chart started life on my Psion Series 5mx PDA. Week after week, I added data, and watched as the chart expanded. Eventually, the graph hit the limits of what could be displayed on a single screen on the S5mx (width = 480 pixels), so I had to split the chart into two. And then three. Finally, after a number of hardware failures in my stock of S5mx devices, I transferred the data into an Excel spreadsheet on my laptop several months ago. Among other advantages, it once again lets me see the entire picture.
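For anyone tempted to keep a similar log, the mechanics are straightforward. The sketch below shows one way of doing it today, with a plain CSV file and a short Python script; the file name and column headings are simply my illustrative assumptions, not the format I actually used on the Psion or in Excel.

```python
# Illustrative sketch: maintain a weight log in a CSV file with columns "date,kg"
# and redraw a chart of weekly averages. File and column names are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

log = pd.read_csv("weight_log.csv", parse_dates=["date"])
weekly = log.set_index("date")["kg"].resample("W").mean()  # one data point per week

ax = weekly.plot(title="Weight, weekly average (kg)")
ax.set_xlabel("Date")
ax.set_ylabel("kg")
plt.tight_layout()
plt.savefig("weight_chart.png")
```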

This morning, 14th Feb 2015, I saw the scales dip down to a point I had last reached in September 2006. This result seems to confirm the effectiveness of my latest dietary regime – which I’ve been following since July. Over these seven months, I’ve shrunk from a decidedly unhealthy (and unsightly) 97 kg down to 81 kg.

In terms of the BMI metric (Body Mass Index), that’s a reduction from 31.2 – officially “obese” – down to 26.4. 26.4 is still “marginally overweight”, since, for men, the top end of the BMI scale for a “healthy weight for adults” is 24.9. With my height, that would mean a weight of 77 kg. So there’s still a small journey for me to travel. But I’m happy to celebrate this incremental improvement!
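For readers who want to check the arithmetic, BMI is simply weight in kilograms divided by the square of height in metres. A small illustration follows; the height figure is my own assumption, chosen only to land roughly in line with the numbers above:

```python
# BMI = weight (kg) / height (m) squared. The height used here is an assumption
# (roughly 1.76 m), included only to illustrate the calculation.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def weight_for_bmi(target_bmi: float, height_m: float) -> float:
    return target_bmi * height_m ** 2

height = 1.76
print(round(bmi(97, height), 1))            # roughly 31: officially "obese"
print(round(bmi(81, height), 1))            # roughly 26: still marginally overweight
print(round(weight_for_bmi(24.9, height)))  # roughly 77 kg: top of the healthy range
```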

The NHS page on BMI issues this sobering advice:

BMI of 30 or more: a BMI above 30 is classified as obese. Being obese puts you at a raised risk of health problems such as heart disease, stroke and type 2 diabetes. Losing weight will bring significant health improvements…

BMI score of 25 or more: your BMI is above the ideal range and this score means you may be overweight. This means that you’re heavier than is healthy for someone of your height. Excess weight can put you at increased risk of heart disease, stroke and type 2 diabetes. It’s time to take action…

As the full chart of my weight over the last ten years shows, I’ve had three major attempts at “action” to achieve a healthier body mass.

The first: For a while in 2004 and 2005, I restricted myself to two Herbalife meal preparations a day – even when I was travelling.

Later, in 2011, I ran across the book by Gary Taubes, “Why We Get Fat: And What to Do About It”, which made a great deal of sense to me. Taubes emphasises that some kinds of calories are more damaging to health than others. Specifically, carbohydrates, such as wheat, change the body’s metabolism to make it retain more weight. I also read “Wheat Belly” by William Davis. Here’s an excerpt from the description of that book:

Renowned cardiologist William Davis explains how eliminating wheat from our diets can prevent fat storage, shrink unsightly bulges and reverse myriad health problems.

Every day we eat food products made of wheat. As a result millions of people experience some form of adverse health effect, ranging from minor rashes and high blood sugar to the unattractive stomach bulges that preventative cardiologist William Davis calls ‘wheat bellies’. According to Davis, that fat has nothing to do with gluttony, sloth or too much butter: it’s down to the whole grain food products so many people eat for breakfast, lunch and dinner.

After witnessing over 2,000 patients regain their health after giving up wheat, Davis reached the disturbing conclusion that wheat is the single largest contributor to the nationwide obesity epidemic – and its elimination is key to dramatic weight loss and optimal health.

In Wheat Belly, Davis exposes the harmful effects of what is actually a product of genetic tinkering being sold to the public as ‘wheat’ and provides readers with a user-friendly, step-by-step plan to navigate a new, wheat-free lifestyle. Benefits include: substantial weight loss, correction of cholesterol abnormalities, relief from arthritis, mood benefits and prevention of heart disease.

As a result, I cut back on carbohydrates – and was pleased to see my weight plummet once again. For a while – until I re-acquired many of my former carb-enjoying habits, whoops.

That takes me to regime number three. This time, I’ve followed the more recent trend known as “5+2”. According to this idea, people can eat normally for, say, five days in the week, and then eat a very reduced amount of calories on the other two days (known as “fasting days”). My initial worry about this approach was that I wasn’t sure I’d eat sensible foods on the two low-calorie days.

That’s when I ran across the meal preparations of the LighterLife company. These include soups, shakes, savoury meals, porridge, and bars. Each of these meals is just 150-200 calories. LighterLife suggest that people eat, on their low-calorie days, four of these meals. These preparations include sufficient proteins, fibre, and 100% of the recommended daily intake of key vitamins and minerals.

To be clear, I am not a medical doctor, and I urge anyone who is considering adopting a diet to obtain their own medical advice. I also recognise that different people have different metabolisms, so a diet that works for one person won’t necessarily work for someone else. However, I can share my own personal experience, in case it inspires others to do their own research:

  • Instead of 5+2, I generally follow 3+4. That is, I have four low-calorie days each week, along with three other days in which I tend to indulge myself (except that, on these other days, I still try to avoid consuming too many carbs, such as wheat, bread, rice, and potatoes)
  • On the low-calorie days, I generally eat around 11.30am, 2.30pm, 5.30pm, and 8.30pm
  • If I’m working at home, I’ll include soups, a savoury meal, and shakes; if I’m away from home, I’ll eat three (or four) different bars, that I pack into my back-pack at the beginning of the day
  • On the low-calorie days, it’s important to drink as well as to eat, but I avoid any drinks with calories in them. In practice, I find drinks of herbal teas to be very effective at dulling any sense of hunger I’m experiencing
  • In addition to eating less, I continue to do a lot of walking (e.g. between Waterloo Station and meeting locations in Central London), as well as other forms of exercise (like on the golf driving range or golf course).

Note: I know that BMI is far from being a complete representation of personal healthiness. However, I view it as a good starting point.

To round off my recommendations for diet-related books that I have particularly enjoyed reading, I’ll add “Mindless eating” by Brian Wansink to the two I mentioned earlier. I listened to the Audible version of that book. It’s hilarious, but thought-provoking, and the research it describes seems very well founded:

Every day, we each make around 200 decisions about eating. But studies have shown that 90% of these decisions are made without any conscious choice. Dr Brian Wansink lays bare the facts about our true eating habits to show that awareness of our patterns can allow us to lose weight effectively and without serious changes to our lives. Dr Wansink’s revelations include:

  • Food mistakes we all make in restaurants, supermarkets and at home
  • How we are manipulated by brand, appearance and parental habits more than price and our choices
  • Our emotional relationship with food and how we can overcome it to revitalise our diets.

Forget calorie counting and starving yourself and learn the truth about why we overeat in this fascinating, innovative guide.

Three books

I’ll finish by thanking my friends, family, and colleagues for their gentle and thoughtful encouragement, over the years, for me to keep an eye on my body mass, and on the general goodness of what I eat. “Health is the first wealth”.

7 September 2014

Beyond ‘Smartphones and beyond’

You techno-optimists don’t understand how messy real-life projects are. You over-estimate the power of technology, and under-estimate factors such as sociology, psychology, economics, and biology – not to mention the cussed awkwardness of Murphy’s Law.

That’s an example of the kind of retort that has frequently come to my ears in the last few years. I have a lot of sympathy for that retort.

I don’t deny being an optimist about what technology can accomplish. As I see things:

  • Human progress has taken place by the discovery and adoption of engineering solutions – such as fire, the wheel, irrigation, sailing ships, writing, printing, the steam engine, electricity, domestic kitchen appliances, railways and automobiles, computers and the Internet, plastics, vaccinations, anaesthetic, contraception, and better hygiene
  • Forthcoming technological improvements can propel human experience onto an even higher plane – with our minds and bodies both being dramatically enhanced
  • As well as making us stronger and smarter, new technology can help us become kinder, more collaborative, more patient, more empathetic, less parochial, and more aware of our cognitive biases and blindspots.

But equally, I see lots of examples of technology failing to live up to the expectations of techno-optimists. It’s not just that technology is a two-edged sword, and can scar as well as salve. And it’s not just that technology is often mis-employed in search of a “techno-solution” when a piece of good old-fashioned common sense could result in a better approach. It’s that new technologies – whether ideas for new medical cures, new sustainable energy sources, or improved AI algorithms – often take considerably longer than expected to create useful products. Moreover, these products often have weaker features or poorer quality than anticipated.

Here’s an example of technology slowdown. A 2012 article in Nature coined the clever term “Eroom’s Law” to describe a steady decline in productivity of R&D research in new drug discovery:

Diagnosing the decline in pharmaceutical R&D efficiency

Jack W. Scannell, Alex Blanckley, Helen Boldon & Brian Warrington

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms.

In other words, although the better-known Moore’s Law describes a relatively steady increase in computational power, Eroom’s Law describes a relatively steady decrease in the effectiveness of research and development within the pharmaceutical industry. By the way, Eroom isn’t a person: it’s Moore spelt backwards.
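The headline numbers are easy to sanity-check: a quantity that halves every 9 years shrinks by a factor of 2^(t/9) after t years, which over roughly six decades lands close to the 80-fold decline quoted above. Here is a quick back-of-envelope check (my own calculation, not taken from the Nature paper):

```python
# Back-of-envelope check on Eroom's Law: if drugs approved per billion dollars of
# R&D spend halve every 9 years, the implied decline factor over t years is 2**(t/9).
def fold_decline(years: float, halving_period_years: float = 9.0) -> float:
    return 2 ** (years / halving_period_years)

print(round(fold_decline(57)))  # roughly 80-fold, in line with the quoted figure
print(round(fold_decline(60)))  # roughly 100-fold over a full six decades
```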

The statistics are bleak, as can be seen in a graph from Derek Lowe’s In the pipeline blog:

R&D trend

But despite this dismal trend, I still see plenty of reason for measured optimism about the future of technology. That’s despite the messiness of real-world projects, out-dated regulatory and testing systems, perverse incentive schemes, institutional lethargy, and inadequate legacy platforms.

This measured optimism comes to the surface in the later stages of the book I have just e-published, at the end of a two-year period of writing it. The book is entitled Smartphones and beyond: lessons from the remarkable rise and fall of Symbian.

As I write in the opening chapter of that book (an excerpt is available online):

The story of the evolution of smartphones is fascinating in its own right – for its rich set of characters, and for its colourful set of triumphs and disasters. But the story has wider implications. Many important lessons can be drawn from careful review of the successes and, yes, the failures of the smartphone industry.

When it comes to the development of modern technology, things are rarely as simple as they first appear. Some companies bring great products to the market, true. These companies are widely lauded. But the surface story of winners and losers can conceal many twists and turns of fortune. Behind an apparent sudden spurt of widespread popularity, there frequently lies a long gestation period. The eventual blaze of success was preceded by the faltering efforts of many pioneers who carved new paths into uncertain terrain. The steps and missteps of these near-forgotten pioneers laid the foundation for what was to follow.

So it was for smartphones. It is likely to be the same with many of the other breakthrough technologies that have the potential to radically transform human experience in the decades ahead. They are experiencing their missteps too.

I write this book as an ardent fan of the latent power of modern technology. I’ve seen smartphone technology playing vital roles in the positive transformation of human experience, all over the world. I expect other technologies to play even more radical roles in the near future – technologies such as wearable computing, 3D printing, synthetic biology, nanotechnology, neuro-enhancement, rejuvenation biotech, artificial intelligence, and next generation robotics. But, as with smartphones, there are likely to be many disappointments en route to eventual success. Indeed, even the “eventual success” cannot be taken for granted.

General principles about the progress of complex technology emerge from reflecting on the details of actual examples. These details – the “warts and all”, to use the phrase attributed to Oliver Cromwell – can confound naive notions as to how complex technology should be developed and applied. As I’ll show from specific examples in the chapters ahead, the details show that failure and success often co-exist closely within the same project. A single project often contains multiple layers, belonging to numerous different chains of cause and effect.

It is my sincere hope that an appreciation of real-world examples of these multiple layers of smartphone development projects will enable a better understanding of how to guide the future evolution of other forms of smart technology. I’ll describe what I call “the core smartphone skillset”, comprising excellence in the three dimensions of “platforms”, “marketing”, and “execution”. To my mind, these are the key enablers of complex technological progress. These enablers have a critical role to play for smartphones, and beyond. Put together well, these enablers can climb mountains.

I see the core smartphone skillset as having strong applicability in wider technological areas. That skillset provides the basis for overcoming the various forms of inertia which are holding back the creation of important new solutions from emerging technologies. The existence of that skillset underlies my measured optimism in the future.

But there’s nothing inevitable about how things will turn out. The future holds many potential scenarios, with varying degrees of upside and downside. The question of which scenarios will become actual, depends on inspired human vision, choice, action, and follow-through. Fortune sometimes hinges on the smallest of root causes. Effects can then cascade.

Hits and misses

As well as the description of the “core smartphone skillset” – which I see as having strong applicability in wider technological areas – the book contains my thoughts on the things that Symbian did particularly well over the years, resulting in it becoming the leading smartphone operating system for many years in the first decade of this century:

  1. Investors and supporters who were prepared to take a long-term view of their investments
  2. Regular deliveries against an incremental roadmap
  3. Regularly taking the time to improve the architecture of the software and the processes by which it was delivered
  4. High calibre software development personnel
  5. Cleanly executed acquisitions to boost the company’s talent pool
  6. Early and sustained identification of the profound importance of smartphones
  7. Good links with the technology foresight groups and other roadmap planning groups within a range of customers
  8. A product that was open to influence, modification, and customisation by customers
  9. Careful attention given to enabling an ecosystem of partners
  10. An independent commercial basis for the company, rather than it being set up as a toothless “customers’ cooperative”
  11. Enabling competition.

Over the course of that time, Symbian:

  • Opened minds as to what smartphones could accomplish. In particular, people realised that there was much more they could do with mobile phones, beyond making phone calls. This glimpse encouraged other companies to enter this space, with alternative smartphone platforms that achieved, in the end, considerably greater success
  • Developed a highly capable touch UI platform (UIQ), years before Android/iPhone
  • Supported a rich range of different kinds of mobile devices, all running versions of the same underlying software engine; in particular, Symbian supported the S60 family of devices with its ergonomically satisfying one-handed operating mode
  • Achieved early demonstrations of breakthrough capabilities for mobile phones, including streaming multimedia, smooth switching between wifi and cellular networks, maps with GPS updates, the use of a built-in compass and accelerometer, and augmented reality – such as in the 2003 “Mozzies” (“Mosquitos”) game for the Siemens SX1
  • Powered many ground-breaking multimedia smartphones, imaging smartphones, business smartphones, and fashion smartphones
  • Achieved sales of some 500 million units – with the majority being shipped by Nokia, but with 40 million being shipped inside Japan from 2003 onwards, by Fujitsu, Sharp, Mitsubishi, and Sony Ericsson
  • Held together an alliance of competitors, among the set of licensees and partners of Symbian, with the various companies each having the opportunity to benefit from solutions initially developed with some of their competitors in mind
  • Demonstrated that mobile phones could contain many useful third party applications, without at the same time becoming hotbeds of viruses
  • Featured in some of the best-selling mobile phones of all time, up till then, such as the Nokia 5230, which sold 150 million units.

Alongside the list of “greatest hits”, the book also contains a (considerably longer) list of “greatest misses”, “might-have-beens”, and alternative histories. The two lists are distilled from wide-ranging “warts and all” discussions in earlier chapters of the book, featuring many excerpts from my email and other personal archives.


To my past and present colleagues from the Symbian journey, I offer my deep thanks for all their contributions to the creation of modern smartphones. I also offer my apologies for cases when my book brings back memories of episodes that some participants might prefer to forget. But Symbian’s story is too important to forget. And although there is much to regret in individual actions, there is much to savour in the overall outcome. We can walk tall.

The bigger picture now is that other emerging technology sectors risk repeating the stumbles of the smartphone industry. Whereas the smartphone industry recovered from its early stumbles, these other industries might not be so fortunate. They may die before they get off the ground. Their potential benefits might remain forever out of grasp, or be sorely delayed.

If the unflattering episodes covered in Smartphones and beyond can help increase the chance of these new technology sectors addressing real human need quickly, safely, and fully, then I believe it will be worth all the embarrassment and discomfort these episodes may cause to Symbian personnel – me included. We should be prepared to learn from one of the mantras of Silicon Valley: “embrace failure”. Reflecting on failure can provide the launchpad for greater future success, whether in smartphones, or beyond.

Early reviewers of the book have commented that the book is laden with lessons, from the pioneer phase of the smartphone industry, for the nascent technology sectors where they are working – such as wearable computing, 3D printing, social robots, and rejuvenation biotechnology. The strength of these lessons is that they are presented, in this book, in their multi-dimensional messiness, with overlapping conflicting chains of cause and effect, rather than as cut-and-dried abstracted principles.

In the pages of Smartphones and beyond, I do choose to highlight some specific lessons from particular episodes of smartphone success or smartphone failure. Some lessons deserve to be shouted out. For other episodes, I leave it to readers to reach their own conclusions. In yet other cases, frankly, it’s still not clear to me what lessons should be drawn. Writers who follow in my tracks will no doubt offer their own suggestions.

My task in all these cases is to catalyse a discussion, by bringing stories to the table that have previously lurked unseen or under-appreciated. My fervent hope is that the discussion will make us all collectively wiser, so that emerging new technology sectors will proceed more quickly to deliver the profound benefits of which they are capable.

Some links

For an extended series of extracts from the different chapters in Smartphones and beyond, see the official website for the book.

The book is available for Kindle download from Amazon: UK site and International (US) site.

  • Note that readers without Kindle devices can read the book on a convenient app on their PC or tablet (or smartphone!) – these apps are freely available.

I haven’t created a hard-copy print version. The book would need to be split into three parts to make it physically convenient. Far better, in my view, to be able to carry the book on a light electronic device, with “search” and “bookmark” facilities that very usefully augment the reading experience.

Opportunities to improve

Smartphones and beyond no doubt still contains a host of factual mistakes, errors in judgement, misattributions, personal biases, blind spots, and other shortcomings. All these faults are the responsibility of the author. To suggest changes, either in an updated edition of this book or in some other follow-up project, please get in touch.

Where the book includes copies of excerpts from Internet postings, I have indicated the online location where the original article could be found at the time of writing. In case an article has moved or been deleted since then, it can probably be found again via search engines or the Internet archive, https://archive.org/. If I have inadvertently failed to give due credit to an original writer, or if I have included more text than the owner of the original material wishes, I will make amends in a later edition, upon reasonable request. Quoted information where no source is explicitly indicated is generally taken from copies of my emails, memos in my electronic diary, or other personal archives.

One of the chapters of this book is entitled “Too much openness”. Some readers may feel I have, indeed, been too open with some of the material I have shared. However, this material is generally at least 3-5 years old. Commercial lines of business no longer depend on it remaining secret. So I have applied a historian’s licence. We can all become collectively wiser by discussing it now.

Footnote

Finally, one other apology is due. As I’ve given my attention over the last few months to completing Smartphones and beyond, I’ve deprioritised many other tasks, and have kept colleagues from various important projects waiting for longer than they expected. I can’t promise that I’ll be able to pick up all these other pieces quickly again – that kind of overcommitment is one of the failure modes discussed throughout Smartphones and beyond. But I feel like I’m emerging into a new phase of activity – “Beyond ‘Smartphones and Beyond’”.

To help transition to that new phase, I’ve moved my corporate Delta Wisdom website to a new format (WordPress), and rejigged what had become rather stale content. It’s time for profound change.

30 January 2014

A brilliant example of communication about science and humanity

Do you enjoy great detective puzzles? Do you like noticing small anomalies, and turning them into clues to an unexpected explanation? Do you like watching world-class scientists at work, piecing together insights to create new theories, and coping with disappointments when their theories appear to be disproved?

In the book “Our mathematical universe”, the mysteries being addressed are some of the very biggest imaginable:

  • What is everything made out of?
  • Where does the universe come from? For example, what made the Big Bang go “bang”?
  • What gives science its authority to speak with so much confidence about matters such as the age and size of the universe?
  • Is it true that the constants of nature appear remarkably “fine-tuned” so as to allow the emergence of life – in a way suggesting a miracle?
  • What does modern physics (including quantum mechanics) have to teach us about mind and consciousness?
  • What are the chances of other intelligent life existing in our galaxy (or even elsewhere in our universe)?
  • What lies in the future of the human race?

The author, Max Tegmark, is a Swedish-born professor of physics at MIT. He’s made a host of significant contributions to the development of cosmology – some of which you can read about in the book. But in his book, he also shows himself in my view to be a first class philosopher and a first class communicator.

Indeed, this may be the best book on the philosophy of physics that I have ever read. It also has important implications for the future of humanity.

There are some very big ideas in the book. It gives reasons for believing that our universe exists alongside no fewer than four different types of parallel universes. The “level 4 multiverse” is probably one of the grandest conceptions in all of philosophy. (What’s more, I’m inclined to think it’s the correct description of reality. At its heart, despite its grandness, it’s actually a very simple theory, which is a big plus in its favour.)

Much of the time, the writing in the book is accessible to people with pre-university level knowledge of science. On occasion, the going gets harder, but readers should be able to skip over these sections. I recommend reading the book all the way through, since the last chapter contains many profound ideas.

I think you’ll like this book if:

  • You have a fondness for pure mathematics
  • You recognise that the scientific explanation of phenomena can be every bit as uplifting as pre-scientific supernatural explanations
  • You are ready to marvel at the ingenuity of scientific investigators going all the way back to the ancient Greeks (including those who first measured the distance from the Earth to the Sun)
  • You are critical of “quantum woo woo” hand-waving that says that quantum mechanics proves that consciousness is somehow a non-local agent (and that minds will survive bodily death)
  • You want to find out more about Hugh Everett, the physicist who first proposed that “the quantum wave function never collapses”
  • You have a hunch that there’s a good answer to the question “why is there something rather than nothing?”
  • You want to see scientists in action, when they are confronted by evidence that their favoured theories are disproved by experiment
  • You’re ready to laugh at the misadventures that a modern cosmologist experiences (including eminent professors falling asleep in the audience of his lectures)
  • You’re interested in the considered viewpoint of a leading scientist about matters of human existential risk, including nuclear wars and the technological singularity.

Even more than all these good reasons, I highlight this book as an example of what the world badly needs: clear, engaging advocacy of the methods of science and reason, as opposed to mysticism and obscurantism.

Footnote: For my own views about the meaning of quantum mechanics, see my earlier blogpost “Schrödinger’s Rabbits”.

13 January 2014

Six steps to climate catastrophe

In a widely read Rolling Stone article from July 2012, “Global Warming’s Terrifying New Math”, Bill McKibben introduced what he called

Three simple numbers that add up to global catastrophe.

The three numbers are as follows:

  1. 2 degrees Celsius – the threshold of average global temperature rise “which scientists (and recently world leaders at the G8 summit) have agreed we must not cross, for fear of triggering climate feedbacks which, once started, will be almost impossible to stop and will drive accelerated warming out of our control”
  2. 565 Gigatons – the amount of carbon dioxide that can be added into the atmosphere by mid-century with still an 80% chance of the temperature rise staying below two degrees
  3. 2,795 Gigatons – “the amount of carbon already contained in the proven coal and oil and gas reserves of the fossil-fuel companies, and the countries (think Venezuela or Kuwait) that act like fossil-fuel companies. In short, it’s the fossil fuel we’re currently planning to burn”.

As McKibben highlights,

The key point is that this new number – 2,795 – is higher than 565. Five times higher.

He has a vivid metaphor to drive his message home:

Think of two degrees Celsius as the legal drinking limit – equivalent to the 0.08 blood-alcohol level below which you might get away with driving home. The 565 gigatons is how many drinks you could have and still stay below that limit – the six beers, say, you might consume in an evening. And the 2,795 gigatons? That’s the three 12-packs the fossil-fuel industry has on the table, already opened and ready to pour.

We have five times as much oil and coal and gas on the books as climate scientists think is safe to burn. We’d have to keep 80 percent of those reserves locked away underground to avoid that fate. Before we knew those numbers, our fate had been likely. Now, barring some massive intervention, it seems certain.

He continues,

Yes, this coal and gas and oil is still technically in the soil. But it’s already economically above ground – it’s figured into share prices, companies are borrowing money against it, nations are basing their budgets on the presumed returns from their patrimony. It explains why the big fossil-fuel companies have fought so hard to prevent the regulation of carbon dioxide – those reserves are their primary asset, the holding that gives their companies their value. It’s why they’ve worked so hard these past years to figure out how to unlock the oil in Canada’s tar sands, or how to drill miles beneath the sea, or how to frack the Appalachians.
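
For readers who want to sanity-check the arithmetic behind these numbers, here is a minimal sketch in Python. It uses only the two figures quoted above (565 and 2,795 gigatons), and reproduces both the “five times higher” comparison and the “80 percent locked away” conclusion.

```python
# Arithmetic behind McKibben's comparison, using the figures quoted above
carbon_budget_gt = 565       # gigatons of CO2 that can still be emitted while keeping
                             # an ~80% chance of staying below 2 degrees Celsius
proven_reserves_gt = 2795    # gigatons of CO2 embedded in proven fossil-fuel reserves

ratio = proven_reserves_gt / carbon_budget_gt                   # how much bigger the reserves are
unburnable_share = 1 - carbon_budget_gt / proven_reserves_gt    # share that must stay underground

print(f"Reserves exceed the budget by a factor of {ratio:.1f}")         # ~4.9: "five times higher"
print(f"Share of reserves to leave unburned: {unburnable_share:.0%}")   # ~80%
```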

The burning question

A version of Bill McKibben’s Global Warming’s Terrifying New Math essay can be found as the foreword to the recent book “The Burning Question” co-authored by Duncan Clark and Mike Berners-Lee. The subtitle of the book has a somewhat softer message than in the McKibben essay:

We can’t burn half the world’s oil, coal, and gas. So how do we quit?

But the introduction makes it clear that constraints on our use of fossil fuel reserves will need to go deeper than “one half”:

Avoiding unacceptable risks of catastrophic climate change means burning less than half of the oil, coal, and gas in currently commercial reserves – and a much smaller fraction of all the fossil fuels under the ground…

Notoriously, climate change is a subject that is embroiled in controversy and intemperance. The New York Times carried an opinion piece, “We’re All Climate-Change Idiots” containing this assessment from Anthony Leiserowitz, director of the Yale Project on Climate Change Communication:

You almost couldn’t design a problem that is a worse fit with our underlying psychology.

However, my assessment of the book “The burning question” by Berners-Lee and Clark is that it is admirably objective and clear. That impression was reinforced when I saw Duncan Clark speak about the contents of the book at London’s RSA a couple of months ago. On that occasion, the meeting was constrained to less than an hour, for both presentation and audience Q&A. It was clear that the speaker had a lot more that he could have said.

I was therefore delighted when he agreed to speak on the same topic at a forthcoming London Futurists event, happening in Birkbeck College from 6.15pm to 8.30pm on Saturday 18th January. You can find more details of the London Futurists event here. Following our normal format, we’ll have a full two hours of careful examination of the overall field.

Six steps to climate catastrophe

One way to examine the risks of climate catastrophe induced by human activity is to consider the following six-step chain of cause and effect; a small worked sketch of how the first four links combine follows the list:

  1. Population – the number of people on the earth
  2. Affluence – the average wealth of people on the earth
  3. Energy intensity – the average amount of energy used to create a unit of wealth
  4. Carbon intensity – the average carbon emissions caused by each unit of energy
  5. Temperature impact – the average increase of global temperature caused by carbon emissions
  6. Global impact – the broader impact on life on earth caused by increased average temperature.
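
The first four links in this chain multiply together to give total annual carbon emissions, in the spirit of the well-known Kaya identity. Here is a minimal illustrative sketch in Python; the input values are round numbers of my own choosing, not figures taken from the book.

```python
# The first four links of the chain, multiplied together (Kaya-identity style).
# All input values below are round illustrative numbers, not data from the book.

population = 7.0e9          # 1. people on the earth
affluence = 10_000          # 2. average wealth: GDP per person per year, in $
energy_intensity = 6.0      # 3. energy used per unit of wealth, in MJ per $
carbon_intensity = 0.07     # 4. emissions per unit of energy, in kg of CO2 per MJ

emissions_kg = population * affluence * energy_intensity * carbon_intensity
emissions_gt = emissions_kg / 1e12    # convert kg to gigatonnes

print(f"Illustrative global emissions: {emissions_gt:.0f} GtCO2 per year")   # ~29 with these inputs
```

Because the four factors multiply, reducing any one of them reduces the total; steps 5 and 6 (temperature impact and global impact) then depend on how those cumulative emissions translate into warming, and into consequences on the ground.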

As Berners-Lee and Clark discuss in their book, there’s scope to debate, and/or to alter, each of these causal links. Various commentators recommend:

  • A reduction in the overall human population
  • Combatting society’s deep-seated imperatives to pursue economic growth
  • Achieving greater affluence with less energy input
  • Switching to energy sources (such as “renewables”) with reduced carbon emissions
  • Seeing (or engineering) different causes that complicate the relation between carbon emissions and temperature rises
  • Seeing (or engineering) beneficial aspects to global increases in temperature, rather than adverse ones.

What they point out, however, is that despite significant progress to reduce energy intensity and carbon intensity, the other factors seem to be increasing out of control, and dominate the overall equation. Specifically, affluence shows no signs of decreasing, especially when the aspirations of huge numbers of people in emerging economies are taken into consideration.

I see this as an argument to accelerate work on technical solutions – further work to reduce the energy intensity and carbon intensity factors. I also see it as an argument to rapidly pursue investigations of what Berners-Lee and Clark call “Plan B”, namely various forms of geoengineering. This extends beyond straightforward methods for carbon capture and storage, and includes possibilities such as

  • Trying to use the oceans to take more carbon dioxide out of the air and store it in an inert form
  • Screening some of the incoming heat from the sun by, for example, creating more clouds, or injecting aerosols into the upper atmosphere.

But Berners-Lee and Clark remain apprehensive about one overriding factor. This is the one described earlier: the fact that so much investment is tied up in the share prices of oil companies, on the assumption that huge amounts of the known reserves of fossil fuels will all be burnt, relatively soon. Providing better technical fixes will, they argue, be insufficient to halt the ongoing juggernaut of conversion from fossil fuels into huge cash profits for industry – a juggernaut whose side-effect is accumulated carbon emissions that increase the risk of horrendous climate consequences.

For this reason, they see the need for concerted global action to ensure that the prices being paid for the acquisition and/or consumption of fossil fuels fully take into account the downside costs to the global environment. This will be far from easy to achieve, but the book highlights some practical steps forwards.

Waking up

The first step – as so often, in order to succeed in a complex change project – is to engender a sustained sense of urgency. Politicians won’t take action unless there is strong public pressure for action. This public pressure won’t exist whilst people remain in a state of confusion, disinterest, dejection, and/or helplessness. Here’s an extract from near the end of their book:

It’s crucial that more people hear the simple facts loud and clear: that climate change presents huge risks, that our efforts to solve it so far haven’t worked, and that there’s a moral imperative to constrain unabated fossil fuel use on behalf of current and especially future generations.

It’s often assumed that the world isn’t ready for this kind of message – that it’s too negative or scary or confrontational. But reality needs facing head on – and anyhow the truth may be more interesting and inspiring than the watered down version.

I expect many readers of this blogpost to have questions in mind – or possibly objections (rather than just questions) – regarding at least some of what’s written above. This topic deserves a 200 page book rather than just a short blogpost.

Rather than just urging people to read the book in question, I have set up the London Futurists event previously mentioned. I am anticipating robust but respectful in-depth discussion.

Beyond technology

One possible response is that the acceleration of technological progress will deliver sufficient solutions (e.g. reducing energy intensity and carbon intensity) long before we need to worry about the climate reaching any tipping point. Solar energy may play a decisive role – possibly along with new generations of nuclear power technology.

That may turn out to be true. But my own engineering experience with developing complex technological solutions is that the timetable is rarely something that anyone can be confident about in advance. So yes, we need to accelerate the technology solutions. But equally, as an insurance policy, we need to take actions that will buy ourselves more time, in order for these technological solutions to come to full fruition. This insurance policy inevitably involves the messy worlds of politics and economics, alongside the developments that happen in the technological arena.

This last message comes across uncomfortably to people who dislike any idea of global coordinated action in politics or economics. People who believe in “small government” and “markets as free as possible” don’t like to contemplate global scale political or economic action. That is, no doubt, another reason why the analysis of global warming and climate change is such a contentious issue.

22 December 2013

A muscular new kid on the block

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man. – George Bernard Shaw, “Man and Superman”, 1903

How far should we go, to be the best that we can be? If personal greatness lies at the other side of an intense effort, should we strain every muscle, muster every personal resource, and vigorously push away every distraction, in order to seize that crown?

For example, should we accept the “Transhumanist Wager”, as dramatically portrayed in the trenchant new novel of the same name by former world-traveller and award-winning National Geographic journalist Zoltan Istvan?

The book, which hit the #1 best-seller spot on Amazon a few months back (in both Philosophy and Science Fiction Visionary and Metaphysical), is a vivid call to action. It’s a call for people around the world to wake up to the imminent potential for a radical improvement in the human condition. The improvement can be earned by harnessing and accelerating ongoing developments in medicine, engineering, and technology.

However, in the nightmare near-future world portrayed in the novel, that improvement will require an intense effort, since the seats of global power are resolutely opposed to any potential for dramatic, human-driven improvement.

For example, under the influence of what the novel calls “a rogue group of right-wing politicians – those who considered Sunday church a central part of their existence”, the US government passes sweeping laws forbidding experimentation in stem cell therapies, genetic reprogramming, human enhancement, and life-extension. Istvan puts into the mouth of the President of the United States the soporific remarks, “Good old-fashioned, basic health, that’s what the people really want”.

That ambition sounds… reasonable, yet it falls far, far short of the potential envisioned by the hero of the novel, Jethro Knights. He has much bigger sights: “My words define a coming new species”.

Anyone reading “The Transhumanist Wager” is likely to have strong reactions on encountering Jethro Knights. Knights may become one of the grand characters of modern fiction. He challenges each of us to rethink how far each of us would be prepared to go, to become the best that we can be. Knights brazenly talks about himself as an “omnipotender”: “an unyielding individual whose central aim is to contend for as much power and advancement as he could achieve, and whose immediate goal is to transcend his human biological limitations in order to reach a permanent sentience”. Throughout the novel, his actions match his muscular philosophy. I read it with a growing mix of horror and, yes, admiration.

The word “wager” in the book’s title recalls the infamous “Pascal’s Wager”. French philosopher and mathematician Blaise Pascal argued in the 17th century that since there was a possibility that God existed, with the power to bestow on believers “an infinitely happy life”, we should take steps to acquire the habit of Christian belief: the potential upsides far outweigh any downsides. Belief in God, according to Pascal, was a wager worth taking. However, critics have long observed that there are many “possible” Gods, each of whom seems to demand different actions as indicators of our faith; the wager alone is no guide as to the steps that should be taken to increase the chance of “an infinitely happy life”.

The transhumanist wager observes, analogously, that there is a possibility that in the not-too-distant future, science and technology will have the ability to bestow on people, if not an “infinitely happy” life, a lifestyle that is hugely expanded and enhanced compared to today’s. Jethro Knights expounds the consequence:

The wager… states that if you love life, you will safeguard that life, and strive to extend and improve it for as long as possible. Anything else you do while alive, any other opinion you have, any other choice you make to not safeguard, extend, and improve that life, is a betrayal of that life…

This is a historic choice that each man and woman on the planet must make. The choice shall determine the rest of your life and the course of civilisation.

Knights is quite the orator – and quite a fighter, too. As the novel proceeds to its climactic conclusion, Knights assembles like-minded scientists and engineers who create a formidable arsenal of remote-controlled weaponry – robots that can use state-of-the-art artificial intelligence to devastating effect. The military stance is needed, in response to the armed forces which the world’s governments are threatening to deploy against the maverick new entity of “Transhumania” – a newly built seasteading nation of transhumanists – which Knights now leads.

It is no surprise that critics of the book have compared Jethro Knights to Joseph Stalin. These criticisms come from within the real-world transhumanist community that Istvan might have counted on to rally around the book’s call to action. Perhaps these potential allies were irritated by the description of mainstream transhumanists that appears in the pages of the book: “an undersized group of soft-spoken individuals, mostly aged nerds trying to gently reshape their world… their chivalry and sense of embedded social decency was their downfall”.

I see four possible objections to the wager that lies at the heart of this novel – and to any similar single-minded undertaking to commit whole-heartedly to a methodology of personal transcendence:

  1. First, by misguidedly pursuing “greatness”, we might lose our grasp of the “goodness” we already possess, and end up in a much worse place than before.
  2. Second, instead of just thinking about our own personal advancement, we have important obligations to our families, loved ones, and our broader social communities.
  3. Third, by being overly strident, we may antagonise people and organisations who could otherwise be our allies.
  4. Fourth, we may be wrong in our analysis of the possibility for future transcendence; for example, faith in science and technology may be misplaced.

Knights confronts each of these objections, amidst the drama to establish Transhumania as his preferred vehicle to human transcendence. Along the way, the novel features other richly exaggerated larger-than-life characters embodying key human concerns – love, spirituality, religion, and politics – who act as counters to Knights’ own headstrong ambitions. Zoe Bach, the mystically inclined physician who keeps spirituality on the agenda, surely speaks for many readers when she tells Knights she understands his logic but sees his methods as not being realistic – and as “not feeling right”.

The book has elements that highlight an uplifting vision for what science and technology can achieve, freed from the meddling interference of those who complain that “humans shouldn’t play at being God”. But it also serves as an awful warning for what might ensue if forces of religious fundamentalism and bio-conservatism become increasingly antagonised, rather than inspired, by the transformational potential of that science and technology.

My takeaway from the book, therefore, is to work harder at building bridges, rather than burning them. We will surely need these bridges in the troubled times that lie ahead. That is my own “transhumanist wager”.

Postscripts

1.) A version of the above essay currently features on the front-page of the online Psychology Today magazine.

2.) If you can be in San Francisco on 1st February, you can see Zoltan Istvan, the author of The Transhumanist Wager, speaking at the conference “Transhuman Visions” organised by Brighter Brains.

3.) I recently chaired a London Futurists Hangout On Air discussion on The Transhumanist Wager. The panelists, in addition to Zoltan Istvan, were Giulio Prisco, Rick Searle, and Chris T. Armstrong. You can view the recording of the discussion below. But to avoid spoiling your enjoyment of the book, you might prefer to read the book before you delve into the discussion.
