My day job at Symbian is, in effect, to ensure that my colleagues in the management team don’t wake up one morning to some surprising news and say, “Why didn’t we see this coming?”. That is, I have to anticipate so-called “predictable surprises”. Drawing on insight from both inside and outside the company, I try to keep my eye on emerging disruptive trends in technology, markets, and society, in case these trends have the potential to reach some kind of tipping point that will significantly impact Symbian’s success (for good, or for ill). And once I’ve reached the view that a particular trend deserves closer attention, it’s my job to ensure that the company devotes sufficient energy to it – in sufficient time to avoid being “taken by surprise”.
For the last few days, I’ve pursued my interest in disruptive trends some way outside the field of smartphones. I booked a holiday from work in order to attend the conference on Global Catastrophic Risks held at Oxford University’s James Martin 21st Century School.
Instead of just thinking about trends that could destabilise smartphone technology and smartphone markets, I’ve been immersed in discussions about trends that could destabilise human technology and markets as a whole – perhaps even to the extent of ending human civilisation. As well as the more “obvious” global catastrophic risks like nuclear war, nuclear terrorism, global pandemics, and runaway climate change, the conference also discussed threats from meteor and comet impacts, gamma ray bursts, bioterrorism, nanoscale manufacturing, and super-AI.
Interesting (and unnerving) as these individual discussions were, what was even more thought-provoking was the discussion on general obstacles to clear thinking about these risks. We all suffer from biases in our thinking that operate at both individual and group levels. These biases can kick into overdrive when we begin to contemplate global catastrophes. No wonder some people get really hot and bothered when these topics are discussed, or else suffer strong embarrassment and seek to change the topic. Eliezer Yudkowsky considered one set of biases in his presentation “Rationally considering the end of the world”. James Hughes covered another set in “Avoiding Millennialist Cognitive Biases”, as did Jonathan Wiener in “The Tragedy of the Uncommons” and Steve Rayner in “Culture and the Credibility of Catastrophe”. There were also practical examples of how people (and corporations) often misjudge risks, in both “Insurance and catastrophes” by Peter Taylor and “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes” by Toby Ord and co-workers.
So what can we do, to set aside biases and get a better handle on the evaluation and prioritisation of these existential risks? Perhaps the most innovative suggestion came in the presentation by Robin Hanson, “Catastrophe, Social Collapse, and Human Extinction”. Robin is one of the pioneers of the notion of “prediction markets”, so perhaps it is no surprise that he floated the idea of markets in tickets to safe refuges where occupants would have a chance of escaping particular global catastrophes. Some audience members appeared to find the idea distasteful, asking “How can you gamble on mass death?” and “Isn’t it unjust to exclude other people from the refuge?” But the idea is that these markets would allow a Wisdom of Crowds effect to signal to observers which existential risks were growing in danger. I suspect the idea of these tickets to safe refuges will prove impractical, but anything that will help us to escape from our collective biases on these literally earth-shattering topics will be welcome.
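As it happens, Robin Hanson is also the originator of the logarithmic market scoring rule (LMSR), a standard mechanism for running exactly this kind of prediction market. His talk didn’t go into implementation details, but a minimal sketch shows how such a market turns individual trades into an aggregate probability signal – the function names and the liquidity parameter below are illustrative choices of mine, not anything from the conference:

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Hanson's LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b)).

    A trade's price is the change in C; b controls market liquidity."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous prices; each is the market's implied probability
    of that outcome, and they always sum to 1."""
    exps = [math.exp(q / b) for q in quantities]
    total = sum(exps)
    return [e / total for e in exps]

# Two-outcome market: "refuge ticket pays out" vs "it does not".
q = [0.0, 0.0]
print(lmsr_prices(q))  # fresh market: [0.5, 0.5]

# A trader who believes the risk is rising buys 50 "pays out" shares;
# what they pay is the difference in the cost function.
trade_cost = lmsr_cost([q[0] + 50, q[1]]) - lmsr_cost(q)
q[0] += 50
print(lmsr_prices(q))  # implied probability of outcome 0 rises above 0.5
```

The signalling effect Robin described falls out directly: as worried traders buy, the implied probability climbs, and observers can read the risk level straight off the price.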
(Aside: Robin and Eliezer jointly run a high-throughput blog called “Overcoming Bias” that is dedicated to the question “How can we obtain beliefs closer to reality?”)
Robin’s talk also contained the memorable image that the problem with slipping on a staircase isn’t falling down one step, but initiating an escalation effect of tumbling down the whole staircase. Likewise, the biggest danger from the risks covered in the conference isn’t that any one of them will occur in isolation, but that one might trigger a series of inter-related collapses. On a connected point, Peter Taylor mentioned that the worldwide re-insurance industry would have collapsed altogether if a New Orleans-scale weather-induced disaster had followed hot on the heels of the 9-11 tragedies – the system would have had no time to recover. It was a sobering reminder of the potential fragility of much of what we take for granted.
Footnote: For other coverage of this conference, see Ronald Bailey’s comments in Reason. There’s also a 500+ page book co-edited by Nick Bostrom and Milan Cirkovic that contains chapter versions of many of the presentations from the conference (plus some additional material).