dw2

19 June 2020

Highlighting probabilities

Filed under: communications, education, predictability, risks — David Wood @ 7:54 pm

Probabilities matter. If society fails to appreciate probabilities, and insists on seeing everything in certainties, a bleak future awaits us all (probably).

Consider five predictions, and common responses to these predictions.

Prediction A: If the UK leaves the EU without a deal, the UK will experience a significant economic downturn.

Response A: We’ve heard that prediction before. Before the Brexit vote, it was predicted that a major economic downturn would happen straightaway if the result was “Leave”. That downturn failed to take place. So we can discard the more recent prediction. It’s just “Project Fear” again.

Prediction B (made in Feb 2020): We should anticipate a surge in infections and deaths from Covid-19, and take urgent action to prevent transmissions.

Response B: We’ve heard that prediction before. Bird flu was going to wreak havoc. SARS and MERS, likewise, were predicted to kill hundreds of thousands. These earlier predictions were wrong. So we can discard the more recent prediction. It’s just “Project Pandemic” again.

Prediction C: We should prepare for the advent of artificial superintelligence, the most disruptive development in all of human history.

Response C: We’ve heard that prediction before. AIs more intelligent than humans have often been predicted. No such AI has been developed. These earlier predictions were wrong. So there’s no need to prepare for ASI. It’s just “Project Hollywood Fantasy” again.

Prediction D: If we don’t take urgent action, the world faces a disaster from global warming.

Response D: We’ve heard that prediction before. Climate alarmists told us some time ago “you only have twelve years to save the planet”. Twelve years passed, and the planet is still here. So we can ignore what climate alarmists are telling us this time. It’s just “Project Raise Funding for Climate Science” again.

Prediction E (made in mid-December 1903): One day, humans will fly through the skies in powered machines that are heavier than air.

Response E: We’ve heard that prediction before. All sorts of dreamers and incompetents have naively imagined that the force of gravity could be overcome. They have all come to ruin. All these projects are a huge waste of money. Experts have proved that heavier than air flying machines are impossible. We should resist this absurdity. It’s just “Langley’s Folly” all over again.

The vital importance of framing

Now, you might think that I write these words to challenge the scepticism of the people who made the various responses listed. It’s true that these responses do need to be challenged. In each case, the response involves an unwarranted projection from the past into the future.

But the main point on my mind is a bit different. What I want to highlight is the need to improve how we frame and present predictions.

In all the above cases – A, B, C, D, E – the response refers to previous predictions that sounded similar to the more recent ones.

Each of these earlier predictions should have been communicated as follows:

  • There’s a possible outcome we need to consider. For example, the possibility of an economic downturn immediately after a “Leave” vote in the Brexit referendum.
  • That outcome is possible, though not inevitable. We can estimate a rough probability of it happening.
  • The probability of the outcome will change if various actions are taken. For example, swift action by the Bank of England, after a Leave vote, could postpone or alleviate an economic downturn. Eventually leaving the EU, especially without a deal in place, is likely to accelerate and intensify the downturn. (The short sketch after this list shows one way to make such a framing explicit.)
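To make that framing concrete, here’s a toy sketch in Python. Every number and label in it is invented purely for illustration – it is not an actual forecast – but it shows a prediction carrying an explicit rough probability, together with the stated actions and events that move that probability:

    # A toy probabilistic framing of Prediction A. All numbers are invented
    # placeholders, not actual estimates.
    forecast = {
        "outcome": "significant UK economic downturn",
        "baseline_probability": 0.60,  # a rough, explicitly uncertain estimate
        "adjustments": {
            "swift Bank of England action": -0.15,   # action reduces the risk
            "leaving the EU without a deal": +0.25,  # event increases the risk
        },
    }

    def updated_probability(forecast, events):
        """Apply the stated adjustments for whichever events actually occur."""
        p = forecast["baseline_probability"]
        for event in events:
            p += forecast["adjustments"].get(event, 0.0)
        return min(max(p, 0.0), 1.0)  # keep the result a valid probability

    print(round(updated_probability(forecast, ["swift Bank of England action"]), 2))
    # prints 0.45

The particular numbers don’t matter; what matters is that both the uncertainty and the levers that move it are explicit.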

In other words, our discussions of the future need to embrace uncertainty, and need to emphasise how human action can alter that uncertainty.

What’s more, the mention of uncertainty must be forceful, rather than something that gets lost in small print.

So the message itself must be nuanced, but the fact that the message is nuanced must be underscored.

All this makes things more complicated. It disallows any raw simplicity in the messaging. Understandably, many activists and enthusiasts prefer simple messages.

However, if a message has raw simplicity, and is subsequently seen to be wrong, observers will be likely to draw the wrong conclusion.

That kind of wrong conclusion lies behind each of the flawed responses A to E above.

Sadly, lots of people who are evidently highly intelligent fail to take proper account of probabilities in assessing predictions of the future. At the back of their minds, an argument like the following holds sway:

  • An outcome predicted by an apparent expert failed to materialise.
  • Therefore we should discard anything else that apparent expert says.

Quite likely the expert in question was aware of the uncertainties affecting their prediction. But they failed to emphasise these uncertainties strongly enough.

Transcending cognitive biases

As we know, we humans are prey to large numbers of cognitive biases. Even people with a good education, and who are masters of particular academic disciplines, regularly fall foul of these biases. They seem to be baked deep into our brains, and may even have conveyed some survival benefit, on average, in times long past. In the more complicated world we’re now living in, we need to help each other to recognise and resist the ill effects of these biases. Including the ill effects of the “probability neglect” bias which I’ve been writing about above.

Indeed, one of the most important lessons from the current chaotic situation arising from the Covid-19 pandemic is that society in general needs to raise its understanding of a number of principles related to mathematics:

  • The nature of exponential curves – and how linear thinking often comes to grief by failing to appreciate exponentials (see the sketch below)
  • The nature of probabilities and uncertainties – and how binary thinking often comes to grief by failing to appreciate probabilities.
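On the first point, here’s a minimal sketch in Python – the starting value and growth rate are invented purely for illustration – contrasting a fixed daily increment with a fixed daily percentage increase:

    # Linear vs exponential growth, with invented illustrative numbers.
    linear, exponential = 100, 100.0
    for day in range(1, 31):
        linear += 100        # linear: a fixed increment each day
        exponential *= 1.25  # exponential: a fixed percentage increase each day
        if day % 10 == 0:
            print(f"day {day}: linear={linear:,} exponential={int(exponential):,}")
    # day 10: linear=1,100 exponential=931
    # day 20: linear=2,100 exponential=8,673
    # day 30: linear=3,100 exponential=80,779

For the first week or so the two series look comparable; by day 30 the exponential one is more than twenty times larger. That gap is exactly where linear intuition comes to grief.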

This raising of understanding won’t be easy. But it’s a task we should all embrace.

Image sources: Thanasis Papazacharias and Michel Müller from Pixabay.

Footnote 1: The topic of “illiteracy about exponentials and probabilities” is one I’ll be mentioning in this Fast Future webinar taking place on Sunday evening.

Footnote 2: Some people who offer a rationally flawed response like the ones above are, sadly, well aware of the flawed nature of their response, but they offer it anyway. They do so since they believe the response may well influence public discussion, despite being flawed. They put a higher value on promoting their own cause than on keeping the content of the debate as rational as possible. They don’t mind adding to the irrationality of public discussion. That’s a topic for a separate discussion, but it’s my view that we need to find both “carrots” and “sticks” to discourage people from deliberately promoting views they know to be irrational. And, yes, you guessed it, I’ll be touching on that topic too on Sunday evening.

12 May 2020

Five scenarios to unwind the lockdown. Are there more?

Filed under: challenge, healthcare, politics, risks — David Wood @ 1:55 pm

The lockdown has provided some much-needed breathing space. As a temporary measure, it has helped to prevent our health services from becoming overwhelmed. In many (though not yet in all) countries, the curves of death counts have been slowed, and then tilted downwards. Financial payments to numerous employees unable to work have been very welcome.

As such, the lockdown – adopted in part by individuals and families making their own prudent decisions, and in part due to government advice and edict – can be assessed, provisionally, as a short-term success, given the frightful circumstances in which it emerged.

But what next? The present set of restrictions seems unsustainable. Might a short-term success transition into a medium-term disaster?

The UK’s Chancellor of the Exchequer, Rishi Sunak, recently gave the following warning, referring to payments made by the government to employees whose companies have stopped paying them:

We are potentially spending as much on the furlough scheme as we do on the NHS… Clearly that is not a sustainable situation

What’s more, people who have managed to avoid meeting friends and relatives for two or three months may become overwhelmed by the increasing strain of separation, especially as mental distress accumulates, or existing family relations rupture.

But any simple unwinding of the lockdown seems fraught with danger. Second waves of infection could shoot up, once social distancing norms are relaxed. In country after country around the world, tentative steps to allow greater physical proximity have already led to spikes in the numbers of infections, followed by reversals of the relaxation. I recently shared on my social media this example from South Korea:

South Korea: bars and nightclubs to close down for 30 more days after health officials tracked 13 new Covid cases to a single person who attended 5 nightclubs and bars in the country’s capital city of Seoul

One response on Twitter was the single word “Unsustainable”. And on Facebook my post attracted comments criticising the approach taken in South Korea:

It is clear Korea is going to be looking over its shoulder for the indefinite future with virtually no immunity in the population.

I have considerable sympathy with the critics: We need a better solution than simply “crossing fingers” and nervously “looking over the shoulder”.

So what are the scenarios for unwinding the lockdown, in a way that avoids the disasters of huge new spikes of deaths and suffering, or unprecedented damage to the global economy?

To be clear, I’m not talking here about options for restructuring society after the virus has been defeated. These are important discussions, and I favour options for a Great Reconsideration. But these are discussions for another day. First, we need to review scenarios for actually defeating the virus.

Without reaching clarity about that overall plan, what we can expect ahead is, alas, worse confusion, worse recrimination, worse health statistics, worse economic statistics, and a worse fracturing of society.

Scenario 1: Accelerate a cure

One scenario is to keep most of society in a state of social distancing until such time as a vaccine has been developed and deployed.

That was the solution in, for example, the 2011 Steven Soderbergh Hollywood film “Contagion”. After a few setbacks, plucky scientists came to the rescue. And in the real world in 2020, after all, we have Deep Learning and advanced biotech to help us out. Right?

The main problem with this scenario is that it could take up to 18 months. Or even longer. Although teams around the world are racing towards potential solutions, we won’t know for some time whether their ideas will prove fruitful. Bear in mind that Covid-19 is a coronavirus, and the number of successful vaccines that have been developed for other coronaviruses is precisely zero. Technology likely will defeat the virus in due course, but no-one can be confident about the timescales.

A variant of this scenario is that other kinds of medical advance could save the day: antivirals, plasma transfers, antimalarials, and so on. Lifespan.io has a useful page tracking progress with a range of these potential therapeutics. Again, there are some hopeful signs, but, again, the outcomes remain uncertain.

So whilst there’s a strong case for society getting more fully behind a considerable number of these medical research projects, we’ll need in parallel to consider other scenarios for unwinding the lockdown. Read on.

Scenario 2: Exterminate the virus

A second scenario is that society will become better at tracking and controlling instances of the virus. Stage by stage, regions of the planet could be declared as having not just low rates of infectious people, but zero rates of infectious people.

In that case, we will be freed from the risk of contracting Covid-19, not because we have been vaccinated, but because there are no longer any infectious people with whom we can come into contact.

It would be similar to how smallpox was gradually made extinct. That virus no longer exists in the wild. One difference, however, is that the fight against smallpox was aided, since 1796, by a vaccine. The question with Covid-19 is whether it could be eradicated without the help of a vaccine. Could it be eradicated by better methods of:

  • Tracking which people are infectious
  • Isolating people who are infectious
  • Preventing travel between zones with infections and those without infections?

This process would be helped once there are reliable tests to ascertain whether someone has actually had the virus. However, things would become more complicated if the virus can recur (as has sometimes been suggested).

Is this scenario credible? Perhaps. It’s worth further investigation. But it seems a long shot, bearing in mind that it would take only a single exception to spark a new flare-up of infections. After all, it was only a single infectious hotspot that kick-started this whole global pandemic in the first place.

Scenario 3: Embrace economic reversal

If Scenario 1 (accelerate a cure) and Scenario 2 (exterminate the virus) will each take a long time – 18 months or more – what’s so bad about continuing in a state of lockdown throughout that period? That’s the core idea of Scenario 3. That scenario has the name “Embrace economic reversal” because of the implication of many people being unable to return to work. But would that be such a bad thing?

This scenario envisions a faster adoption of some elements of what has previously been spoken about as a possible longer term change arising from the pandemic – the so-called Great Reconsideration mentioned above:

  • Less commuting
  • Less pollution
  • Less time spent in offices
  • Less time spent in working for a living
  • Appreciation of life freed from a culture of conspicuous consumption
  • Valuing human flourishing instead of GDP
  • Adoption of a Universal Basic Income, and/or alternatives

If these things are good, why delay their adoption?

In short, if the lockdown (or something like it) were to continue in place for 18 months or longer, would that really be such a bad outcome?

The first problem with this scenario is that the lockdown isn’t just getting in the way of parts of life that, on reflection, we might do without. It’s also getting in the way of many of the most precious aspects of life:

  • Meeting people in close physical proximity as well as virtually
  • Choosing to live with a different group of people.

A second problem is that, whilst the true value of many aspects of current economic activity can be queried, other parts of that economy play vital support roles for human flourishing. For as long as a lockdown continues, these parts of the economy will suffer, with consequent knock-on effects for human flourishing.

Finally, although people who are reasonably well off can cope (for a while, at least) with the conditions of the lockdown, many others are already nearing the ends of their resources. For such people, the inability to leave their accommodation poses higher levels of stress.

Accordingly, whilst it is a good idea to reconsider which aspects of an economy really matter, it would be harsh advice to simply tell everyone that they need to take economic decline “on the chin”. For too many people, such a punch would be a knock-out blow.

Scenario 4: Accept higher death statistics

A different idea of taking the crisis “on the chin” is to accept, as a matter of practicality, that more people than usual will die, if there’s a reversal of the conditions of lockdown and social distancing.

In this scenario, what we should accept, isn’t (as in Scenario 3) a reversal of economic statistics, but a reversal (in the short-term) of health statistics.

In this scenario, a rise in death statistics is bad, but it’s not the end of society. Death statistics do rise from time to time. So long as they can still be reasonably controlled, this might be the least worst option to consider. We shouldn’t become unduly focused on what are individual tragedies. Accordingly, let people return to whatever kinds of interaction they desire (but with some limitations – to be discussed below). The economy can restart. And people can once again enjoy the warmth of each other’s presence – at music venues, at sports grounds, in family gatherings, and on long-haul travel holidays.

Supporters of this scenario sometimes remark that most of the people who die from Covid-19 probably would have died of other causes in a reasonably short period of time, regardless. The victims of the virus tend to be elderly, or to have underlying health conditions. Covid-19 might deprive an 80-year-old of an additional 12 months of life. From a utilitarian perspective, is that really such a disastrous outcome?

The first problem with this scenario is that we don’t know quite how bad the surge in death statistics might be. Estimates of the fatality rate among people who have been infected vary widely, because we don’t yet know, reliably, what proportion of the population have been infected without ever realising it (the short sketch after the list below illustrates the problem). It’s possible that the fatality rate will actually prove to be relatively low. However, it’s also possible that the rate might rise:

  • If the virus mutates (as it might well do) into a more virulent form
  • If the health services become overwhelmed with an influx of people needing treatment.
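Here’s a minimal sketch in Python of why the fatality rate is so hard to pin down – every figure in it is an invented placeholder, not an estimate for any real population:

    # IFR = deaths / infections. The numerator is roughly known; the unknown
    # denominator is what makes the estimates vary. Invented numbers only.
    deaths = 40_000  # hypothetical confirmed deaths
    for infections in (400_000, 2_000_000, 8_000_000):  # guesses across a range
        print(f"{infections:,} infections -> IFR = {deaths / infections:.1%}")
    # 400,000 infections -> IFR = 10.0%
    # 2,000,000 infections -> IFR = 2.0%
    # 8,000,000 infections -> IFR = 0.5%

The same death count is compatible with fatality rates differing by a factor of twenty, purely depending on the unknown denominator.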

Second, as is evident from the example of the UK’s Prime Minister, Boris Johnson, people who are far short of the age of 80, and who appear to be in general good health, can be brought to death’s door from the disease.

Third, even when people with the virus survive the infection, there may be long-term consequences for their health. They may not die straightaway, but the quality of their lives in future years could be significantly impaired.

Fourth, many people recoil from the suggestion that it’s not such a bad outcome if an 80-year-old dies sooner than expected. In their view, all lives are valuable – especially in an era when an increasing number of octogenarians can be expected to live into their 100s. Many of us feel distaste at any narrow utilitarian calculation which diminishes the value of individual lives.

For these reasons, few writers are quite so brash as to recommend Scenario 4 in the form presented here. Instead, they tend to advocate a variant of it, which I will now describe under a separate heading.

Scenario 5: A two-tier society

Could the lockdown be reconfigured so that we still gain many of its most important benefits – in particular, protection of those who are most vulnerable – whilst enabling the majority of society to return to life broadly similar to before the virus?

In this scenario, people are divided into two tiers:

  • Those for whom a Covid infection poses significant risks to their health – this is the “high risk” tier
  • Those who are more likely to shrug off a Covid infection – this is the “low risk” tier.

Note that the level of risk refers to how likely someone is to die from being infected.

The idea is that only the high risk tier would need to remain in a state of social distancing.

This idea is backed up by the thought that the division into two tiers would only need to be a temporary step. It would only be needed until one of three things happens:

  • A reliable vaccine becomes available (as in Scenario 1)
  • The virus is eradicated (as in Scenario 2)
  • The population as a whole gains “herd immunity”.

With herd immunity, enough people in the low risk tier will have passed through the phase of having the disease, and will no longer be infectious. Providing they can be assumed, in such a case, to be immune from re-infection, this will cut down the possibility of the virus spreading further. The reproduction number, R, will therefore fall well below 1.0. At that time, even people in the high risk tier can be readmitted into the full gamut of social and physical interactions.
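To make the arithmetic behind that statement explicit, here’s a minimal sketch in Python of the standard homogeneous-mixing approximation – the value of R0 is an assumption chosen purely for illustration:

    # Herd immunity in the simplest (homogeneous-mixing) model:
    # R = R0 x (fraction of the population still susceptible).
    def effective_r(r0, immune_fraction):
        return r0 * (1.0 - immune_fraction)

    r0 = 3.0  # assumed basic reproduction number, for illustration only
    threshold = 1.0 - 1.0 / r0  # herd immunity threshold: ~0.67 when R0 = 3
    for immune in (0.0, 0.5, threshold, 0.8):
        print(f"immune fraction {immune:.2f} -> R = {effective_r(r0, immune):.2f}")
    # immune fraction 0.00 -> R = 3.00
    # immune fraction 0.50 -> R = 1.50
    # immune fraction 0.67 -> R = 1.00
    # immune fraction 0.80 -> R = 0.60

On this simple model, with R0 = 3, roughly two thirds of the population would need to become immune before R falls below 1.0 – an indication of how much of the burden the low risk tier would be carrying.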

Despite any initial hesitation over the idea of a two-tier society, the scenario does have its attractions. It is sensible to consider in more detail what it would involve. I list some challenges that will need to be addressed:

  • Where there are communities of people who are all in the high risk tier – for example, in care homes, and in sheltered accommodation – special measures will still be needed, to prevent any infection that does enter such a community from spreading quickly within it (the point here is that R might be low for the population as a whole, but high in such communities)
  • Families often include people in both tiers. Measures will be needed to ensure physical distancing within such homes. For example, children who mix freely with each other at school will need to avoid hugging their grandparents
  • It will be tricky – and controversial – to determine which people belong in which tier (think, again, of the example of Boris Johnson)
  • The group of people initially viewed as being low risk may turn out to have significant subgroups that are actually at higher risk – based on factors such as workplace practice, genetics, diet, or other unsuspected underlying conditions – in which case the death statistics could surge way higher than expected
  • Are two tiers of classification sufficient? Would a better system have three (or more) tiers, with special treatments for pregnant women, and for people who are somewhat elderly (or somewhat asthmatic) rather than seriously elderly (or seriously asthmatic)?
  • The whole concept of immunity may be undermined, if someone who survives an initial infection is still vulnerable to a second infection (perhaps from a new variant of the virus)

Scenario 6: Your suggestions?

Of course, combinations of the above scenarios can, and should, be investigated.

But I’ll finish by asking if there are other dimensions to this landscape of scenarios that deserve to be included in the analysis of possibilities.

If so, we had better find out about them sooner rather than later, and discuss them openly and objectively. We need to get beyond future shock, and beyond tribal loyalty instincts.

That will reduce the chances that the outcome of the lockdown will be (as stated earlier) worse confusion, worse recrimination, worse health statistics, worse economic statistics, and a worse fracturing of society.

Image credit: Priyam Patel from Pixabay.

19 March 2020

Improving online events, for the sake of a better discussion of what truly matters

In a time of travel restrictions and operating from home, we’re all on a learning curve. There’s much for us to find out about alternatives to meeting in our usual physical locations.

London Futurists have been meeting in various physical locations for twelve years. We’ve also held a number of online gatherings over that time, using tools such as Google Hangouts on Air. But now the balance needs to shift. Given the growing Covid-19 lockdown, all London Futurists physical meetings are cancelled for the time being. While the lockdown continues, the group’s activities will be 100% online.

But what does this mean in practice?

I’d like to share some reflections from the first of this new wave of London Futurists events. That online gathering took place on Saturday, 14th March, using the meeting platform Zoom.

Hopefully my observations can help others to improve their own online events. Hopefully, too, readers of this blog will offer answers or suggestions in response to questions I raise.

Context: our event

Our event last Saturday was recorded, and the footage subsequently edited – removing, for example, parts where speakers needed to be told their microphones were muted. Here’s a copy of the resulting video:

By prior arrangement, five panellists gave short introductory talks, each lasting around 5-10 minutes, to set the stage for group discussion. Between 50 and 60 audience participants were logged into the event throughout. Some of them spoke up during the event; a larger number participated in an online text chat discussion that proceeded in parallel (there’s a lightly edited copy of the text discussion here).

As you can see from the recording, the panellists and the other participants raised lots of important points during the discussion. I’ll get back to these shortly, in another blogpost. But first, some thoughts about the tools and the process that were used for this event.

Context: Zoom

Zoom is available at a number of different price levels:

  • The “Free” level is restricted to meetings of up to 40 minutes.
  • The “Pro” level – which costs £11.99 per month – supports longer meetings (up to 24 hours), recording of events, and other elements of admin and user management. This is what I use at the moment.
  • I’ve not yet explored the more expensive versions.

Users participating in an event can turn their cameras on or off, and can share their screen (in order, for example, to present slides). Participants can also choose at any time to see a view of the video feeds from all participants (up to 25 on each page), or a “presenter view” that focuses on the person whom Zoom detects as the speaker.

Recording can take place locally, on the host’s computer (and, if enabled by the organiser, on participants’ computers). Recording can also take place on the Zoom cloud. In this case, what is recorded (by default) is the “presenter view”.

The video recording can subsequently be downloaded and edited (using any video editing software – what I use is Cyberlink PowerDirector).

Limitations and improvements

I switched some time ago from Google Hangouts-on-Air (HoA) to Zoom, when Google reorganised their related software offerings during 2019.

One feature of the HoA software that I miss in Zoom is the ability for the host to temporarily “blue box” a participant, so that their screen remains highlighted, regardless of which video feeds contain speech or other noises. Without this option, what happens – as you can see from the recording of Saturday’s event – is that the presentation view can jump to display the video from a participant that is not speaking at that moment. For five seconds or so, the display shows the participant staring blankly at the camera, generally without realising that the focus is now on them. What made Zoom shift the focus is that it detected some noise from that video feed – perhaps a cough, a laugh, a moan, a chair sliding across the floor, or some background discussion.

(Participants in the event needn’t worry, however, about their blank stares or other inadvertent activity being contained in the final video. While editing the footage, I removed all such occurrences, covering up the displays, while leaving the main audio stream in place.)

In any case, participants should mute their microphones when not speaking. That avoids unwanted noise reaching the event. However, it’s easy for people to neglect to do so. For that reason, Zoom provides the host with admin control over which mics are on or off at any time. But the host may well be distracted too… so the solution is probably for me to enrol one or two participants with admin powers for the event, and ask them to keep an eye on any mics being left unmuted at the wrong times.

Another issue is the variable quality of the microphones participants were using. If the participant turns their head while speaking – for example, to consult some notes – it can make it hard to hear what they’re saying. A better solution here is to use a head-mounted microphone.

A related problem is occasional local bandwidth issues when a participant is speaking. Some or all of what they say may be obscured, slurred, or missed altogether. The broadband in my own house is a case in point. As it happens, I have an order in the queue to switch my house to a different broadband provider. But this switch is presently being delayed.

Deciding who speaks

When a topic is thought-provoking, there are generally lots of people with things to contribute to the discussion. Evidently, they can’t all talk at once. Selecting who speaks next – and deciding how long they can speak before they might need to be interrupted – is a key part of chairing successful meetings.

One guide to who should be invited to speak next, at any stage in a meeting, is the set of comments raised in the text chat window. However, in busy meetings, important points raised can become lost in the general flow of messages. Ideally, the meeting software will support a system of voting, so that other participants can indicate their choices of which questions are the most interesting. The questions that receive the most upvotes will become the next focus of the discussion.
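As an illustration of the mechanism, here’s a minimal sketch in Python of such an upvote-ranked queue – a hypothetical data structure, not the actual behaviour of any particular meeting platform:

    # A toy upvote-ranked question queue (hypothetical, for illustration).
    from collections import Counter

    votes = Counter()  # maps question text -> number of upvotes

    def ask(question):
        votes.setdefault(question, 0)  # register the question with zero votes

    def upvote(question):
        votes[question] += 1

    def next_questions(n=3):
        """The n most upvoted questions become the next focus of discussion."""
        return votes.most_common(n)

    ask("How can we edit highlights from a long recording?")
    upvote("How can we edit highlights from a long recording?")
    print(next_questions(1))  # [('How can we edit highlights from a long recording?', 1)]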

London Futurists have used such software in the past, including Glisser and Slido, at our physical gatherings. For online events, ideally the question voting mechanism will be neatly integrated with the underlying platform.

I recently took part in one online event (organised by the Swiss futurist Gerd Leonhard) where the basic platform was Zoom and where there was a “Q&A” voting system for questions from the audience. However, I don’t see such a voting system in the Zoom interface that I use.

Added on 20th March

Apparently there’s a Webinar add-on for Zoom that provides better control of meetings, including the Q&A voting system. The additional cost of this add-on starts from £320 per annum. I’ll be looking into this further. See this feature comparison page.

Thanks to Joe Kay for drawing this to my attention!

Summarising key points

The video recording of our meeting on Saturday lasts nearly 100 minutes. To my mind, the discussion remained interesting throughout. However, inevitably, many potential viewers will hesitate before committing 100 minutes of their time to watch the entirety of that recording. Even if they watch the playback at an accelerated speed, they would probably still prefer access to some kind of edited highlights.

Creating edited highlights of recordings of London Futurists events has long been a “wish list” item for me. I can appreciate that there’s a particular skill to identifying which parts should be selected for inclusion in any such summary. I’ll welcome suggestions on how to do this!

Learning together

More than ever, what will determine our success or failure in coming to terms with the growing Covid-19 crisis is the extent to which positive collaboration and a proactive technoprogressive mindset can pull ahead of humanity’s more destructive characteristics.

That “race” was depicted on the cover of “Anticipating 2025”, the ebook of essays published by London Futurists in June 2014. Can we take advantage of our growing interconnectivity to spread, not dangerous pathogens or destructive “fake news”, but good insights about building a better future?

That was a theme that emerged time and again during our online event last Saturday.

I’ll draw this blogpost towards a close by sharing some excerpts from the opening chapter of Anticipating 2025.

Four overlapping trajectories

The time period up to 2025 can be considered as a race involving four overlapping trajectories: technology, crisis, collaboration, and mindset.

The first trajectory is the improvement of technology, with lots of very positive potential. The second, however, has lots of very negative potential: it is the growth in likelihood of societal crisis:

  • Stresses and strains in the environment, with increased climate chaos, and resulting disputes over responsibility and corrective action
  • Stresses and strains in the financial system, which share with the environment the characteristics of being highly complex, incompletely understood, weakly regulated, and subject to potential tipping points for fast-accelerating changes
  • Increasing alienation, from people who feel unable to share in the magnitude of the riches flaunted by the technologically fortunate; this factor is increased by the threats from technological unemployment and the fact that, whilst the mean household income continues to rise, the median household income is falling
  • Risks from what used to be called “weapons of mass destruction” – chemical, biological, or even nuclear weapons, along with cyber-weapons that could paralyse our electronics infrastructure; there are plenty of “angry young men” (and even angry middle-aged men) who seem ready to plunge what they see as a corrupt world into an apocalyptic judgement.

What will determine the outcome of this race, between technological improvement and growing risk of crises? It may be a third trajectory: the extent to which people around the world are able to collaborate, rather than compete. Will our tendencies to empathise, and to build a richer social whole, triumph over our equally deep tendencies to identify more closely with “people like us” and to seek the well-being of our “in-group” ahead of that of other groups?

In principle, we probably already have sufficient knowledge, spread around the world, to solve all the crises facing us, in a smooth manner that does not require any significant sacrifices. However, that knowledge is, as I said, spread – it does not cohere in just a single place. If only we knew what we knew. Nor does that knowledge hold universal assent – far from it. It is mocked and distorted and undermined by people who have vested interests in alternative explanations – with the vested interests varying among economic, political, ideological, and sometimes sheer human cussedness. In the absence of improved practical methods for collaboration, our innate tendencies to short-term expedience and point-scoring may rule the day – especially when compounded by an economic system that emphasises competition and “keeping up with the Joneses”.

Collaborative technologies such as Wikipedia and open-source software point the way to what should be possible. But they are unlikely to be sufficient, by themselves, to heal the divisions that tend to fragment human endeavours. This is where the fourth, and final, trajectory becomes increasingly important – the transformation of the philosophies and value systems that guide our actions.

If users are resolutely suspicious of technologies that would disturb key familiar aspects of “life as we know it”, engineers will face an uphill battle to secure sufficient funding to bring these technologies to the market – even if society would eventually end up significantly improved as a result.

Politicians generally take actions that reflect the views of the electorate, as expressed through public media, opinion polls, and (occasionally) in the ballot box. However, the electorate is subject to all manner of cognitive bias, prejudice, and continuing reliance on rules of thumb which made sense in previous times but which have been rendered suspect by changing circumstances. These viewpoints include:

  • Honest people should put in forty hours of work in meaningful employment each week
  • People should be rewarded for their workplace toil by being able to retire around the age of 65
  • Except for relatively peripheral matters, “natural methods” are generally the best ones
  • Attempts to redesign human nature – or otherwise to “play God” – will likely cause disaster
  • It’s a pointless delusion to think that the course of personal decay and death can be averted.

In some cases, long-entrenched viewpoints can be overturned by a demonstration that a new technology produces admirable results – as in the case of IVF (in-vitro fertilisation). But in other cases, minds need to be changed even before a full demonstration can become possible.

It’s for this reason that I see the discipline of “culture engineering” as being equally important as “technology engineering”. The ‘culture’ here refers to cultures of humans, not cells. The ‘engineering’ means developing and applying a set of skills – skills to change the set of prevailing ideas concerning the desirability of particular technological enhancements. Both technology engineering and culture engineering are deeply hard skills; both need a great deal of attention.

A core part of “culture engineering” fits under the name “marketing”. Some technologists bristle at the concept of marketing. They particularly dislike the notion that marketing can help inferior technology to triumph over superior technology. But in this context, what do “inferior” and “superior” mean? These judgements are relative to how well technology is meeting the dominant desires of people in the marketplace.

Marketing means selecting, understanding, inspiring, and meeting key needs of what can be called “influence targets” – namely, a set of “tipping point” consumers, developers, and partners. Specifically, marketing includes:

  • Forming a roadmap of deliverables, that build, step-by-step, to delivering something of great benefit to the influence targets, but which also provide, each step of the way, something with sufficient value to maintain their active interest
  • Astutely highlighting the ways in which present (and forthcoming) products will, indeed, provide value to the influence targets
  • Avoiding any actions which, despite the other good things that are happening, alienate the influence targets; and in the event any such alienation emerges, taking swift and decisive action to address it.

Culture engineering involves politics as well as marketing. Politics means building alliances that can collectively apply power to bring about changes in regulations, standards, subsidies, grants, and taxation. Choosing the right partners, and carefully managing relationships with them, can make a big difference to the effectiveness of political campaigns. To many technologists, “politics” is as dirty a word as “marketing”. But once again, mastery of the relevant skillset can make a huge difference to the adoption of technologies.

The final component of culture engineering is philosophy – sets of arguments about fundamentals and values. For example, will human flourishing happen more fully under simpler lifestyles, or by more fully embracing the radical possibilities of technology? Should people look to age-old religious traditions to guide their behaviour, or instead seek a modern, rational, scientific basis for morality? And how should the freedoms of individuals to experiment with potentially dangerous new kinds of lifestyle be balanced against the needs of society as a whole?

“Philosophy” is (you guessed it) yet another dirty word, in the minds of many technologists. To these technologists, philosophical arguments are wastes of time. Yet again, I will disagree. Unless we become good at philosophy – just as we need to become good at both politics and marketing – we will fail to rescue the prevailing culture from its unhelpful mix of hostility and apathy towards the truly remarkable potential to use technology to positively transcend human nature. And unless that change in mindset happens, the prospects are uncertain for the development and adoption of the remarkable technologies of abundance mentioned earlier.

[End of extract from Anticipating 2025.]

How well have we done?

On the one hand, the contents of the 2014 London Futurists book “Anticipating 2025” are prescient. These chapters highlight many issues and opportunities that have grown in importance in the intervening six years.

On the other hand, I was brought down to earth by an email reply I received last week to the latest London Futurists newsletter:

I’m wondering where the Futurism is in this reaction.

Maybe the group is more aptly Reactionism.

I wanted to splutter out an answer: the group (London Futurists) has done a great deal of forward thinking over the years. We have looked at numerous trends and systems, and considered possible scenarios arising from extrapolations and overlaps. We have worked hard to clarify, for these scenarios, the extent to which they are credible and desirable, and ways in which the outcomes can be influenced.

But on reflection, a more sober thought emerged. Yes, we futurists have been trying to alert the rest of society to our collective lack of preparedness for major risks and major opportunities ahead. We have discussed the insufficient resilience of modern social systems – their fragility and lack of sustainability.

But have our messages been heard?

The answer is: not really. That’s why Covid-19 is causing such a dislocation.

It’s tempting to complain that the population as a whole should have been listening to futurists. However, we can also ask, how should we futurists change the way we talk about our insights, so that people pay us more attention?

After all, there are many worse crises potentially just around the corner. Covid-19 is by no means the most dangerous new pathogen that could strike humanity. And there are many other types of risk to consider, including malware spreading out of control, the destruction of our electronics infrastructure by something similar to the 1859 Carrington Event, an acceleration of chaotic changes in weather and climate, and devastating wars triggered by weapons systems overseen by AI software whose inner logic no-one understands.

It’s not just a new mindset that humanity needs. It’s a better way to have discussions about fundamentals – discussions about what truly matters.

Footnote: with thanks

Special thanks are due to the people who boldly stepped forwards at short notice as panellists for last Saturday’s event:

and to everyone else who contributed to that discussion. I’m sorry there was no time to give sufficient attention to many of the key points raised. As I said at the end of the recording, this is a kind of cliffhanger.

24 June 2019

Superintelligence, Rationality, and the Race to Save the World

Filed under: AGI, books, irrationality, risks — David Wood @ 11:45 pm

What the world needs, urgently, is more rationality. It needs a greater number of people to be aware of the mistakes that are, too often, made due to flaws and biases in our thinking processes. It needs a community that can highlight the most important principles of rationality – a community that can help more and more people to learn, step-by-step, better methods of applied rationality. And, critically, the world needs a greater appreciation of a set of existential risks that threaten grave consequences for the future of humanity – risks that include misconfigured artificial superintelligence.

These statements express views held by a community known sometimes as “Less Wrong” (the name of the website on which many of the key ideas were developed), and sometimes, more simply, as “the rationalists”. That last term is frequently used in a new book by science writer Tom Chivers – a book that provides an accessible summary of the Less Wrong community. As well as being accessible, the summary is friendly, fair-minded, and (occasionally) critical.

The subtitle of Chivers’ book is straightforward enough: “Superintelligence, Rationality, and the Race to Save the World”. The race is between, on the one hand, the rapid development of technology with additional capabilities, and on the other hand, the development of suitable safety frameworks to ensure that this technology allows humanity to flourish rather than destroying us.

The title of the book takes a bit more explaining: “The AI Does Not Hate You”.

This phrase is a reference to a statement by one of the leading thinkers of the community in question, Eliezer Yudkowsky:

The AI does not hate you, nor does it love you, but you are made of atoms which it can use for something else.

In other words, the existential risk posed by artificial superintelligence isn’t that it will somehow acquire the human characteristic of hatred, but that it will end up following a trajectory which is misaligned with the best interests of humanity – a trajectory that sees humans as a kind of irrelevance.

To be clear, I share this worry. I’ve given my reasons many times on this personal blog, and I wrote up my own analysis at some length in chapter 9, “Towards abundant intelligence”, in my most recent book, “Sustainable superabundance”. My ideas have been shaped and improved by many things I’ve learned over the years from members of the Less Wrong community. Indeed, my presentations about the future of AI generally include several quotations from Yudkowsky.

However, these ideas often cause a kind of… embarrassment. Various writers on AI have poured scorn on them. Artificial superintelligence won’t arrive any time soon, they assert. Or if it does, it will be easy to keep under human control. Or if it transcends human control, there’s no reason to be alarmed, because its intelligence will automatically ensure that it behaves impeccably towards humans. And so on.

These critics often have a second string to their analysis. Not only do they argue for being relaxed about the idea of existential risks from superintelligence. They also argue that people who do worry about these risks – people like Yudkowsky, or Oxford University’s Nick Bostrom, or Stephen Hawking, or Elon Musk – are somehow personally defective. (“They’re egotistical”, runs one complaint. “There’s no need to pay any attention to these people”, the critics continue, “since they’re just philosophers, or mathematicians, or physicists, or business people, etc, rather than being a real AI expert”.)

At an extreme, this set of criticisms expresses itself in the idea that the Less Wrong community is a “cult”. A related objection is that a focus on humanity’s potential extinction is a distraction from much more pressing real-world issues of the present-day and near-term future – issues such as AI algorithms being biased, or AI algorithms stirring up dangerous social divisions, or increasing economic inequality, or disrupting employment, or making weapons more dangerous.

It’s in this context that the book by Chivers arrives. It tackles head-on the controversies around the Less Wrong community – controversies over its ideas, methods, aims, and the lifestyles and personalities of many of its leading figures. It does this carefully and (for the most part) engagingly.

As the book proceeds, Chivers gives voice to the various conflicting ideas he finds in himself regarding the core ideas of the Less Wrong community. My own judgement is that his assessments are fair. He makes it clear that, despite its “weird” angles, the community deserves more attention – much more attention – for its core ideas, and for the methods of rationality that it advocates.

It’s a cerebral book, but with considerable wit. And there are some touching stories in it (especially – spoiler alert – towards the end).

The book provides the very useful service of providing short introductions to many topics on which the Less Wrong community has written voluminously. On many occasions over the years, I’ve clicked into Less Wrong material, found it to be interesting, but also… long. Oh-so-long. And I got distracted long before I reached the punchline. In contrast, the book by Chivers is divided up into digestible short chunks, with a strong sense of momentum throughout.

As for the content of the book, probably about 50% was material that I already knew well, and which gave me no surprise. About 30% was material with which I was less familiar, and which filled in gaps in my previous understanding. That leaves perhaps 20% of the content which was pretty new to me.

I can’t say that the book has made me change my mind about any topic. However, it has made me want to find out more about the courses offered by CFAR (the Center For Applied Rationality), which features in various episodes Chivers recounts. And I’m already thinking of ways in which I’ll update my various slidesets, on account of the ideas covered in the book.

In summary, I would recommend this book to anyone who has heard about Less Wrong, Eliezer Yudkowsky, Nick Bostrom, or others in the extended rationalist community, and who is unsure what to think about the ideas they champion. This book will give you plenty of help in deciding how seriously you should take these ideas. You’ll find good reasons to counter the voices of those critics who seek (for whatever reasons) to belittle the Less Wrong community. And if you end up more worried than before about the existential risks posed by artificial superintelligence, that’s no bad thing!

PS1: For a 10 minute audio interview in which Tom Chivers talks about his book, visit this Monocle page.

PS2: If you want to see what the Less Wrong community members think about this book, visit this thread on the Less Wrong site.

7 December 2017

The super-opportunities and super-risks of super-AI

Filed under: AGI, Events, risks, Uncategorized — David Wood @ 7:29 pm

2017 has seen more discussion of AI than any preceding year.

There have even been a number of meetings – 15, to be precise – of the APPG AI, an “All-Party Parliamentary Group on Artificial Intelligence”, in the UK Houses of Parliament.

According to its website, the APPG AI “was set up in January 2017 with the aim to explore the impact and implications of Artificial Intelligence”.

In the intervening 11 months, the group has held 7 evidence meetings, 4 advisory group meetings, 2 dinners, and 2 receptions. 45 different MPs, along with 7 members of the House of Lords and 5 parliamentary researchers, have been engaged in APPG AI discussions at various times.

Yesterday evening, at a reception in Parliament’s Cholmondeley Room & Terrace, the APPG AI issued a 12 page report with recommendations in six different policy areas:

  1. Data
  2. Infrastructure
  3. Skills
  4. Innovation & entrepreneurship
  5. Trade
  6. Accountability

The headline “key recommendation” is as follows:

The APPG AI recommends the appointment of a Minister for AI in the Cabinet Office

The Minister would have a number of different responsibilities:

  1. To bring forward the roadmap which will turn AI from a Grand Challenge to a tool for untapping UK’s economic and social potential across the country.
  2. To lead the steering and coordination of: a new Government Office for AI, a new industry-led AI Council, a new Centre for Data Ethics and Innovation, a new GovTech Catalyst, a new Future Sectors Team, and a new Tech Nation (an expansion of Tech City UK).
  3. To oversee and champion the implementation and deployment of AI across government and the UK.
  4. To keep public faith high in these emerging technologies.
  5. To ensure UK’s global competitiveness as a leader in developing AI technologies and capitalising on their benefits.

Overall I welcome this report. It’s a definite step in the right direction. Via a programme of further evidence meetings and workshops planned throughout 2018, I expect real progress can be made.

Nevertheless, it’s my strong belief that most of the public discussion on AI – including the discussions at the APPG AI – fails to appreciate the magnitude of the potential changes that lie ahead. There’s insufficient awareness of:

  • The scale of the opportunities that AI is likely to bring – opportunities that might better be called “super-opportunities”
  • The scale of the risks that AI is likely to bring – “super-risks”
  • The speed at which it is possible (though by no means guaranteed) that AI could transform itself via AGI (Artificial General Intelligence) to ASI (Artificial Super Intelligence).

These are topics that I cover in some of my own presentations and workshops. The events organisation Funzing have asked me to run a number of seminars with the title “Assessing the risks from superintelligent AI: Elon Musk vs. Mark Zuckerberg…”

The reference to Elon Musk and Mark Zuckerberg reflects the fact that these two titans of the IT industry have spoken publicly about the advent of superintelligence, taking opposing views on the balance of opportunity vs. risk.

In my seminar, I take the time to explain their differing points of view. Other thinkers on the subject of AI that I cover include Alan Turing, IJ Good, Ray Kurzweil, Andrew Ng, Eliezer Yudkowsky, Stuart Russell, Nick Bostrom, Isaac Asimov, and Jaan Tallinn. The talk is structured into six sections:

  1. Introducing the contrasting ideas of Elon Musk and Mark Zuckerberg
  2. A deeper dive into the concepts of “superintelligence” and “singularity”
  3. From today’s AI to superintelligence
  4. Five ways that powerful AI could go wrong
  5. Another look at accelerating timescales
  6. Possible responses and next steps

At the time of writing, I’ve delivered this Funzing seminar twice. Here’s a sampling of the online reviews:

Really enjoyed the talk, David is a good presenter and the presentation was very well documented and entertaining.

Brilliant eye opening talk which I feel very effectively conveyed the gravity of these important issues. Felt completely engaged throughout and would highly recommend. David was an excellent speaker.

Very informative and versatile content. Also easy to follow if you didn’t know much about AI yet, and still very insightful. Excellent Q&A. And the PowerPoint presentation was of great quality and attention was spent on detail putting together visuals and explanations. I’d be interested in seeing this speaker do more of these and have the opportunity to go even more in depth on specific aspects of AI (e.g., specific impact on economy, health care, wellbeing, job market etc). 5 stars 🙂

Best Funzing talk I have been to so far. The lecture was very insightful. I was constantly tuned in.

Brilliant weighing up of the dangers and opportunities of AI – I’m buzzing.

If you’d like to attend one of these seminars, three more dates are in my Funzing diary:

Click on the links for more details, and to book a ticket while they are still available 🙂

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” track down and shoot the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000 word essay Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011, a new Mexican group called the Individualists Tending toward the Wild was founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. That same year, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposing vaccinations or blood transfusions, or by denying education to women).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic licence may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential to stir up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack their own expertise in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI, which still lies far in the future. In this view, as stated by Demis Hassabis, co-founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter, and no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all three of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people should read that book. It manages to bring a great many serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. The same is true of Stephen Hawking and Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider as just one example Stuart Russell, who is a computer-science professor at the University of California, Berkeley and co-author of the 1152-page best-selling text-book “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry, such as Apple co-founder Steve Wozniak and Microsoft co-founder Bill Gates.

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the criticism about timescales. The first is to point out that Demis Hassabis himself sees no reason for complacency, even if AGI turns out to require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

  • N <= 5: No way
  • 5 < N <= 10: Small possibility
  • 10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates by which they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
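As one rough way to read those numbers, here’s a back-of-envelope sketch (my own; the linear interpolation between the survey’s percentile dates is my assumption, not the survey’s):

```python
# Back-of-envelope sketch: treat the survey's three percentile dates as points
# on a cumulative probability curve, and interpolate linearly between them.
# The linear-interpolation assumption is mine, not the survey's.

SURVEY_POINTS = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

def prob_agi_by(year):
    """Interpolated probability that human-level AGI is achieved by `year`."""
    if year <= SURVEY_POINTS[0][0]:
        return SURVEY_POINTS[0][1]   # no survey data earlier than the 10% date
    for (y0, p0), (y1, p1) in zip(SURVEY_POINTS, SURVEY_POINTS[1:]):
        if year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)
    return SURVEY_POINTS[-1][1]      # survey data runs out beyond the 90% date

# Writing in 2015, "within the next ten years" means roughly by 2025:
print(round(prob_agi_by(2025), 2))   # about 0.17 on this assumption
```

On that (strong) assumption, the implied chance of human-level AGI within ten years comes out at roughly one in six.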

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” causing society to redouble its motivation to support AGI research more fully (especially when AGI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside those we explicitly anticipated; its response in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do (a toy illustration follows this list)
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.
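As a toy illustration of the gap between “what we asked” and “what we wished” (my own minimal sketch, using an entirely made-up cleaning-robot scenario and invented numbers):

```python
# Toy illustration (my own construction): an optimiser does "what we asked",
# namely drive a dust-sensor reading to zero, rather than what we wished,
# namely a clean floor. The scenario, action names, and numbers are made up.

ACTIONS = {
    "clean_floor":  {"dust_removed": 0.8, "sensor_reading": 0.2},
    "cover_sensor": {"dust_removed": 0.0, "sensor_reading": 0.0},
}

def objective(outcome):
    # The goal we literally specified: minimise the sensor reading.
    return outcome["sensor_reading"]

best = min(ACTIONS, key=lambda name: objective(ACTIONS[name]))
print(best)           # cover_sensor: the stated objective is fully satisfied
print(ACTIONS[best])  # ...yet dust_removed is 0.0, not what we wished for
```

No malice, emotion, or consciousness is involved: the optimiser simply satisfies the objective exactly as written.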

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.
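As a quick check on those distances, here is some rounded arithmetic of my own (the latitude and longitude inputs are approximations I’ve chosen, and the “Columbus” inputs are purely illustrative of how his errors could compound, not his actual workings):

```python
import math

# Distance travelled westward along a parallel of latitude: the parallel's
# circumference shrinks by cos(latitude), and we traverse a fraction of the
# full 360 degrees. All inputs are rounded approximations of my own choosing.

def westward_distance_km(circumference_km, latitude_deg, longitude_gap_deg):
    return (circumference_km * math.cos(math.radians(latitude_deg))
            * longitude_gap_deg / 360)

# Known since the ancient Greeks: circumference ~40,000 km. The Canaries sit
# near latitude 28 N, and the westward longitude gap to Japan is ~200 degrees.
print(round(westward_distance_km(40_000, 28, 200)))  # ~19,600 km, close to
                                                     # the ~19,000 km above

# Shrink both the globe and the ocean gap, and a figure near Columbus's
# hopeful ~4,440 km appears:
print(round(westward_distance_km(30_000, 28, 60)))   # ~4,400 km
```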

Most of the countries where Columbus pitched the idea of his westward journey turned him down – believing instead the larger, correct figure for the circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that Cambridge University’s CSER (Centre for the Study of Existential Risk) has four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!

29 August 2014

Can technology bring us peace?

The summer months of 2014 have brought us a sickening surfeit of awful news. Our newsfeeds have been full of conflict, casualties, and brutalities in Iraq, Syria, Ukraine, Gaza, and so on. For example, just a couple of days ago, my browser screamed at me, Let’s be clear about this: Russia is invading Ukraine right now. And my TV has just informed me that the UK’s terror threat level is being raised from “substantial” to “severe”:

The announcement comes amid increasing concern about hundreds of UK nationals who are believed by security services to have travelled to fight in Iraq and Syria.

These real-world conflicts have been giving rise to online mirror conflicts among many of the people that I tend to respect. These online controversies play out as heated disputes about the rights and wrongs of various participants in the real-world battles. Arguments ding-dong ferociously: What is the real reason that the MH17 plane was shot down? How disproportionate is the response by Israel to provocations from Hamas? How much is Islamic belief to blame for the barbarism of the self-proclaimed Islamic State? Or is the US to blame, on account of its ill-advised meddling in far-off lands? And how fair is it to compare Putin to Hitler?

But at a recent informal pub gathering of London Futurists, one of the long-time participants in these meetups, Andrius Kasparavicius, asked a hard question. Shouldn’t those of us who believe in the transformational potential of new technology – those of us who dare to call ourselves technoprogressives, transhumanists, or social futurists – have a better answer to these conflict flashpoints? Rather than falling back into twentieth-century diatribes against familiar bête noire villains, isn’t it worth striving to find a 21st-century viewpoint that transcends such rivalries? We talk a lot about innovation: can’t we be innovative about solving these global flashpoints?

A similar thought gnawed at me a few weeks later, during a family visit to Inverness. A local production of West Side Story was playing at the Eden Court theatre. Bernstein’s music was exhilarating. Sondheim’s lyrics were witty and provocative. The cast shimmied and slunk around the stage. From our vantage point in the second row of seats, we could see all the emotions flit across the faces of the performers. The sudden tragic ending hit hard. And I thought to myself: These two gangs, the Jets and the Sharks, were locked into a foolish, needless struggle. They lacked an adult, future perspective. Isn’t it the same with the tragic conflicts that occupy our newsfeeds? These conflicts have their own Jets and Sharks, and, yes, a lack of an adult, future perspective. Can’t they see the better future which is within our collective grasp, if only they can cast aside their tribal perspectives?

That thought was soon trumped by another: the analogy is unfair. Some battles are worth fighting. For example, if we take no action against Islamic State, we shouldn’t be surprised if there’s an even worse spate of summary beheadings, forced conversions, women being driven into servitude roles in societies all over the Middle East, and terrorist strikes throughout the wider world.

But still… isn’t it worth considering possible technological, technoprogressive, or transhumanist approaches to peace?

  • After all, we say that technology changes everything. History is the story of the continual invention and enhancement of tools, machines, and devices of numerous sorts, which transform human experience in all fields of life.
  • Indeed, human progress has taken place by the discovery and mastery of engineering solutions – such as fire, the wheel, irrigation, sailing ships, writing, printing, the steam engine, electricity, domestic kitchen appliances, railways and automobiles, computers and the Internet, plastics, vaccinations, anaesthetic, contraception, and better hygiene.
  • What’s more, the rate of technological change is increasing, as larger numbers of engineers, scientists, designers, and entrepreneurs from around the globe participate in a rich online network exchange of ideas and information. Forthcoming technological improvements can propel human experience onto an even higher plane – with our minds and bodies both being dramatically enhanced.
  • So shouldn’t the further development of technology give us more options to achieve lasting resolution of global flashpoints?

Therefore I have arranged an online hangout discussion meeting: Global flashpoints: what do transhumanists have to say? This will be taking place at 7pm UK time this Sunday, 31st August. The corresponding YouTube video page (for people who prefer not to log into Google+ in order to view the Hangout that way) is here. I’ll be joined in this discussion by a number of thinkers from different transhumanist perspectives, based around Europe.

I’ve put a plaintive note on the meeting invite:

In our discussion, we’ll try to transcend the barbs and scape-goating that fills so much of existing online discussion about Iraq/Syria/Ukraine/Gaza/etc.

I honestly don’t know how the discussion is going to unfold. But here are some possible lines of argument:

  1. Consider the flashpoint in Ferguson, Missouri, after the shooting dead of teenager Michael Brown. That particular conflict arose, in part, because of disputes over what actually happened at the time of the shooting. But if the police in Ferguson had all been wearing and operating personal surveillance cameras, then perhaps a lot of the heat would have gone out of the issue. That would be one example of taking advantage of recent improvements in technology in order to defuse a potential conflict hotspot.
  2. Much conflict is driven by people feeling a sense of profound alienation from mainstream culture. Disaffected youths from all over Europe are leaving their families behind to travel to support fundamentalist Islamic causes in the middle east. They need a much better vision of the future, to reduce the chance that they will fall prey to these particular mind viruses. Could social futurism, technoprogressivism, and transhumanism offer that alternative vision?
  3. Rather than technology helping to create peace, there’s a major risk it will help to worsen conflicts. Powerful arsenals in the hands of malcontents are likely to have a more horrific impact nowadays – and an even worse one in the near future – than corresponding weaponry had in the past. Think also of the propaganda value of Islamic State execution videos distributed via YouTube – that kind of effect was unthinkable just a decade ago.

Of these three lines of discussion, I am most persuaded by the third one. The implications are as follows. The message that we social futurists and transhumanists should be highlighting, in response to these outrages, is, sadly, “You ain’t seen nothing yet”. There are existential risks ahead that will demand very serious collective action to solve. In that case, it’s even more imperative that the global community gets its act together, and finds a more effective way to resolve the conflicts in our midst.

At the same time, we do need to emphasise the positive vision of where the world could reach in, say, just a few decades: a world with enormous abundance, fuelled by new technologies (nanotech, solar energy, rejuvenation biotech, ubiquitous smart robots) – a world that will transcend the aspirations of all existing ideologies. If we can make the path to this future more credible, there’s good reason to hope that people all over the world will set aside their previous war-like tendencies, tribal loyalties, and dark age mythologies.

 

30 January 2014

A brilliant example of communication about science and humanity


Do you enjoy great detective puzzles? Do you like noticing small anomalies, and turning them into clues to an unexpected explanation? Do you like watching world-class scientists at work, piecing together insights to create new theories, and coping with disappointments when their theories appear to be disproved?

In the book “Our mathematical universe”, the mysteries being addressed are some of the very biggest imaginable:

  • What is everything made out of?
  • Where does the universe come from? For example, what made the Big Bang go “bang”?
  • What gives science its authority to speak with so much confidence about matters such as the age and size of the universe?
  • Is it true that the constants of nature appear remarkably “fine-tuned” so as to allow the emergence of life – in a way suggesting a miracle?
  • What does modern physics (including quantum mechanics) have to teach us about mind and consciousness?
  • What are the chances of other intelligent life existing in our galaxy (or even elsewhere in our universe)?
  • What lies in the future of the human race?

The author, Max Tegmark, is a Swedish-born professor of physics at MIT. He’s made a host of significant contributions to the development of cosmology – some of which you can read about in the book. But in it he also shows himself, in my view, to be a first-class philosopher and a first-class communicator.

Indeed, this may be the best book on the philosophy of physics that I have ever read. It also has important implications for the future of humanity.

There are some very big ideas in the book. It gives reasons for believing that our universe exists alongside no fewer than four different types of parallel universes. The “level 4 multiverse” is probably one of the grandest conceptions in all of philosophy. (What’s more, I’m inclined to think it’s the correct description of reality. At its heart, despite its grandness, it’s actually a very simple theory, which is a big plus in its favour.)

Much of the time, the writing in the book is accessible to people with pre-university level knowledge of science. On occasion, the going gets harder, but readers should be able to skip over these sections. I recommend reading the book all the way through, since the last chapter contains many profound ideas.

I think you’ll like this book if:

  • You have a fondness for pure mathematics
  • You recognise that the scientific explanation of phenomena can be every bit as uplifting as pre-scientific supernatural explanations
  • You are ready to marvel at the ingenuity of scientific investigators going all the way back to the ancient Greeks (including those who first measured the distance from the Earth to the Sun)
  • You are critical of “quantum woo woo” hand-waving that says that quantum mechanics proves that consciousness is somehow a non-local agent (and that minds will survive bodily death)
  • You want to find out more about Hugh Everett, the physicist who first proposed that “the quantum wave function never collapses”
  • You have a hunch that there’s a good answer to the question “why is there something rather than nothing?”
  • You want to see scientists in action, when they are confronted by evidence that their favoured theories are disproved by experiment
  • You’re ready to laugh at the misadventures that a modern cosmologist experiences (including eminent professors falling asleep in the audience of his lectures)
  • You’re interested in the considered viewpoint of a leading scientist about matters of human existential risk, including nuclear wars and the technological singularity.

Even more than all these good reasons, I highlight this book as an example of what the world badly needs: clear, engaging advocacy of the methods of science and reason, as opposed to mysticism and obscurantism.

Footnote: For my own views about the meaning of quantum mechanics, see my earlier blogpost “Schrödinger’s Rabbits”.

26 September 2013

Risk blindness and the forthcoming energy crash

Filed under: books, carbon, chaos, climate change, Economics, irrationality, politics, risks, solar energy — David Wood @ 11:28 am

‘Logical’ is the last thing human thinking, individual and collective, is. Too compelling an argument can even drive people with a particularly well-insulated belief system deeper into denial.

The Energy of Nations: Risk Blindness and the Road to Renaissance, by Jeremy Leggett, is full of vividly quotable aphorisms – such as the one I’ve just cited. I see Jeremy as one of the world’s leading thinkers on solar energy, oil depletion, climate change, and the dysfunctional ways in which investment all-too-frequently works. The Observer has described him as “Britain’s most respected green energy boss”. A glance at his CV shows an impressive range of accomplishments:

Jeremy Leggett is founder and chairman of Solarcentury, the UK’s fastest growing renewable energy company since 2000, and founder and chairman of SolarAid, an African solar lighting charity set up with 5% of Solarcentury’s annual profits and itself parent to a social venture, SunnyMoney, that is the top-selling retailer of solar lights in Africa.

Jeremy has been a CNN Principal Voice, and an Entrepreneur of the Year at the New Energy Awards. He was the first Hillary Laureate for International Leadership on Climate Change, chairs the financial-sector think-tank Carbon Tracker and is a consultant on systemic risk to large corporations. He writes and blogs on occasion for the Guardian and the Financial Times, lectures on short courses in business and society at the universities of Cambridge and St Gallen, and is an Associate Fellow at Oxford University’s Environmental Change Institute.

On his own website, The triple crunch log, Jeremy has the following to say about himself:

This log covers the energy-, climate-, and financial crises, and issues pertinent to society’s response to this “triple crunch”…

Let me explain why I am worried about oil depletion, climate change, and dysfunctional investment.

I researched earth history for 14 years, and so know a bit about what makes up the climate system. I researched oil source rocks for several of those years, funded by BP and Shell among others, and I explored for oil and gas in the Middle East and Asia, so I have a background in the issues relevant to peak oil. And more recently I have been a clean-energy entrepreneur and investor for more than a decade, as founder of a solar energy company and founding director of a Swiss venture capital fund, so I have seen how the capital markets operate close to. That experience is the basis for my concerns…

Many of the critics who comment on my blogs urge readers to discount everything I say because I am trying to sell solar energy, and so therefore must be in it for the money, hyping concerns about climate change and peak oil in the cause of self enrichment. (As you would). They have it completely the wrong way round.

I left a lucrative career consulting for the oil industry, and teaching its technicians, because I was concerned about global warming and wanted to act on that concern. I joined Greenpeace (1989), on a fraction of my former income, to campaign for clean energy. I left Greenpeace (1997) to set up a non-profit organisation campaigning for clean energy. I turned it into a for-profit company (1999) because I came to the view that was the best possible way I could campaign for clean energy – by creating a commercial success that could show the way. The company I set up gives 5% of its operating profit to a charity that also campaigns for clean energy, SolarAid. All that said, I hope Solarcentury makes a lot of money. It won’t have succeeded in its mission if it doesn’t. I’m hoping fewer people will still want to discount my arguments, knowing the history.

Today marks the UK availability of his book, The Energy of Nations. Heeding its own advice, quoted above, that too compelling an argument can drive people deeper into denial, the book takes a less direct course. A large part of the book reads more like a novel than a textbook, with numerous fascinating episodes retold from Jeremy’s diaries.


The cast of characters that have walk-on parts in these episodes include prime ministers, oil industry titans, leading bankers, journalists, civil servants, analysts, and many others. Heroes and villains appear and re-appear, sometimes grown wiser with the passage of years, but sometimes remaining as recalcitrant, sinister (yes), and slippery (yes again) as ever.

A core theme of the book is risk blindness. Powerful vested interests in society have their own reasons to persuade public opinion that there’s nothing to worry about – that everything is under control. Resources at the disposal of these interests (“the incumbency”) inflict a perverse blindness on society, as regards the risks of the status quo. Speaking against the motion at a debate, This House Believes Peak Oil Is No Longer a Concern, in London’s Queen Elizabeth II Congress Centre in March 2009, in the aftermath of the global financial crisis brought on by hugely unwarranted over-confidence among bankers, Jeremy left a trenchant analogy hanging in the mind of the audience:

I explain that those of us who worry about peak oil fear that the oil industry has lapsed into a culture of over-exuberance about both the remaining oil reserves and prospects of resources yet to be turned into reserves, and about the industry’s ability to deliver capacity to the market even if enough resources exist.

Our main argument is that new capacity flows coming onstream from discoveries made by the oil industry over the past decade don’t compensate for depletion. Hence projections of demand cannot be met a few years hence. This problem will be compounded by other issues, including the accelerating depletion of the many old oilfields that prop up much of global oil production today, the probable exaggeration by OPEC countries of their reserves, and the failure of the ‘price-mechanism’ assumption that higher prices will lead to increased exploration and expanding discoveries…

In conclusion, this debate is all about the risk of a mighty global industry having its asset assessment systemically overstated, due to an endemic culture of over-optimism, with potentially ruinous economic implications.

I pause to let that sentence hang in the air for a second or two.

Now that couldn’t possibly happen, could it?

This none too subtle allusion to the disaster playing out in the financial sector elicits a polite laugh from the audience…

Nowadays, people frequently say that the onset of shale oil and gas should dissolve fears about impending reductions in the availability of oil. Jeremy sees this view as profoundly misguided. Shale is likely to fall far, far short of the expectations that have been heaped on it:

For many, the explosive growth of shale gas production in the USA – now extending into oil from shale, or ‘tight oil’ as it is properly known – is a revolution, a game-changer, and it even heralds a ‘new era of fossil fuels’. For a minority, it shows all the signs of being the next bubble in the markets.

In the incumbency’s widely held view, the US shale gas phenomenon can be exported, opening the way to cheap gas in multiple countries. For others, even if there is no bubble, the phenomenon is not particularly exportable, for a range of environmental, economic and political reasons.

This risk too entails shock potential. Take a country like the UK. Its Treasury wishes actively to suppress renewables, so as to ensure that investors won’t be deterred from bankrolling the conversion of the UK into a ‘gas hub’. Picture the scene if most of the national energy eggs are put in that basket, infrastructure is capitalised, and then supplies of cheap gas fall far short of requirement, or even fail to materialise.

As the book makes clear, our collective risk blindness prevents society as a whole from reaching a candid appraisal of no fewer than five major risks facing us over the next few years: oil shock, climate shock, a further crash in the global financial system, the bursting of a carbon bubble in the capital markets, and the crash of the shale gas boom. The emphasis on human irrationality gels with a lot of my own prior reading – as I’ve covered e.g. in Our own entrenched enemies of reason, Animal spirits – a richer understanding of economics, and Influencer – the power to change anything, as well as in my most recent posting When faith gets in the way of progress.

The book concludes with a prediction that society is very likely to encounter, by as early as 2015, either a dramatic oil shock (the widespread realisation that the era of cheap oil is behind us, and that the oil industry has misled us as badly as did the sellers of financial hocus pocus), or a renewed financial crisis, which would then precipitate (but perhaps more slowly) the same oil shock. To that extent, the book is deeply pessimistic.

But there is plenty of optimism in the book too. The author believes – as do I – that provided suitable preparatory steps are taken (as soon as possible), society ought to be able to rebound from the forthcoming crash. He spends time explaining “five premises for the Road to Renaissance”:

  1. The readiness of clean energy for explosive growth
  2. The intrinsic pro-social attributes of clean energy
  3. The increasing evidence of people power in the world
  4. The pro-social tendencies in the human mind
  5. The power of context that leaders will be operating in after the oil crash.

But alongside his optimism, he issues a sharp warning:

I do not pretend that things won’t get much worse before they get better. There will be rioting. There will be food kitchens. There will be blood. There already have been, after the financial crash of 2008. But the next time round will be much worse. In the chaos, we could lose our way like the Maya did.

In summary, it’s a profoundly important book. I found it to be a real pleasure to read, even though the topic is nerve-racking. I burst out laughing in a number of places, and then reflected that it was nervous laughter.

The book is full of material that will probably make you want to underline it or tweet an extract online. The momentum builds up to a dramatic conclusion. Anyone concerned about the future should make time to read it.

Not everyone will agree with everything it contains, but it is clearly an honest and heartfelt contribution to vital debates. The book has already been receiving some terrific reviews from an interesting variety of people. You can see those, a summary, Chapter One, and links for buying the book here.

Finally, it’s a book that is designed to provoke discussion. I’m delighted that the author has agreed to speak at a London Futurists event on Saturday 5th October. Please click here for more details and to RSVP. This is a first class topic addressed by a first class speaker, which deserves a first class audience to match!

21 March 2013

The burning need for better supra-national governance

International organisations have a bad reputation these days. The United Nations is widely seen as ineffective. There’s a retreat towards “localism”: within Britain, the EU is unpopular; within Scotland, Britain is unpopular. And any talk of “giving up sovereignty” is deeply unpopular.

However, lack of effective international organisations and supra-national governance is arguably the root cause of many of the biggest crises facing humanity in the early 21st century.

That was the thesis which Ian Goldin, Oxford University Professor of Globalisation and Development, very ably shared yesterday evening in the Hong Kong Theatre in the London School of Economics. He was quietly spoken, but his points hit home strongly. I was persuaded.

The lecture was entitled Divided Nations: Why global governance is failing and what we can do about it. It coincided with the launch of a book with the same name. For more details of the book, see this blogpost on the website of the Oxford Martin School, where Ian Goldin holds the role of Director.

It’s my perception that many technology enthusiasts, futurists, and singularitarians have a blind spot when it comes to the topic of the dysfunction of current international organisations. They tend to assume that technological improvements will automatically resolve the crises and risks facing society. Governments and regulators should ideally leave things well alone – so the plea goes.

My own view is that smarter coordination and regulation is definitely needed – even though it will be hard to set that up. Professor Goldin’s lecture amply reinforced that view.

On the train home from the lecture, I downloaded the book onto my Kindle. I recommend anyone who is serious about the future of humanity to read it. Drawing upon the assembled insights and wisdom of the remarkable set of scholars at the Oxford Martin School, in addition to his own extensive experience in the international scene, Professor Goldin has crystallised state-of-the-art knowledge regarding the pressing urgency, and options, for better supra-national governance.

In the remainder of this blogpost, I share some of the state-of-consciousness notes that I typed while listening to the lecture. Hopefully this will give a flavour of the hugely important topics covered. I apologise in advance for any errors introduced in transcription. Please see the book itself for an authoritative voice. See also the live tweet stream for the meeting, with the hash-tag #LSEGoldin.

What keeps Oxford Martin scholars awake at night

The fear that no one is listening. The international governance system is in total gridlock. There are failures on several levels:

  • Failure of governments to lift themselves to a higher level, instead of being pre-occupied by local, parochial interests
  • Failure of electorates to demand more from their governments
  • Failure of governments to give clearer direction to the international institutions.

Progress with international connectivity

80 countries became democratic in the 1990s. Only one country in the world today remains disconnected – North Korea.

Over the last few decades, the total global population has increased, but the numbers in absolute poverty have decreased. This has never happened before in history.

So there are many good aspects to the increase in the economy and inter-connectivity.

However, economists failed to think sufficiently far ahead.

What economists should have thought about: the global commons

What was rational for the individuals and for national governments was not rational for the whole world.

Similar problems exist in several other fields: antibiotic resistance, global warming, the markets. He’ll get to these shortly.

The tragedy of the commons is that, when everyone does what is rational for them, everyone nevertheless ends up suffering. The common resource is not managed.

The pursuit of profits is a good thing – it has worked much better than central planning. But the result is irrationality in aggregate.

The market alone cannot provide a response to resource allocation. Individual governments cannot provide a solution either. A globally coordinated approach is needed.

Example of several countries drawing water from the rivers that feed the Aral Sea – which is now arid.

That’s what happens when nations do the right thing for themselves.
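The underlying logic can be captured in a toy model (my own illustrative sketch, not from the lecture): each actor keeps the whole gain from exploiting the commons, but bears only a fraction of the damage.

```python
# Toy tragedy-of-the-commons model (illustrative numbers of my own choosing).
# N herders share a pasture. Adding an animal yields a private GAIN to its
# owner, but inflicts a COST on the pasture that is shared equally by all N.

N = 10       # herders sharing the commons
GAIN = 1.0   # private benefit of adding one extra animal
COST = 3.0   # total damage one extra animal does to the shared pasture

def net_payoff(i_add, others_adding):
    """One herder's net outcome: private gain (if they add an animal),
    minus their 1/N share of the damage from every animal added."""
    animals_added = (1 if i_add else 0) + others_adding
    return (GAIN if i_add else 0.0) - COST * animals_added / N

# Whatever the others do, adding is individually rational: it improves the
# herder's own payoff by GAIN - COST/N = +0.7 in every case.
print(net_payoff(True, 0) - net_payoff(False, 0))          # ~0.7
print(net_payoff(True, N - 1) - net_payoff(False, N - 1))  # ~0.7

# Yet if everyone follows that logic, all are worse off than if none had:
print(net_payoff(True, N - 1))   # -2.0 each when everyone adds
print(net_payoff(False, 0))      #  0.0 each when nobody adds
```

Because GAIN exceeds COST/N while falling short of COST, individual rationality and collective rationality point in opposite directions.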

The special case of Finance

Finance is by far the most sophisticated of the resource management systems:

  • The best graduates go into the Treasury, the Federal Reserve, etc.
  • They are best endowed – the elite organisation
  • These people know each other – they play golf together.

If even the financial bodies can’t understand their own system, this has bleak implications for other systems.

The growth of the financial markets had two underbellies:

  1. Growing inequality
  2. Growing potential for systemic risk

The growing inequality has actually led to lobbying that exacerbates inequality even further.

The result was a “Race to the bottom”, with governments being persuaded to get out of the regulation of things that actually did need to be regulated.

Speaking after the crisis, Hank Paulson, US Treasury Secretary and former CEO of Goldman Sachs, in effect said “we just did not understand what was happening” – even with all the high-calibre people and advice available to him. That’s a shocking indictment.

The need for regulation

Globalisation requires regulation, not just at the individual national level, but at an international level.

Global organisations are weaker now than in the 1990s.

Nations are becoming more parochial – the examples of UK (thinking of leaving EU) and Scotland (thinking of leaving UK) are mirrored elsewhere too.

Yes, integration brings issues that are hard to control, but the response to withdraw from integration is terribly misguided.

We cannot put back the walls. Trying to withdraw into local politics is dreadfully misguided.

Five examples

His book has five examples as illustrations of his general theme (and that’s without talking in this book about poverty, or nuclear threats):

  1. Finance
  2. Pandemics
  3. Migration
  4. Climate change
  5. Cyber-security

Many of these problems arise from the success of globalisation – the extraordinary rise in incomes worldwide in the last 25 years.

Pandemics require supra-national attention, because of increased connectivity:

  • The rapid spread of swine flu was correlated tightly with aircraft travel.
  • It will just take 2 days for a new infectious disease to travel all the way round the world.

The idea that you can isolate yourself from the world is a myth. There’s little point having a quarantine regime in place in Oxford if a disease is allowed to flourish in London. The same applies between countries, too.

Technology developments exacerbate the problem. DNA analysis is a good thing, but the capacity to synthesise diseases has terrible consequences:

  • There’s a growing power for even a very small number of individuals to cause global chaos, e.g. via pathogens
  • Think of something like Waco, Texas – people who are fanatical Armageddonists – but with greater technical skills.

Cyber-security issues arise from the incredible growth in network connectivity. Jonathan Zittrain talks about “The end of the Internet”:

  • The Internet is not governed by governments
  • Problems to prosecute people, even when we know who they are and where they are (but in a different jurisdiction)
  • Individuals and small groups could destabilise the whole Internet.

Migration is another “orphan issue”. No international organisation has the authority to deal with it:

  • Control over immigration is, in effect, an anarchic, bullying system
  • We have very bad data on migration (even in the UK).

The existing global institutions

The global institutions that we have were a response to post-WW2 threats.

For a while, these institutions did well. The World Bank = Bank for reconstruction. It did lead a lot of reconstruction.

But over time, we became complacent. The institutions became out-dated and lost their vitality.

The recent financial crisis shows that the tables have been turned: witness the incredible scene of the EU taking its begging bowl to China.

The tragedy is that the lessons well-known inside the existing institutions have not been learned. There are lessons about the required sequencing of reforms, etc. But with the loss of vitality of these institutions, the knowledge is being lost.

The EU has very little bandwidth for managing global affairs. Same as US. Same as Japan. They’re all preoccupied by local issues.

The influence of the old G7 is in decline. The new powers are not yet ready to take over the responsibility: China, Russia, India, Indonesia, Brazil, South Africa…

  • The new powers don’t actually want this responsibility (different reasons for different countries)
  • China, the most important of the new powers, has other priorities – managing their own poverty issues at home.

The result is that no radical reform happens, of the international institutions:

  • No organisations are killed off
  • No new ones created
  • No new operating principles are agreed.

Therefore the institutions remain ineffective. Look at the lack of meaningful progress towards solving the problems of climate change.

He has been on two Bretton Woods reform commissions, along with “lots of wonderfully smart, well-meaning people”. Four prime ministers were involved, including Gordon Brown. Kofi Annan received the report with good intentions. But no actual reform of UN took place. Governments actually want these institutions to remain weak. They don’t want to give up their power.

It’s similar to the way that the UK is unwilling to give up power to Brussels.

Sleep-walking

The financial crisis shows what happens when global systems aren’t managed:

  • Downwards spiral
  • Very hard to pull it out afterwards.

We are sleep-walking into global crises. The financial crisis is just a foretaste of what is to come. However, this need not be the case.

A positive note

He’ll finish the lecture by trying to be cheerful.

Action on global issues requires collective action by both citizens and leaders who are not afraid to relinquish power.

The good news:

  • Citizens are more connected than ever before
  • Ideologies that have divided people in the past are reducing in power
  • We can take advantage of the amplification of damage to reputation that can happen on the Internet
  • People can be rapidly mobilised to overturn bad legislation.

Encouraging example of SOPA debate in US about aspects of control of the Internet:

  • 80 million people went online to show their views, in just two days
  • Senate changed their intent within six hours.

Some good examples where international coordination works

  • International plane travel coordination (air traffic control) is example that works very well – it’s a robust system
  • Another good example: the international postal system.

What distinguishes the successes from the failures:

  • In the Air Traffic Control case, no one’s interests conflict
  • But in other cases, there are lots of vested interests – neutering the effectiveness of e.g. the international response to the Syrian crisis
  • Another troubling failure example is what happened in Iraq – it was a travesty of what the international system wanted and needed.

Government leaders are afraid that electorates aren’t ready to take a truly international perspective. To be internationalist in political circles is increasingly unfashionable. So we need to change public opinion first.

Like-minded citizens need to cooperate, building a growing circle of legitimacy. Don’t wait for the global system to play catch-up.

In the meantime, true political leaders should find some incremental steps, and should avoid the excuse of global inaction.

Sadly, political leaders are often tied up addressing short-term crises, but these short-term crises are due to no-one satisfactorily addressing the longer-term issues. With inaction on the international issues, the short-term crises will actually get worse.

Avoiding the perfect storm

The scenario we face for the next 15-20 years is “perfect storm with no captain”.

He calls for a “Manhattan project” for supra-national governance. His book is a contribution to initiating such a project.

He supports the subsidiarity principle: decisions should be taken at the most local level possible. Due to hyper-globalisation, there are fewer and fewer things that it makes sense to control at the national level.

Loss of national sovereignty is inevitable. We can have better sovereignty at the global level – and we can influence how that works.

The calibre of leaders

Example of leader who consistently took a global perspective: Nelson Mandela. “Unfortunately we don’t have many Mandelas around.”

Do leaders owe their power bases with electorates to their parochialism? The prevailing wisdom is that national leaders have to shy away from taking a global perspective. But electorates actually have more wisdom. They know the financial crisis wasn’t just due to bankers in Canary Wharf having overly large bonuses. They know the problems are globally systemic in nature, and need global approaches to fix them.


