
6 April 2025

Choose coordination not chaos

Filed under: Abundance, AGI, chaos, collaboration — David Wood @ 10:46 pm

Note: this document is subject to change as more feedback is received. Check back for updates later. This is version 1.1c.

Preamble

A critically important task in the coming months is to inspire a growing wave of people worldwide to campaign for effective practical coordination of the governance of advanced AI. That’s as an alternative to leaving the development and deployment of advanced AI to follow its present chaotic trajectory.

The messages that need to be conveyed, understood, and acted upon are these:

  • The successful governance of advanced AI will result in profound benefit for everyone, whereas a continuation of the present chaotic state of affairs risks global catastrophe
  • The successful governance of advanced AI isn’t some impossible dream, but lies within humanity’s grasp
  • Nevertheless, real effort, real intelligence, and, yes, real coordination will be needed, so humanity can reach a world of unprecedented abundance, rather than sleepwalk into disaster.

What would have great value is a campaign slogan that conveys the above insights, and which is uplifting, unifying, easily understood, and forward-looking. The right slogan would go viral, and would galvanise people in all walks of life to take positive action.

To that end, that slogan should ideally be

  • Memorable and punchy
  • Emotionally resonant
  • Credibly audacious
  • Universally understandable
  • Open-ended enough to invite participation

To be clear, that slogan should not cause panic or despair, but should put people into a serious frame of mind.

A specific proposal

After several days of online brainstorming, in which numerous ideas were presented and discussed, I now present what I consider the best option so far.

Choose coordination not chaos, so AI brings abundance for all

If there is a need for a three-word version of this, options include

  • Coordination not chaos
  • Choose AI abundance

Of course, what’s needed isn’t just a standalone slogan. Therefore, please find here also a sample illustrative image and, most important, a set of talking points to round out the concept.

About the image

Here’s the commentary by ChatGPT, when asked to suggest an image to illustrate this campaign slogan:

Concept: Two contrasting futures from a single branching path.

Scene:

  • A wide landscape split into two diverging paths or realms — one vibrant and coordinated, the other chaotic and fragmented.
  • In the coordinated half:
    • A harmonious world — sustainable cities, diverse communities collaborating with AI, lush green spaces, clean tech, and open exchanges of knowledge and creativity.
    • Subtle signs of AI woven into infrastructure: responsive lighting, robotic assistants, AI-powered transport.
  • In the chaotic half:
    • A fractured world — disconnected enclaves, pollution, conflict, neglected tech, and isolated individuals overwhelmed by noise or misinformation.
    • AI appears uncontrolled — surveillance drones, malfunctioning robots, or broken screens.

Central focus:

  • A group of people at the fork in the path, pointing and stepping toward the coordinated future, with calm, confident AI assistants guiding the way.

(Aside: although the actual image produced arguably needs more work, the concept described by ChatGPT is good. And it’s reassuring that the slogan, by itself, produced a flow of ideas resonant with the intended effect.)

Talking points

The talking points are condensed to a single slide in the original post; here they are in more detail:

1. Humanity’s superpower: coordination

Humanity’s most important skill is sometimes said to be our intelligence – our ability to understand the world, and to make plans to achieve specific outcomes.

But another skill that’s at least as important is our ability to coordinate, that is, our ability to:

  • Share insights with each other
  • Operate in teams where people have different skills
  • Avoid needless conflict
  • Make and uphold agreements
  • Accept individual constraints on our action, with the expectation of experiencing greater freedom overall.

Coordination may be informal or formal. It can be backed up by shared narratives and philosophies, by legal systems, by the operation of free markets, by councils of elders, and by specific bodies set up to oversee activities at local, regional, or international levels.

Here are some examples of types of agreements on individual constraints for shared mutual benefit:

  • Speed limits for cars, to reduce the likelihood of dangerous accidents
  • Limits on how much alcohol someone can drink before taking charge of a car
  • Requirements to maintain good hygiene during food preparation
  • Requirements to assess the safety of a new pharmaceutical before deploying it widely
  • Prohibitions against advertising that misleads consumers into buying faulty goods
  • Rules preventing over-fishing, or the overuse of shared “commons” resources
  • Rules of various sports and games – and agreed sanctions on any cheaters
  • Prohibitions against politicians misleading parliaments – and agreed sanctions on any cheaters
  • Prohibitions against the abuse of children
  • Rules governing the conduct of soldiers – which apply even in times of war
  • Restrictions on the disposal of waste
  • Rules governing ownership of dangerous breeds of dog
  • Rules governing the spread of dangerous materials, such as biohazards

Note that coordination is often encouraging rather than restrictive. This includes

  • Prizes and other explicit incentives
  • Implicit rewards for people with good reputation
  • Market success for people with good products and services

The fact that specific coordination rules and frameworks have their critics doesn’t mean that the whole concept of coordination should be rejected. It just means that we need to keep revising our coordination processes. That is, we need to become better at coordinating.

2. Choosing coordination, before chaos ensues

When humanity uncovers new opportunities, it can take some time to understand the implications and to create or update the appropriate coordination rules and frameworks for these opportunities:

  • When settlers on the island of Mauritius discovered the dodo – a large, flightless bird – they failed to put in place measures to prevent that bird becoming extinct only a few decades later
  • When physicists discovered radioactivity, it took some time to establish processes to reduce the likelihood that researchers would develop cancer due to overexposure to dangerous substances
  • Various new weapons (such as chemical gases) were at first widely used in battle zones, before implicit and then explicit agreement was reached not to use such weapons
  • Surreptitious new doping methods used by athletes to gain extra physical advantage result, eventually, in updates to rules on monitoring and testing
  • Tobacco was widely used – and even encouraged, sometimes by medical professionals – before society decided to discourage its use (against the efforts of a formidable industry)
  • Similar measures are now being adopted, arguably too slowly, against highly addictive food products that are thought to cause significant health problems
  • New apps and online services which spread hate speech and other destabilising misinformation surely need some rules and restrictions too, though there is considerable debate over what form of governance is needed.

However, if appropriate coordination is too slow to be established, or is too weak, or exists in words only (without the backup of meaningful action against rules violators), the result can be chaos:

  • Rare animals are hunted to extinction
  • Fishing stocks are depleted to the extent that the livelihood of fishermen is destroyed
  • Economic transactions impose negative externalities on third parties
  • Dangerous materials, such as microplastics, spread widely in the environment
  • No-one is sure what rules apply in sports, and which rules will be enforced
  • Normal judiciary processes are subverted in favour of arbitrary “rule of the in-group”
  • Freedoms previously enjoyed by innovative new start-ups are squelched by the so-called “crony capitalism” of monopolies and cartels linked to the ruling political regime
  • Literal arms races take place, with ever-more formidable weapons being rushed into use
  • Similar races take place to bring new products to market without adequate safety testing

Groups of people who are (temporarily) faring well from the absence of restraints on their action are likely to oppose rules that alter their behaviour. That’s the experience of nearly every industry whose products or services were discovered to have dangerous side-effects, but where insiders fought hard to suppress the evidence of these dangers.

Accordingly, coordination does not arise by default. It needs explicit choice, backed up by compelling analysis, community engagement, and strong enforcement.

3. Advanced AI: the promise and the peril

AI could liberate humanity from many of our oldest problems.

Despite huge progress of many kinds over the centuries, humans still often suffer grievously on account of various aspects of our nature, our environment, our social norms, and our prevailing philosophies. Specifically, we are captive to

  • Physical decline and aging
  • Individual and collective mental blindspots and cognitive biases (“stupidity”)
  • Dysfunctional emotions that render us egotistical, depressed, obsessive, and alienated
  • Deep psychosocial tendencies toward divisiveness, xenophobia, deception, and the abuse of power

However, if developed and used wisely, advanced AI can enable rejuvenation and enhancement of our bodies, minds, emotions, social relations, and our links to the environment (including the wider cosmos):

  • AI can accelerate progress with nanotech, biotech, and cognotech
  • In turn, these platform technologies can accelerate progress with abundant low-cost clean energy, nutritious food, healthcare, education, security, creativity, spirituality, and the exploration of marvellous inner and outer worlds

In other words, if developed and used wisely, advanced AI can set humanity free to enjoy a much better quality of life.

However, if developed and used unwisely, advanced AI is likely to cause catastrophe:

  • Via misuse by people who are angry, alienated, or frustrated
  • Via careless use by people who are naive, overconfident, or reckless
  • Via AI operating beyond our understanding and control
  • Via autonomous AI adopting alien modes of rationality and alien codes of ethics

The key difference between these two future scenarios is whether the development and use of AI is wisely steered, or instead follows a default path of deprioritising any concerns about safety:

  • The default path involves AI whose operation is opaque, which behaves deceptively, which lacks moral compass, which can be assigned to all kinds of tasks with destructive side-effects, and which often disregards human intentions
  • Instead, if AI is wisely harnessed, it will deliver value as a tool, but without any intrinsic agency, autonomy, volition, or consciousness
  • Such a tool can have high creativity, but won’t use that creativity for purposes opposed to human wellbeing

To be clear, there is no value in winning a reckless race to be the first to create AI with landmark new features of capability and agency. Such a race is a race to oblivion, also known as a suicide race.

4. The particular hazards of advanced AI

The dangers posed by AI don’t arise from AI in isolation. They involve AI in the hands of fallible, naïve, over-optimistic humans, who are sometimes driven by horrible internal demons. It’s AI summoned and used, not by the better angels of human nature, but by the darker corners of our psychology.

Although we humans are often wonderful, we sometimes do dreadful things to each other – especially when we have become angry, alienated, or frustrated. Add in spiteful ideologies of resentment and hostility, and things can become even uglier.

Placing technology in the hands of people in their worst moments can lead to horrific outcomes. The more powerful the technology, the bigger the potential abomination:

  • The carnage of a frenzied knife attack or a mass shooting (where the technology in question ranges from a deadly sharp knife to an automatic rifle)
  • The chaos when motor vehicles are deliberately propelled at speed into crowds of innocent pedestrians
  • The deaths of everyone on board an airplane, when a depressed pilot ploughs the craft into a mountainside or deep into an ocean, in a final gesture of defiance to what they see as an unfair, uncaring world
  • The destruction of iconic buildings of a perceived “great satan”, when religious fanatics have commandeered jet airliners in service of the mental pathogen that has taken over their minds
  • The assassination of political or dynastic rivals, by the mixing of biochemicals that are individually harmless, but which in combination are frightfully lethal
  • The mass poisoning of commuters in a city subway, when deadly chemicals are released at the command of a cult leader who fancies himself the rightful emperor of Japan, and who has beguiled otherwise intelligent followers into trusting his every word.

How does advanced AI change this pattern of unpleasant possibilities? How is AI a significantly greater threat than earlier technologies? In six ways:

  1. As AI-fuelled automation displaces more people from their work (often to their surprise and shock), it predisposes more people to become bitter and resentful
  2. AI is utilised by merchants of the outrage industrial complex, to convince large numbers of people that their personal circumstance is more appalling than they had previously understood, that a contemptible group of people over there are responsible for this dismal turn of events, and that the appropriate response is to utterly defeat those deplorables
  3. Once people are set on a path to obtain revenge, personal recognition, or just plain pandemonium, AIs can make it much easier for them to access and deploy weapons of mass intimidation and mass destruction
  4. Due to the opaque, inscrutable nature of many AI systems, the actual result of an intended outrage may be considerably worse even than what the perpetrator had in mind; this is similar to how malware sometimes causes much more turmoil than the originator of that malware intended
  5. An AI with sufficient commitment to the goals it has been given will use all its intelligence to avoid being switched off or redirected; this multiplies the possibility that an intended local outrage might spiral into an actual global catastrophe
  6. An attack powered by fast-evolving AI can strike unexpectedly at core aspects of the infrastructure of human civilization – our shared biology, our financial systems, our information networks, or our hair-trigger weaponry – exploiting any of the numerous fragilities in these systems.

And it’s not just missteps from angry, alienated, or frustrated people that we have to worry about. We also need to beware potential cascades of trouble triggered by the careless actions of people who are well-intentioned, but naive, over-optimistic, or simply reckless, in how they use AI.

The more powerful the AI, the greater the dangers.

Finally, the unpredictable nature of emergent intelligence carries with it another fearsome possibility: a general intelligence, with alien modes of thinking far beyond our own understanding, might decide to adopt an alien set of ethics, in which the wellbeing of eight billion humans merits only minuscule consideration.

That’s the argument against simply following a default path of “generate more intelligence, and trust that the outcome is likely to be beneficial for humanity”. It’s an argument that should make everyone pause for thought.

5. A matter of real urgency

How urgent is the task of improving global coordination of the governance of advanced AI?

It is sometimes suggested that progress with advanced AI is slowing down, or is hitting some kind of “wall” or other performance limit. There may be new bottlenecks ahead. Or diseconomies of scale may supersede the phenomenon of economies of scale which has characterised AI research over the last few years.

However, despite these possibilities, the case remains urgent:

  • Even if one approach to improving AI runs out of steam, huge numbers of researchers are experimenting with promising new approaches, including approaches that combine current state-of-the-art methods into new architectures
  • Even if AI stops improving, it is already dangerous enough to risk incidents in which large numbers of people are harmed
  • Even if AI stops improving, clever engineers will find ways to take better advantage of it – thereby further increasing the risks arising, if it is badly configured or manifests unexpected behaviour
  • There is no guarantee that AI will actually stop improving; making that assumption is too much of a risk to take on behalf of the entirety of human civilisation
  • Even if it will take a decade or longer for AI to reach a state in which it poses true risks of global catastrophe, it may also take decades for governance systems to become effective and practical; the lessons from ineffective efforts to prevent runaway climate change are by no means encouraging here
  • Even apart from the task of coordinating matters related to advanced AI, human civilisation faces other deep challenges that also require effective coordination on the global scale – coordination that, as mentioned, is currently failing on numerous grounds.

So, there’s an imperative to “choose coordination not chaos” independent of the question of whether advanced AI will lead to abundance or to a new dark age.

6. A promising start and an unfortunate regression

Humanity actually made a decent start in the direction of coordinating the development of advanced AI, at the Global AI Safety Summits in the UK (November 2023) and South Korea (May 2024).

Alas, the next summit in that series, in Paris (February 2025), was overtaken by political correctness, by administrivia, by virtue signalling, and, most of all, by people with a woefully impoverished understanding of the existential opportunities and risks of advanced AI. Evidently, the task of raising true awareness needs to be powerfully re-energised.

There’s still plenty of apparent global cooperation taking place – lots of discussions and conferences and summits, with people applauding the fine-sounding words in each other’s speeches. “Justice and fairness, yeah yeah yeah!” “Transparency and accountability, yeah yeah yeah!” “Apple pie and blockchain, yeah yeah yeah!” “Intergenerational intersectionality, yeah yeah yeah!”

But the problem is the collapse of effective, practical global cooperation, regarding the hard choices about which aspects of advanced AI should be promoted, and which should be restricted.

Numerous would-be coordination bodies are struggling with the same set of issues:

  • It’s much easier to signal virtue than to genuinely act virtuously.
  • Too many of the bureaucrats who run these bodies are out of their depth when it comes to understanding the existential opportunities and risks of advanced AI.
  • Seeing no prospect of meaningful coordination, many of the big tech companies invited to participate do so in a way that obfuscates the real issues while maintaining their public image as “trying their best to do good”.
  • The process is undermined by people who can be called “reckless accelerationists” – people who are willing to gamble that the chaotic processes of creating advanced AI as quickly as possible will somehow result in a safe, beneficial outcome (and, in some cases, these accelerationists would even take a brief perverted pleasure if humanity were rendered extinct by a non-sentient successor AI species); the accelerationists don’t want the public as a whole to be in any position to block their repugnant civilisational Russian roulette.

How to address this dilemma is arguably the question that should transcend all others, regarding the future of humanity.

7. Overcoming the obstacles to effective coordination of the governance of advanced AI

To avoid running aground on the same issues as in the past, it’s important to bear in mind the five main reasons for the failure, so far, of efforts to coordinate the governance of advanced AI. They are:

  • Fear that attempts to control the development of AI will lead to an impoverished future, or a future in which the world is controlled by people from a different nation (e.g. China)
  • Lack of appreciation of the grave perils of the current default chaotic course
  • A worry that any global coordination would lurch toward a global dictatorship, with its own undeniable risks of catastrophe
  • The misapprehension that, without the powers of a global dictatorship, any attempts at global coordination are bound to fail, so they are a waste of time
  • The power that Big Tech possesses, allowing it to ignore half-hearted democratic attempts to steer its activities.

In broad terms, these obstacles can be overcome as follows:

  • Emphasising the positive outcomes, including abundance, freedom, and all-round wellbeing – and avoiding the psychologically destabilising outlook of “AI doomers”
  • Increasing the credibility and relatability of scenarios in which ungoverned advanced AI leads to catastrophe – but also the credibility and relatability of scenarios in which humanity’s chaotic tendencies can be overcome
  • Highlighting previous examples when the governance of breakthrough technology was at least partially successful, rather than developers being able to run amok – examples such as genetic recombination therapies, nuclear non-proliferation, and alternatives to the chemicals that caused the hole in the ozone layer
  • Demonstrating the key roles that decentralised coordination should play, as a complement to the centralised roles that nation states can play
  • Clarifying how global coordination of advanced AI can start with small agreements and then grow in scale, without individual countries losing sovereignty in any meaningful way.

8. Decentralised reputation management – rewards for good behaviour

What is it that leads individuals to curtail their behaviour, in conformance with a set of standards promoted in support of a collaboration?

In part, it is the threat of sanction or control – whereby an individual might be fined or imprisoned for violating the agreed norms.

But in part, it is because of reputational costs when standards are ignored, side-lined, or cheated. The resulting loss of reputation can result in declining commercial engagement or reduced social involvement. Cheaters and freeloaders risk being excluded from future new opportunities available to other community members.

These reinforcement effects are strongest when the standards received community-wide support while being drafted and adopted – rather than being imposed by what could be seen as outside forces or remote elites.

Some reputation systems operate informally, especially in small or local settings. For activities with wider involvement, online rating systems come into their own. For example, consider the reputation systems for product reviews, in which the reputation of individual reviewers changes the impact of their reviews. There are similarities, as well, to how webpages are ranked in response to search queries: pages that receive links from other pages with high reputation tend, in consequence, to be placed more prominently in the listing. A minimal sketch of this mechanism is given below.
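As a rough illustration of that link-based idea, here is a minimal sketch, in the spirit of PageRank, of how reputation could propagate through a web of endorsements. All entity names, scores, and the damping factor are hypothetical, invented purely for illustration; this does not describe any specific deployed rating system.

```python
# Minimal sketch of link-based reputation weighting, in the spirit of
# PageRank: an entity's reputation is boosted by endorsements from
# entities that are themselves highly reputed. All names and numbers
# here are hypothetical.

endorsements = {
    "LabA": ["LabB", "AuditorX"],   # LabA endorses LabB and AuditorX
    "LabB": ["AuditorX"],
    "AuditorX": ["LabA"],
}

entities = list(endorsements)
reputation = {e: 1.0 / len(entities) for e in entities}
damping = 0.85  # conventional PageRank damping factor

for _ in range(50):  # iterate until the scores roughly stabilise
    new_reputation = {}
    for e in entities:
        # Each endorser shares its reputation equally among its endorsees.
        inbound = sum(
            reputation[src] / len(targets)
            for src, targets in endorsements.items()
            if e in targets
        )
        new_reputation[e] = (1 - damping) / len(entities) + damping * inbound
    reputation = new_reputation

for e, score in sorted(reputation.items(), key=lambda kv: -kv[1]):
    print(f"{e}: {score:.3f}")
```

The key design property is that an endorsement counts for more when it comes from an entity that is itself highly reputed, which makes the score harder to game with large numbers of low-reputation sock-puppet endorsements.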

Along these lines, reputational ratings can be assigned to individuals, organisations, corporations, and countries, based on their degree of conformance to agreed principles for trustworthy coordinated AI. Entities with poor AI coordination ratings should be shunned. Other entities that fail to take account of AI coordination ratings when picking suppliers, customers, or partners should in turn be shunned too. Conversely, entities with high ratings should be embraced and celebrated.

An honest, objective assessment of conformance to the above principles should become more significant, in determining overall reputation, than, for example, wealth, number of online followers, or share price.

Emphatically, the reputation score must be based on actions, not words – on concrete, meaningful steps rather than behind-the-scenes fiddling, and on true virtue rather than virtue-signalling. Accordingly, deep support should be provided for any whistleblowers who observe and report on any cheating or other subterfuge.

In summary, this system involves:

  • Agreement on which types of AI development and deployment to encourage, and which to discourage, or even ban
  • Agreement on how to assign reputational scores, based on conformance to these standards
  • Agreement on what sanctions are appropriate for entities with poor reputations – and, indeed, what special rewards should flow to entities with good reputations.

All three elements of this system need to evolve, not under the dictation of central rulers, but as a result of a grand open conversation, in which ideas rise to the surface because they make good sense, rather than because they are shouted in the loudest voice.

That is, decentralised mechanisms have a vital role to play in encouraging and implementing wise coordination of advanced AI. But centralised mechanisms have a vital role too, as discussed next.

9. Starting small and then growing in scale

If someone continues to ignore social pressures, and behaves irresponsibly, how can the rest of society constrain them? Ultimately, force needs to be applied. A car driver who recklessly breaks speed limits will be tracked down, asked to stop, and if need be, will be forced off the road. A vendor who recklessly sells food prepared in unhygienic conditions will be fined, forbidden to set up new businesses, and if need be, will be imprisoned. Scientists who experiment with highly infectious biomaterials in unsafe ways will lose their licence and, if need be, their laboratories will be carefully closed down.

That is, society is willing to grant special powers of enforcement to some agents acting on behalf of the entire community.

However, these special powers carry their own risks. They can be abused, in order to support incumbent political leaders against alternative ideas or opposition figures.

The broader picture is as follows. Societies can fail in two ways: too little centralised power, or too much centralised power.

  • In the former case, societies can end up ripped apart by warring tribes, powerful crime families, raiding gangs from neighbouring territories, corporations that act with impunity, and religious ideologues who stamp their contentious visions of “the pure and holy” on unwilling believers and unbelievers alike
  • In the latter case, a state with unchecked power diminishes the rights of citizens, dispenses with the fair rule of law, imprisons potential political opponents, and subverts economic flows for the enrichment of the leadership cadre.

The healthiest societies, therefore, possess both a strong state and a strong civil society. That’s one meaning of the celebrated principle of the separation of powers. The state is empowered to act, decisively if needed, against any individual cancers that would threaten the health of the community. But the state is informed and constrained by an independent, well-organised judiciary, by the media, by academia, by credible opposition parties, and by other institutions of civil society.

It should be the same with the governance of potential rogue or naïve AI developers around the world. Via processes of decentralised deliberations, taking account of input from numerous disciplines, agreement should be reached on which limits are vital to be observed.

Inevitably, different participants in the process will have different priorities for what the agreements should contain. In some cases, the limits imposed might vary between different jurisdictions, within customisation frameworks agreed globally. But there should be clear acceptance that some ways of developing or deploying advanced AIs need to be absolutely prevented. To prevent the agreements from unravelling at the earliest bumps in the road, it will be important that they are reached unanimously among the representatives of the jurisdictions where the most powerful collections of AI developers are located.

The process to reach agreement can be likened to the deliberations of a jury in a court case. In most cases, jury members with initially divergent opinions eventually converge on a conclusion. In cases when the process becomes deadlocked, it can be restarted with new representative participants. With the help of expert facilitators – themselves supported by excellent narrow AI tools – creative new solutions can be introduced for consideration, making an ultimate agreement more likely.

To start with, these agreements might be relatively small in scope, such as “don’t place the launch of nuclear weapons under AI control”. Over time, as confidence builds, the agreements will surely grow. That’s because of the shared recognition that so much is at stake.

Of course, for such agreements to be meaningful, there needs to be a reliable enforcement mechanism. That’s where the state needs to act – with the support and approval of civil society.

Within entire countries that sign up to this AI coordination framework, enforcement is relatively straightforward. The same mechanisms that enforce other laws can be brought to bear against any rogue or naïve AI developers.

The challenging part is when countries fail to sign up to this framework, or do so deceitfully, that is, with no intention of keeping their promises. In such a case, it will fall to other countries to ensure conformance, via, in the first place, measures of economic sanction.

To make this work, all that’s necessary is that a sufficient number of powerful countries sign up to this agreement. For example, if the G7 do so, plus China and India, along with countries that are “bubbling under” G7 admission (like Australia, South Korea, and Brazil), that should be sufficient. Happily, there are many AI experts in all these countries who have broad sympathies to the kinds of principles spelt out in this document.

As for potential maverick nations such as Russia and North Korea, they will have to weigh up the arguments. They should understand – like all other countries – that respecting such agreements is in their own self-interest. To help them reach such an understanding, appropriate pressure from China, the USA, and the rest of the world should make a decisive difference.

This won’t be easy. At this pivotal point of history, humanity is being challenged to use our greatest strength in a more profound way than ever before – namely, our ability to collaborate despite numerous differences. On reflection, it shouldn’t be a surprise that the unprecedented challenges of advanced AI technology will require an unprecedented calibre of human collaboration.

If we fail to bring together our best talents in a positive collaboration, we will, sadly, fulfil the pessimistic forecast of the eighteenth-century Anglo-Irish statesman Edmund Burke, paraphrased as follows: “The only thing necessary for the triumph of evil is that good men fail to associate, and do nothing”. (The original quote is this: “No man … can flatter himself that his single, unsupported, desultory, unsystematic endeavours are of power to defeat the subtle designs and united cabals of ambitious citizens. When bad men combine, the good must associate; else they will fall, one by one, an unpitied sacrifice in a contemptible struggle.”) Or, updating the wording slightly, “The only thing necessary for chaos to prevail is that good men fail to coordinate wisely”.

A remark from the other side of the Atlantic from roughly the same time, attributed to Benjamin Franklin, conveys the same thought in different language: “We must… all hang together, or assuredly we shall all hang separately”.

10. Summary: The nucleus of a wider agreement, and call to action

Enthusiasm for agreements to collaborate on the governance of advanced AIs will grow as a set of insights becomes more widely and more deeply understood. These insights can be stated as follows:

  1. It’s in the mutual self-interest of every country to constrain the development and deployment of what could become catastrophically dangerous AI; that is, there’s no point in winning what could be a reckless suicide race to create powerful new types of AI before anyone else
  2. The major economic and humanitarian benefits that people hope will be delivered by the hasty development of advanced AI (benefits including all-round abundance, as well as solutions to various existential risks), can in fact be delivered much more reliably by AI systems that are constrained, and by development systems that are coordinated rather than chaotic
  3. A number of attractive ideas already exist regarding potential policy measures (regulations and incentives) which can be adopted, around the world, to prevent the development and deployment of what could become catastrophic AI – for example, measures to control the spread and use of vast computing resources, or to disallow AIs that use deception to advance their goals
  4. A number of good ideas also exist and are ready to be adopted around the world, regarding options for monitoring and auditing, to ensure the strict application of the agreed policy measures – and to prevent malign action by groups or individuals that have, so far, failed to sign up to these policies, or who wish to cheat them
  5. All of the above can be achieved without any detrimental loss of individual sovereignty: the leaders of countries can remain masters within their own realms, as they desire, provided that the above basic AI coordination framework is adopted and maintained
  6. All of the above can be achieved in a way that supports evolutionary changes in the AI coordination framework as more insight is obtained; in other words, this system can (and must) be agile rather than static
  7. Even though this coordination framework is yet to be fully agreed, there are plenty of ideas for how it can be rapidly developed, so long as that project is given sufficient resources, and the best brains from multiple disciplines are encouraged to give it their full attention
  8. Ring-fencing sufficient resources to further develop this AI coordination framework, and associated reputational ratings systems, should be a central part of every budget
  9. Reputational ratings that can be assigned, based on the above principles, will play a major role in altering behaviours of the many entities involved in the development and deployment of advanced AI.

Or, to summarise this summary: Choose coordination not chaos, so AI brings abundance for all.

Now is the time to develop these ideas further (by all means experiment with ways to simplify their expression), to find ways to spread them more effectively, and to be alert for newer, better insights that arise from the resulting open global conversation.

Other ideas considered

The ideas presented above deserve attention, regardless of which campaign slogans are adopted.

For comparison, here is a list of other possible campaign slogans, along with reservations that have been raised about each of them:

  • “Pause AI” (too negative)
  • “Control AI” (too negative)
  • “Keep the Future Human” (insufficiently aspirational)
  • “Take Back Control from Big Tech” (doesn’t characterise the problem accurately enough)
  • “Safe AI for sustainable superabundance” (overly complex concepts)
  • “Choose tool AI instead of AGI” (lacks a “why”)
  • “Kind AI for a kinder world” (perhaps too vague)
  • “Narrow AI to broaden humanity’s potential” (probably too subtle)
  • “Harness AI to liberate humanity” (terminology overly scholarly or conceptual).

Also for comparison, consider the following set of slogans from other fields:

  • “Yes we can” (Barack Obama, 2008)
  • “Make America great again” (Donald Trump, 2016)
  • “Take back control” (UK Brexit slogan)
  • “Think different” (Apple)
  • “Because you’re worth it” (L’Oréal)
  • “Black lives matter”
  • “Make love, not war”
  • “For the Many, Not the Few” (Jeremy Corbyn, 2017)
  • “Get Brexit done” (Boris Johnson, 2019)
  • “Not Me. Us” (Bernie Sanders, 2020)
  • “We shall fight on the beaches” (Winston Churchill, 1940)
  • “It’s Morning Again in America” (Ronald Reagan, 1984)
  • “Stay Home. Save Lives” (Covid-19 messaging)
  • “Clunk click every trip” (encouraging the use of seat belts in cars)
  • “We go to the moon, not because it is easy, but because it is hard” (JFK, 1962)
  • “A microcomputer on every desk and in every home running Microsoft software” (Bill Gates, 1975)
  • “To organise the world’s information and make it universally accessible and useful” (Google, 1998)
  • “Accelerating the world’s transition to sustainable energy” (Tesla, 2016)
  • “Workers of the world, unite – you have nothing to lose but your chains” (Karl Marx, 1848)
  • “From each according to his ability, to each according to his needs” (Karl Marx, 1875)

Comments are welcome on any ideas in this article. Later revisions of this article may incorporate improvements arising from these comments.

Postscript

New suggestions under consideration, following the initial publication of this article:

  • “Harness AI now” (Robert Whitfield)

27 July 2024

Disbelieve? Accept? Resist? Steer? Simplify? or Enhance?

Six possible responses as the Economic Singularity approaches. Which do you pick?

Over the course of the next few decades, work and income might be fundamentally changed. A trend that has been seen throughout human history might be raised to a pivotal new level:

  • New technologies – primarily the technologies of intelligent automation – will significantly reduce the scope for human involvement in many existing work tasks;
  • Whilst these same technologies will, in addition, lead to the creation of new types of work tasks, these new tasks, like the old ones, will generally also be done better by intelligent automation than via human involvement;
  • That is, the new jobs (such as “robot repair engineer” or “virtual reality experience designer”) will be done better, for the most part, by advanced robots than by humans;
  • As a result, more and more people will be unable to find work that pays them what they consider to be a sufficient income.

Indeed, technological changes result, not only in new products, but in new ways of living. The faster and more extensive the technological changes, the larger the scope for changes in lifestyle, including changes in how we keep healthy, how we learn things, how we travel, how we house ourselves, how we communicate and socialise, how we entertain ourselves, and – of particular interest for this essay – how we work and how we are paid.

But here’s the dilemma in this scenario. Although automation will be capable of producing everything that people require for a life filled with flourishing, most people will be unable to pay for these goods and services. Lacking sufficient income, the majority of people will lose access to good quality versions of some or all of the following: healthcare, education, travel, accommodation, communications, and entertainment. In short, whilst a small group of people will benefit handsomely from the products of automation, the majority will be left behind.

This dilemma cannot be resolved merely by urging the left behinds to “try harder”, to “learn new skills”, or (in the words attributed to a 1980s UK politician) to “get on a bike and travel to where work is available”. Such advice was relevant in previous generations, but it will no longer be sufficient. No matter how hard they try, the majority of people won’t be able to compete with tireless, relentless smart machinery powered by new types of artificial intelligence. These robots, avatars, and other automated systems will demonstrate not only diligence and dexterity but also creativity, compassion, and even common sense, making them the preferred choice for most tasks. Humans won’t be able to compete.

This outcome is sometimes called “The Economic Singularity” – a term coined by author and futurist Calum Chace. It will involve a singular transition in humanity’s mode of economics:

  • From when most people expect to be able to earn money by undertaking paid work for a significant part of their life
  • To when most people will be unable to earn sufficient income from paid work.

So what are our options?

Here are six to consider, each of which has advocates rooting for it:

  1. Disbelieve in the possibility of any such large-scale job losses within the foreseeable future
  2. Accept the rise of new intelligent automation technologies, but take steps to place ourselves in the small subset of society that particularly benefits from them
  3. Resist the rise of these new technologies. Prevent these systems from being developed or deployed at scale
  4. Steer the rise of these new technologies, so that plenty of meaningful, high-value roles remain for humans in the workforce
  5. Simplify our lifestyles, making do with less, so that most of us can have a pleasant life even without access to the best outputs of intelligent automation technologies
  6. Enhance, with technology, not just the mechanisms to create products but also the mechanisms used in society for the sharing of the benefits of products.

In this essay, I’ll explore the merits and drawbacks of these six options. My remarks split into three sections:

  1. Significant problems with each of the first five options listed
  2. More details of the sixth option – “enhance” – which is the option I personally favour
  3. A summary of what I see as the vital questions arising – questions that I invite other writers to address.

A: Significant challenges ahead

A1: “Disbelieve”

At first glance, there’s a lot in favour of the “disbelieve” option. The evidence from human history, so far, is that technology has had three different impacts on the human workforce, with the net impact always being positive:

  • A displacement factor, in which automation becomes able to do some of the work tasks previously performed by humans
  • An augmentation factor, in which humans become more capable when they take advantage of various tools provided by technology, and are able to do some types of work task better than before – types of work that take on a new significance
  • An expansion factor, in which the improvements to productivity enabled by the two previous factors generate economic growth, leading to consumers wanting more goods and services than before. This in turn provides more opportunities for people to gain employment helping to provide these additional goods and services.

For example, some parts of the work of a doctor may soon be handled by systems that automatically review medical data, such as ultrasound scans, blood tests, and tissue biopsies. These systems will be better than human doctors in detecting anomalies, in distinguishing between false alarms and matters of genuine concern, and in recommending courses of treatment that take fully into account the unique personal circumstances of each patient. That’s the displacement effect. In principle, that might leave doctors more time to concentrate on the “soft skills” parts of their jobs: building rapport with patients, gently coaxing them to candidly divulge all factors relevant to their health, and inspiring them to follow through on courses of treatment that may, for a while, have adverse side effects. The result in this narrative: patients receive much better healthcare overall, and are therefore especially grateful to their doctors. Human doctors will remain much in demand!

More generally, automation systems might cover the routine parts of existing work, but leave in human hands the non-routine aspects – the parts which cannot be described by any “algorithm”.

However, there are three problems with this “disbelieve” narrative.

First, automation is increasingly able to cover supposedly “non-routine” tasks as well as routine tasks. Robotic systems are able to display subtle signs of emotion, to talk in a reassuring tone of voice, to suggest creative new approaches, and, more generally, to outperform humans in soft skills (such as apparent emotional intelligence) as well as in hard skills (such as rational intelligence). These systems gain their abilities, not by any routine programming with explicit instructions, but by observing human practices and learning from them, using methods known as “machine learning”. Learning via vast numbers of repeated trials in simulated virtual environments adds yet more capabilities to these systems.

Second, it may indeed be the case that some tasks will remain to be done by humans. It may be more economically effective that way: consider the low-paid groups of human workers who manually wash people’s cars, sidelining fancy machines that can also do that task. It may also be a matter of human preference: we might decide we occasionally prefer to buy handmade goods rather than ones that have been expertly produced by machines. However, there is no guarantee that there will be large numbers of these work roles. Worse, there is no guarantee that these jobs will be well-paid. Consider again the poorly paid human workers who wash cars. Consider also that Uber drivers earn less than the drivers of old-fashioned taxis once did, in the days when passengers paid a premium for the specialist navigational knowledge those drivers had acquired over many years of training.

Third, it may indeed be the case that companies that operate intelligent automation technologies receive greater revenues as a result of the savings they make in replacing expensive human workers with machinery with a lower operating cost. But there is no guarantee that this increased income, and the resulting economic expansion, will result in more jobs for humans. Instead, the extra income may be invested in yet more technology, rather than in hiring human workers.

In other words, there is no inevitability about the ongoing relevance of the augmentation and expansion factors.

What’s more, this can already be seen in the statistics of rising inequality within society:

  • A growing share of income in the hands of the top 0.1% of salary earners
  • A growing share of income from investments instead of from salaries
  • A growing share of wealth in the hands of the top 0.1% of wealth owners
  • Declining median incomes at the same time as mean incomes rise (a toy numerical illustration follows this list).
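To make that last point concrete, here is a toy calculation, with invented income figures, showing how gains concentrated at the top can pull the mean upward even while the median falls:

```python
# Toy illustration of how mean income can rise while median income falls:
# gains concentrated at the top pull the mean up without helping the middle.
# All numbers are invented for illustration.

from statistics import mean, median

before = [20, 30, 40, 50, 60, 70, 80, 90, 100, 500]
# After automation: middle incomes shrink slightly; the top income grows sharply.
after = [20, 28, 38, 48, 58, 68, 78, 88, 100, 900]

print(f"mean before: {mean(before):.0f}, after: {mean(after):.0f}")
print(f"median before: {median(before):.0f}, after: {median(after):.0f}")
```

In this invented example, the mean rises from 104 to roughly 143 while the median falls from 65 to 63: “average” prosperity improves even as the typical person becomes worse off.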

This growing inequality is due at least in part to the development and adoption of more powerful automation technologies:

  • Companies can operate with fewer human staff, and gain their market success due to the technologies they utilise
  • Online communications and comparison tools mean that lower-quality output loses its market presence more quickly to output with higher quality; this is the phenomenon of “winner takes all” (or “winner takes most”)
  • Since the contribution of human workers is less critical, any set of workers who try to demand higher wages can more easily be replaced by other workers (perhaps overseas) who are willing to accept lower wages (consider again the example of teams of car washers).

In other words, we may already be experiencing an early wave of the Economic Singularity, arriving before the full impact takes place:

  • Intelligent automation technologies are already giving rise to a larger collection of people who consider themselves to be “left behind”, unable to earn as much money as they previously expected
  • Oncoming, larger waves will rapidly increase the number of left behinds.

Any responses we have in mind for the Economic Singularity should, therefore, be applied now, to address the existing set of left behinds. That’s instead of society waiting until many more people find themselves, perhaps rather suddenly, in that situation. By that time, social turmoil may make it considerably harder to put in place a new social contract.

To be clear, there’s no inevitability about how quickly the full impact of the Economic Singularity will be felt. It’s even possible that, for unforeseen reasons, such an impact might never arise. However, society needs to think ahead, not just about inevitabilities, but also about possibilities – and especially about possibilities that seem pretty likely.

That’s the case for not being content with the “disbelieve” option. It’s similar to the case for rejecting any claims that:

  • Many previous predictions of global pandemics turned out to be overblown; therefore we don’t need to make any preparations for any future breakthrough global pandemic
  • Previous predictions of nuclear war between superpowers turned out not to be fulfilled; therefore we can stop worrying about future disputes escalating into nuclear exchanges.

No: that ostrich-like negligence, looking away from risks of social turmoil in the run-up to a potential Economic Singularity, would be grossly irresponsible.

A2: “Accept”

As a reminder, under the “accept” option, people accept that there will be large workplace disruption due to the rise of new intelligent automation technologies, with the loss of most jobs, but resolve to take steps to place themselves in the small subset of society that particularly benefits from these disruptions.

Whilst it’s common to hear people argue, in effect, for the “disbelieve” viewpoint covered in the previous subsection, it’s much rarer for someone to openly say they are in favour of the “accept” option.

Any such announcement would tend to mark the speaker as being self-centred and egotistical. They evidently believe they are among a select group who have what it takes to succeed in circumstances where most people will fail.

Nevertheless, it’s a position that some people might see as “the best of a set of bad options”. They may think to themselves: Waves of turbulence are coming. It’s not possible to save everyone. Indeed, it’s only possible to save a small subset of the population. The majority will be left behind. In that context, they urge themselves: Stop overthinking. Focus on what’s manageable: one’s own safety and security. Find one of the few available lifeboats and jump in quickly. Don’t let yourself worry about the fates of people who are doomed to a less fortunate future.

This position may strike someone as credible to the extent that they already see themselves as one of society’s winners:

  • They’ve already been successful in business
  • They assess themselves as being healthy, smart, focused, and pragmatic
  • They intend to keep on top of new technological possibilities: they’ll learn about the strengths and weaknesses of various technologies of intelligent automation, and exploit that knowledge.

What’s more, they may subscribe to a personal belief that “heroes make their own destiny”, or similar.

But before you adopt the “accept” stance, here are six risks you should consider:

  1. The skills in which you presently take pride, as supposedly being beyond duplication by any automated system, may unexpectedly be rendered obsolete due to technology progressing more quickly and more comprehensively than you expected. You might therefore find yourself, not as one of society’s winners, but as part of the growing “left behind” community
  2. Even if some of your skills remain unmatched by robots or AIs, these skills may have played less of a role than you thought in your past successes; some of these past successes may also have involved elements of good fortune, or personal connections, and so on. These auxiliary factors may give you a different outcome the next time you “roll the dice” and try to change from one business opportunity to another. Once again, you may find yourself unexpectedly in the social grouping left behind by technological change
  3. Even if you personally do well in the turmoil of increased job losses and economic transformation, what about all the people that matter a lot to you, such as family members and special friends? Is your personal success going to be sufficient that you can provide a helping hand to everyone to whom you feel a tie of closeness? Or are you prepared to stiffen your attitudes and to break connections with people from these circles of family and friends, as they become “left behind”?
  4. Many people who end up as left behinds will suffer physical, mental, or emotional pain, potentially including what are known as “deaths of despair”. Are you prepared to ignore all that suffering?
  5. Some of the left behinds may be inclined to commit crimes, to acquire some of the goods and services from which they are excluded by their state of relative poverty. That implies that security measures will have to be stepped up, including strict borders. You might be experiencing a life of material abundance, but with the drawback of living inside a surveillance-state society that is psychologically embittered
  6. Some of the left behinds might go one step further, obtaining dangerous weapons, leading to acts of mass terrorism. In case they manage to access truly destructive technologies, the result might be catastrophic harm or even existential destruction.

In deciding between different social structures, it can be helpful to adopt an approach proposed by the philosopher John Rawls, known as “the veil of ignorance”. In this approach, we are asked to set aside our prior assumptions about which role in society we will occupy. Instead, we are asked to imagine that we have an equal probability of obtaining any of the positions within that society.

For example, consider a society we’ll call WLB, meaning “with left behinds”, in which 995 people out of every one thousand are left behind, and five of every thousand have an extremely good standard of living (apart from having to deal with the problems numbered 4, 5, and 6 in the above list). Consider, as an alternative, a society we’ll call NLB, “no one left behind”, in which everyone has a quality of living that can be described as “good” (if, perhaps, not as “extremely good”).

If we don’t know whether we’ll be one of the fortunate 0.5% of the population, would we prefer society WLB or society NLB?
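One way to make the comparison concrete is a quick expected-value calculation. The utility numbers below are invented purely for illustration; only the population proportions come from the scenario above:

```python
# Illustrative expected-utility comparison behind Rawls's veil of ignorance.
# The utility values are assumptions chosen purely for illustration.

u_extremely_good = 100  # the fortunate 5 per 1000 in society WLB
u_left_behind = 5       # the remaining 995 per 1000 in society WLB
u_good = 60             # everyone in society NLB

p_fortunate = 5 / 1000  # probability of landing in the fortunate group

expected_wlb = p_fortunate * u_extremely_good + (1 - p_fortunate) * u_left_behind
expected_nlb = u_good

print(f"Expected utility, WLB: {expected_wlb:.2f}")  # roughly 5.5
print(f"Expected utility, NLB: {expected_nlb:.2f}")  # 60.00
```

Rawls himself argued for the stronger “maximin” rule: compare the worst-case positions (5 versus 60 in this toy example), which points to the same preference for NLB even without assigning probabilities.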

The answer might seem obvious: from behind the veil of ignorance, we should strongly prefer NLB. However, this line of argument is subject to two objections. First, someone might feel sure that they really will end up as part of the 0.5%. But that’s where the problems numbered 1 and 2 in the above list should cause a reconsideration.

The second objection deserves more attention. It is that a society such as NLB may be an impossibility. Attempts to create NLB might unintentionally lead to even worse outcomes. After all, bloody revolutions over recent centuries have often veered catastrophically out of control. Self-described “vanguards” of a supposedly emergent new society have turned into brutal demagogues. Attempts to improve society through the ballot box have frequently failed too – prompting the acerbic remark by former British Prime Minister Margaret Thatcher that (to paraphrase) “the problem with socialism is that you eventually run out of other people’s money”.

It is the subject of the remainder of this essay to assess whether NLB is actually practical. (If not, we might have to throw our efforts behind “Accept” after all.)

A3: “Resist”

The “resist” idea starts from a good observation. Just because something is possible, it doesn’t mean that society should make it happen. In philosophical language, a could does not imply a should.

Consider some examples. Armed forces in the Second World War could have deployed chemical weapons that emitted poison gas – as had happened during the First World War. But the various combatants decided against that option. They decided: these weapons should not be used. In Victorian times, factory owners could have employed young children to operate dangerous machinery with their nimble fingers, but society decided, after some deliberation, that such employment should not occur. Instead, children should attend school. More recently, nuclear power plants could have been constructed with scant regard to safety, but, again, society decided that should not happen, and that safety was indeed vital in these designs.

Therefore, just because new technologies could be developed and deployed to produce various goods and services for less cost than human workers, there’s no automatic conclusion in favour of that happening. Just as factory owners were forbidden from employing young children, they could also be forbidden from employing robots. Societal attitudes matter.

In this line of thinking, if replacing humans with robots in the workplace will have longer term adverse effects, society ought to be able to decide against that replacement.

But let’s look more closely at the considerations in these two cases: banning children from factories, and banning robots from factories. There are some important differences:

  • The economic benefits to factory owners from employing children were significant but were declining: newer machinery could operate without requiring small fingers to interact with it
  • The economy as a whole needed more workers who were well educated; therefore it made good economic sense for children to attend school rather than work in factories
  • The economic benefits to factory owners from deploying robots are significant and are increasing: newer robots can work at even higher levels of efficiency and quality, and cost less to operate
  • The economy as a whole has less need of human workers, so there is no economic argument in favour of prioritising the employment and training of human workers instead of the deployment of intelligent automation.

Moreover, it’s not just “factory owners” who benefit from being able to supply goods and services at lower cost and higher quality. Consumers of these goods and services benefit too. Consider again the examples of healthcare, education, travel, accommodation, communications, and entertainment. Imagine choices between:

  • High-cost, low-quality healthcare, provided mainly by humans, versus low-cost, high-quality healthcare, provided in large part by intelligent automation
  • High-cost, low-quality education, provided mainly by humans, versus low-cost, high-quality education, provided in large part by intelligent automation
  • And so on.

The “resist” option would therefore imply acceptance of at least part of the “simplify” option (discussed in more depth later): people in that circumstance would need to accept lower-quality provision of healthcare, education, travel, accommodation, communications, and entertainment.

In other words, the resist option implies saying “no” to many possible elements of technological progress and the humanitarian benefits arising from it.

In contrast, the “steer” option tries to say “yes” to most of the beneficial elements of technological progress, whilst still preserving sufficient roles for humans in workforces. Let’s look more closely at it.

A4: “Steer”

The “steer” option tries to make a distinction between:

  • Work tasks that are mainly unpleasant or tedious, and which ought to be done by intelligent automation rather than by humans
  • Work tasks that can be meaningful or inspiring, especially when the humans carrying out these tasks have their abilities augmented (but not superseded) by intelligent automation (this concept was briefly mentioned in discussion of the “Disbelieve” option).

The idea of “steer” is to prioritise the development and adoption of intelligent automation technologies that can replace human workers in the first, tedious, category of tasks, whilst augmenting humans so they can continue to carry out the second, inspiring, category of tasks.

This also means a selective resistance to improvements in automation technologies, namely to those improvements which would result in the displacement of humans from the second category of tasks.

This proposal has been championed by, for example, the Stanford economist Erik Brynjolfsson. Brynjolfsson has coined the phrase “the Turing Trap”, referring to what he sees as a mistaken direction in the development of AI, namely trying to create AIs that can duplicate (and then exceed) human capabilities. Such AIs would be able to pass the “Turing Test” that Alan Turing famously described in 1950, but that would lead, in Brynjolfsson’s view, to a fearsome “peril”:

Building machines designed to pass the Turing Test and other, more sophisticated metrics of human-like intelligence… is a path to unprecedented wealth, increased leisure, robust intelligence, and even a better understanding of ourselves. On the other hand, if [that] leads machines to automate rather than augment human labor, it creates the risk of concentrating wealth and power. And with that concentration comes the peril of being trapped in an equilibrium where those without power have no way to improve their outcomes.

Here’s how Brynjolfsson introduces his ideas:

Creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.

But not all types of AI are human-like – in fact, many of the most powerful systems are very different from humans – and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.

Accordingly, here are his recommendations:

The future is not preordained. We control the extent to which AI either expands human opportunity through augmentation or replaces humans through automation. We can work on challenges that are easy for machines and hard for humans, rather than hard for machines and easy for humans. The first option offers the opportunity of growing and sharing the economic pie by augmenting the workforce with tools and platforms. The second option risks dividing the economic pie among an ever-smaller number of people by creating automation that displaces ever-more types of workers.

While both approaches can and do contribute to progress, too many technologists, businesspeople, and policymakers have been putting a finger on the scales in favor of replacement. Moreover, the tendency of a greater concentration of technological and economic power to beget a greater concentration of political power risks trapping a powerless majority into an unhappy equilibrium: the Turing Trap….

The solution is not to slow down technology, but rather to eliminate or reverse the excess incentives for automation over augmentation. In concert, we must build political and economic institutions that are robust in the face of the growing power of AI. We can reverse the growing tech backlash by creating the kind of prosperous society that inspires discovery, boosts living standards, and offers political inclusion for everyone. By redirecting our efforts, we can avoid the Turing Trap and create prosperity for the many, not just the few.

But a similar set of questions arises for the “steer” option as for the more straightforward “resist” option. Resisting some technological improvements, in order to preserve employment opportunities for humans, means accepting a lower quality and higher cost of the corresponding goods and services.

Moreover, that resistance would need to be coordinated worldwide. If you resist some technological innovations but your competitors accept them, and replace expensive human workers with lower cost AI, their products can be priced lower in the market. That would drive you out of business – unless your community is willing to stick with the products that you produce, relinquishing the chance to purchase cheaper products from your competitors.

Therefore, let’s now consider what would be involved in such a relinquishment.

A5: “Simplify”

If some people prefer to adopt a simpler life, without some of the technological wonders that the rest of us expect, that choice should be available to them.

Indeed, human society has long upheld the possibility of choice. Communities are able, if they wish, to make their own rules about the adoption of various technologies.

For example, organisers of various sports set down rules about which technological enhancements are permitted, within those sports, and which are forbidden. Bats, balls, protective equipment, sensory augmentation – all of these can be restricted to specific dimensions and capabilities. The restrictions are thought to make the sports better.

Similarly, goods sold in various markets can carry markings that designate them as being manufactured without the use of certain methods. Thus consumers can see an “organic” label and be confident that certain pesticides and fertilisers have been excluded from the farming methods used to produce these foods. Depending on the type of marking, there can also be warranties that these foods contain no synthetic food additives and have not been processed using irradiation or industrial solvents.

Consider also the Amish, a group of traditionalist communities from the Anabaptist tradition, with their origins in Swiss German and Alsatian (French) cultures. These communities have made many decisions over the decades to avoid aspects of the technology present in wider society. Their clothing has avoided buttons, zips, or Velcro. They generally own no motor cars, but use horse-drawn carts for local transport. Different Amish communities have at various times forbidden (or continue to forbid) high-voltage electricity, powered lawnmowers, mechanical milking machines, indoor flushing toilets, bathtubs with running water, refrigerators, telephones inside the house, radios, and televisions.

Accordingly, whilst some parts of human society might in the future adopt fuller use of intelligent automation technologies, deeply transforming working conditions, other parts might, Amish-like, decide to abstain. They may say: “we already have enough, thank you”. Whilst people in society as a whole may be unable to find work that pays them good wages, people in these “simplified” communities will be able to look after each other.

Just as Amish communities differ among themselves as to how much external technology they are willing to incorporate into their lives, different “simplified” communities could likewise make different choices as to how much they adopt technologies developed outside their communities. Some might seek to become entirely self-sufficient; others might wish to take advantage of various medical treatments, educational software, transportation systems, robust housing materials, communications channels, and entertainment facilities provided by the technological marvels created in wider society.

But how will these communities pay for these external goods and services? In order to be able to trade, what will they be able to create that is not already available in better forms outside their communities, where greater use is made of intelligent automation?

We might consider tourist visits, organic produce, or the equivalent of handmade ornaments. But, again, what will make these goods more attractive, to outsiders, than the abundance of goods and services (including immersive virtual reality travel) that is already available to them?

We therefore reach a conclusion: groups that choose to live apart from deeply transformative technologies will likely lack access to many valuable goods and services. It’s possible they may convince themselves, for a while, that they prefer such a lifestyle. However, just as the attitudes of Amish communities have morphed over the decades, so that these groups now see (for example) indoor flushing toilets as a key part of their lives, it is likely that the attitudes of people in these simplified communities will also alter. When facing death from illness, or when facing disruption to their relatively flimsy shelters from extreme weather, they may well find themselves deciding that they prefer, after all, to access more of the fruits of technological abundance.

With nothing to exchange or barter for these fruits, the only way they will receive them is via a change in the operation of the overall economy. That brings us to the sixth and final option from my original list, “enhance”.

Whereas the previous options have looked at various alterations in how technology is developed or applied, “enhance” looks at a different possibility: revising how the outputs and benefits of technology are planned and distributed throughout society. These are revisions to the economy, rather than revisions in technology.

In this vision, with changes in both technology and the economy, everyone will benefit handsomely. Simplicity will remain a choice, for those who prefer it, but it won’t be an enforced choice. People who wish to participate in a life of abundance will be able to make that choice instead, without needing to find especially remunerative employment to pay for it.

I’ll accept, in advance, that many critics may view such a possibility as a utopian fantasy. But let’s not rush to a conclusion. I’ll build my case in stages.

B: Enhancing the operation of the economy

Let’s pick up the conversation with the basics of economics, which is the study of how to deal with scarcity. When important goods and services are scarce, humans can suffer.

Two fundamental economic forces that have enabled astonishing improvements in human wellbeing over the centuries, overcoming many aspects of scarcity, are collaboration and competition (a small worked example follows this list):

  • Collaboration: person A benefits from the skills and services of person B, whereas person B benefits reciprocally from a different set of skills and services of person A; this allows both A and B to specialise, in different areas
  • Competition: person C finds a way to improve the skills and services that they offer to the market, compared to person D, and therefore receives a higher reward – causing person D to consider how to improve their skills and services in turn, perhaps by copying some of the methods and approach of person C.
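To see why these two forces are so productive, here is the promised minimal worked example of the gains from specialisation; the productivity figures (hours needed per unit of output) are hypothetical numbers of my own choosing:

```python
# Gains from specialisation: two people, two goods, eight hours each.
# All productivity figures are hypothetical, chosen purely for illustration.

hours_per_unit = {
    "A": {"bread": 1.0, "cloth": 2.0},   # person A is the better baker
    "B": {"bread": 3.0, "cloth": 1.5},   # person B is the better weaver
}

def output(person, good, hours):
    """Units produced when `person` spends `hours` making `good`."""
    return hours / hours_per_unit[person][good]

# Self-sufficiency: each person splits their 8 hours between both goods.
self_sufficient = {good: output("A", good, 4) + output("B", good, 4)
                   for good in ("bread", "cloth")}

# Specialisation: A bakes all day, B weaves all day, and then they trade.
specialised = {"bread": output("A", "bread", 8),
               "cloth": output("B", "cloth", 8)}

print("Self-sufficient totals:", self_sufficient)  # bread ≈ 5.33, cloth ≈ 4.67
print("Specialised totals:   ", specialised)       # bread = 8.00, cloth ≈ 5.33
```

Total output of both goods rises when each person concentrates on their strength – the basic engine of the collaborative gains just described.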

What I’ve just described in terms of simple interactions between two people is nowadays played out, in practice, via much larger communities, and over longer time periods:

  • Collaboration includes the provision of a social safety net, for looking after individuals who are less capable, older, lack resources, or who have fallen on hard times; these safety nets can operate at the family level, tribe (extended family) level, community level, national level, or international level
  • The prospect of gaining extra benefits from better skills and services leads people to make personal investments, in training and tools, so that they can possess (for a while at least) an advantage in at least one market niche.

Importantly, it needs to be understood that various forms of collaboration and competition can have negative consequences as well as positive ones:

  • A society that keeps extending an unconditional helping hand to someone who avoids taking personal responsibility, or to a group that is persistently dysfunctional, might end up diverting scarce resources away from key social projects, to be squandered for no good purpose
  • In a race to become more economically dominant, other factors may be overlooked, such as social harmony, environmental wellbeing, and other so-called externalities.

In other words, the forces that can lead to social progress can also lead to social harm.

In loose terms, the two sets of negative consequences can be called “failure modes of socialism” and “failure modes of capitalism”, after the two historically significant frameworks in economic theory. These two broad frameworks are covered in the subsections ahead, along with the key failure modes in each case. After that, we’ll consider models that aspire to transcend both sets of failures by delivering “the best of both worlds”.

To look ahead, it is the “best of both worlds” model that has the potential to be the best solution to the Economic Singularity.

B1: Need versus greed?

When there is a shortage of some product or service, how should it be distributed? To the richest, the strongest, the people who shout the loudest, the special friends of the producers, or to whom?

One answer to that question is given in the famous slogan, “From each according to his ability, to each according to his needs”. In other words, each person should receive whatever they truly need, be it food, clothing, healthcare, accommodation, transportation, and so on.

That slogan was popularised by Karl Marx in an article he wrote in 1875, but earlier political philosophers had used it in the 1840s. Indeed, an antecedent can be found in the Acts of the Apostles in the New Testament, referring to the sharing of possessions within one of the earliest groups of Christian believers:

All the believers were one in heart and mind. No one claimed that any of their possessions was their own, but they shared everything they had… There were no needy persons among them. For from time to time those who owned land or houses sold them, brought the money from the sales and put it at the apostles’ feet, and it was distributed to anyone who had need.

Significantly, Marx foresaw that principle of full redistribution as being possible only after technology (“the productive forces”) had sufficiently “increased”. It was partly for that reason that Joseph Stalin, despite being an avowed follower of Marx, wrote a different principle into the 1936 constitution of the Soviet Union: “From each according to his ability, to each according to his work”. Stalin’s justification was that the economy had not yet reached the necessary level of production, and that serious human effort was first needed to reach peak industrialisation.

This highlights one issue with the slogan, and with visions of society that seek to place that slogan at the centre of their economy: before products and services can be widely distributed, they need to be created. A preoccupation with distribution will fail unless it is accompanied by sufficient attention to creation. Rather than fighting over how a pie is divided, it’s important to make the pie larger. Then there will be much more to share.

A second issue is in the question of what counts as a need. Clothing is a need, but what about the latest fashion? Food is a need, but what about the rarest of fruits and vegetables? And what about “comfort food”: is that a need? Healthcare is a need, but what about a heart transplant? Transportation is a need, but what about intercontinental aeroplane flights?

A third issue with the slogan is that a resource assigned to someone’s perceived need is, potentially, a resource denied to a more productive use. Money spent to provide someone with what they claim they need might instead have been invested elsewhere to create more resources, allowing more people to have what they claim they need.

Thus the oft-admired saying attributed to Mahatma Gandhi, “The world has enough for everyone’s needs, but not everyone’s greed”, turns out to be problematic in practice. Who is to say what is ‘need’ and what is ‘greed’? Are desires for luxury goods always to be denigrated as ‘greed’? Isn’t life about enjoyment, vitality, and progress, rather than just calmly sitting down in a state of relative poverty?

B2: Socialism and its failures

There are two general approaches to handling the problems just described: centralised planning, and free-market allocation.

With centralised planning, a group of reputedly wise people:

  • Keep on top of information about what the economy can produce
  • Keep on top of information about what people are believed to actually need
  • Direct the economy so that it does a better job of producing what it has been decided that people need.

Therefore a central planner may dictate that more shoes of a certain type need to be produced. Or that drinks should be manufactured with less sugar in them, that particular types of power stations should be built, or that particular new drugs should be created.

That’s one definition of socialism: representatives of the public direct the economy, including the all-important “means of production” (factories, raw materials, infrastructure, and so on), so that the assumed needs of all members of society are met.

However, when applied widely within an economy, centralised planning approaches have often failed abysmally. Assumptions about what people needed often proved wrong, or out-of-date. Indeed, members of the public often changed their minds about what products were most important for them, especially after new products came into use, and their upsides and downsides could be more fully appreciated. Moreover, manufacturing innovations, such as new drugs, or new designs for power stations, could not be achieved simply by wishing them or “planning” them. Finally, people working in production roles often felt alienated, lacking incentives to apply their best ideas and efforts.

That’s where the alternative coordination mechanism – involving free markets – often fared better (despite problems of its own, which we’ll review in due course). The result of free markets has been significant improvements in the utility, attractiveness, performance, reliability, and affordability of numerous types of goods and services. As an example, modern supermarkets are one of the marvels of the world, being stocked from floor to ceiling with all kinds of items to improve the quality of daily life. People around the globe have access to a vast variety of all-around nourishment and experience that would have astonished their great-grandparents.

In recent decades, there have been similar rounds of sustained quality improvement and cost reduction for personal computers, smartphones, internet access, flatscreen TVs, toys, kitchen equipment, home and office furniture, clothing, motor cars, aeroplane tickets, solar panels, and much more. The companies that found ways to improve their goods and services flourished in the marketplace, compelling their competitors to find similar innovations – or go out of business.

It’s no accident that the term “free market” contains the adjective “free”. The elements of a free market which enable it to produce a stream of quality improvements and cost reductions include the following freedoms:

  1. The freedom for companies to pursue profits – under the recognition that the prospect of earning profits can incentivise sustained diligence and innovation
  2. The freedom for companies to adjust the prices for their products, and to decide by themselves the features contained in these products, rather than following the dictates of any centralised planner
  3. The freedom for groups of people to join together and start a new business
  4. The freedom for companies to enter new markets, rather than being restricted to existing product lines; new competitors keep established companies on their toes
  5. The freedom for employees to move to new roles in different companies, rather than being tied to their existing employers
  6. The freedom for companies to explore multiple ways to raise funding for their projects
  7. The freedom for potential customers to not buy products from established vendors, but to switch to alternatives, or even to stop using that kind of product altogether.

What’s more, the above freedoms are permissionless in a free market. No one needs to apply for a special licence from central authorities before one of these freedoms becomes available.

Any political steps that would curtail the above freedoms need careful consideration. The results of such restrictions could (and often do) include:

  • A disengaged workforce, with little incentive to apply their inspiration and perspiration to the tasks assigned to them
  • Poor responsiveness to changing market interest in various products and services
  • Overproduction of products for which there is no market demand
  • Companies having little interest in exploring counterintuitive combinations of product features, novel methods of assembly, new ways of training or managing employees, or other innovations.

Accordingly, anyone who wishes to see the distribution of high-quality products to the entire population needs to beware of curtailing the freedoms of entrepreneurs and innovators. That would be taking centralised planning too far.

That’s not to say that the economy should dispense with all constraints. That would raise its own set of deep problems – as we’ll review in the next subsection.

B3: Capitalism and its failures

Just as there are many definitions of socialism, there are many definitions of capitalism.

Above, I offered this definition of socialism: an economy in which production is directed by representatives of the public, with the goal that the assumed needs of all members of society are met. For capitalism, at least parts of the economy are directed, instead, by people seeking returns on the capital they invest. This involves lots of people making independent choices, of the types I have just covered: choices over prices, product features, types of product, areas of business to operate within, employment roles, manufacturing methods, partnership models, ways of raising investment, and so on.

But these choices depend on various rules being set and observed by society:

  1. Protection of property: goods and materials cannot simply be stolen, but require the payment of an agreed price
  2. Protection of intellectual property: various novel ideas cannot simply be copied, but require, for a specified time, the payment of an agreed licence fee
  3. Protection of brand reputation: companies cannot use misleading labelling or other trademarked imagery to falsely imply an association with another existing company with a good reputation
  4. Protection of contract terms: when companies or individuals enter into legal contracts, regarding employment conditions, supply timelines, fees for goods and services, etc., penalties for any breach of contract can be enforced
  5. Protection of public goods: shared items such as clean air, usable roads, and general safety mechanisms need to be protected against decay.

These protections all require the existence and maintenance of a legal system in which justice is available to everyone – not just to the people who are already well-placed in society.

These are not the only preconditions for the healthy operation of free markets. The benefits of these markets also depend on the existence of viable competition, which prevents companies from resting on their laurels. However, seeking an easier life for themselves, companies may be tempted to organise themselves into cartels, with agreed pricing or with built-in product obsolescence. The extreme case of a cartel is a monopoly, in which all competitors have gone out of business, or have been acquired by the leading company in an industry. A monopoly lacks incentive to lower prices or to improve product quality. A related problem is “crony capitalism”, in which governments preferentially award business contracts to companies with personal links to government ministers. The successful operation of a free market depends, therefore, upon society’s collective vigilance to notice and break up cartels, to prevent the misuse of monopoly power, and to avoid crony capitalism.

Further, even when markets do work well, in ways that provide short-term benefits to both vendors and customers, the longer-term result can be profoundly negative. So-called “commons” resources can be driven into a state of ruin by overuse. Examples include communal grazing land, the water flowing in a river, fish populations, and herds of wild animals. All individual users of such a resource have an incentive to take from it, either to consume it themselves, or to include it in a product to be sold to a third party. As the common stock declines, the incentive for each individual to take more increases, so that they’re not excluded. Eventually, the grassland is bare, the river has dried up, the stocks of fish have been obliterated, or the passenger pigeon, great auk, monk seal, sea mink, etc., have been hunted to extinction. To guard against these perils of short-termism, various sorts of protective mechanisms need to be created, such as quotas or licences, with clear evidence of their enforcement.
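The dynamic is easy to demonstrate in a toy model. Here is a minimal sketch – every parameter is a hypothetical placeholder, chosen only to illustrate the mechanism – comparing an unregulated commons with one protected by an enforced per-user quota:

```python
# A toy "tragedy of the commons": a shared stock regrows each season,
# and a number of users each harvest from whatever remains.
# All parameters are hypothetical, chosen purely for illustration.

def simulate(seasons, users, quota=None, stock=1000.0, regrowth=0.15):
    """Return the stock remaining after the given number of seasons."""
    for _ in range(seasons):
        stock *= 1.0 + regrowth                # natural regrowth
        for _ in range(users):
            desired = stock * 0.02             # each user grabs 2% of what remains
            stock -= desired if quota is None else min(desired, quota)
    return stock

print(f"Unregulated stock after 50 seasons:     {simulate(50, 20):.2f}")
print(f"Quota-protected stock after 50 seasons: {simulate(50, 20, quota=5.0):.2f}")
```

In the unregulated run the stock collapses towards zero; with the quota it is sustained. Real fisheries and grazing lands are vastly more complicated, but the underlying incentive problem is the same.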

What about when suppliers provide shoddy goods? In some cases, members of a society can learn which suppliers are unreliable, and therefore cease purchasing goods from them. In these cases, the market corrects itself: in order to continue in business, poor suppliers need to make amends. But when larger groups of people are involved, there are three drawbacks with just relying on this self-correcting mechanism:

  1. A vendor who deceives one purchaser in one vicinity can relocate to a different vicinity – or can simply become “lost in the crowd” – before deceiving another purchaser
  2. A vendor who produces poor-quality goods on a large scale can simultaneously impact lots of people’s wellbeing – as when a restaurant skimps on health and safety standards, and large numbers of diners suffer food poisoning as a result
  3. It may take a long time before defects in someone’s goods or services are discovered – for example, if no funds are available for an insurance payout that was contracted many years earlier.

It’s for such reasons that societies generally decide to augment the self-correction mechanisms of the free market with faster-acting preventive mechanisms, including requirements for people in various trades to conform to sets of agreed standards and regulations.

A final cause of market failure is perhaps the most significant: the way in which market exchanges fail to take “externalities” into account. A vendor and a purchaser may both benefit when a product is created, sold, and used, but other people who are not party to that transaction can suffer as a side effect – if, for example, the manufacturing process emits loud noises, foul smells, noxious gases, or damaging waste products. Since they are not directly involved in the transaction, these third parties cannot influence the outcome simply by ceasing to purchase the goods or services involved. Instead, different kinds of pressure need to be applied: legal restrictions, taxes, or other penalties or incentives.
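As a minimal worked example – with hypothetical numbers of my own choosing – here is how a corrective tax equal to the external cost brings a vendor’s incentives into line with society’s:

```python
# A negative externality, and a corrective ("Pigouvian") tax that
# internalises it. All figures are hypothetical, purely for illustration.

unit_price      = 10.0   # what the purchaser pays per unit
production_cost = 6.0    # what the vendor spends per unit
external_cost   = 3.0    # pollution damage per unit, borne by third parties

vendor_profit  = unit_price - production_cost
social_surplus = vendor_profit - external_cost   # counting the bystanders too

print(f"Profit as the vendor sees it: {vendor_profit:.2f} per unit")
print(f"True surplus for society:     {social_surplus:.2f} per unit")

# A tax (or equivalent penalty) of `external_cost` per unit makes the
# vendor's own calculation match the social one:
taxed_profit = unit_price - production_cost - external_cost
print(f"Vendor profit under the tax:  {taxed_profit:.2f} per unit")
```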

It’s not just negative externalities that can cause free markets to misbehave. Consider also positive externalities, where an economic interaction has a positive impact on people who do not pay for it. Some examples:

  1. If a company purchases medical vaccinations for its employees, to reduce their likelihood of becoming ill with the flu, others in the community benefit too, since there will be fewer ill people in that neighbourhood, from whom they might catch flu
  2. If a company purchases on-the-job training for an employee, the employee may pass on to family members and acquaintances, free of charge, tips about some of the skills they learned
  3. If a company pays employees to carry out fundamental research, which is published openly, people in other companies can benefit from that research too, even though they did not pay for it.

The problem here is that the company may decide not to go ahead with such an investment, since they calculate that the benefits for them will not be sufficient to cover their costs. The fact that society as a whole would benefit, as a positive externality, generally does not enter their calculation.

This introduces the important concept of public goods. When there’s insufficient business case for an individual investor to supply the funding to cover the costs of a project, that project won’t get off the ground – unless there’s a collective decision for multiple investors to share in supporting it. Facilitating that kind of collective decision – one that would benefit society as a whole, rather than just a cartel of self-interested companies – takes us back to the notion of central planning. Central planners can consider longer-term possibilities – in ways that, as noted, are problematic for a free market to achieve – and can design and oversee what is known as industrial strategy or social strategy.

B4: The mixed market

To recap the last two subsections: there are problems with over-application of central planning, and there are also problems with free markets that have no central governance.

The conclusion to draw from this, however, isn’t to give up on both these ideas. It’s to seek an appropriate combination of these ideas. That combination is known as “the mixed market”. It involves huge numbers of decisions being taken locally, by elements of a free market, but all subject to democratic political oversight, aided by the prompt availability of information about the impacts of products in society and on the environment.

This division of responsibility between the free market and political oversight is described particularly well in the writing of political scientists Jacob Hacker and Paul Pierson. They offer generous praise to something they say “may well be the greatest invention in history”. Namely, the mixed economy:

The combination of energetic markets and effective governance, deft fingers and strong thumbs.

Their reference to “deft fingers and strong thumbs” expands Adam Smith’s famous metaphor of the invisible hand, which is said to guide the free market. Hacker and Pierson develop their idea as follows:

Governments, with their capacity to exercise authority, are like thumbs: powerful but lacking subtlety and flexibility. The invisible hand is all fingers. The visible hand is all thumbs. Of course, one wouldn’t want to be all thumbs. But one wouldn’t want to be all fingers, either. Thumbs provide countervailing power, constraint, and adjustment to get the best out of those nimble fingers…

The mixed economy… tackles a double bind. The private markets that foster prosperity so powerfully nonetheless fail routinely, sometimes spectacularly so. At the same time, the government policies that are needed to respond to these failures are perpetually under siege from the very market players who help to fuel growth. That is the double bind. Democracy and the market – thumbs and fingers – have to work together, but they also need to be partly independent from each other, or the thumb will cease to provide effective counterpressure to the fingers.

I share the admiration shown by Hacker and Pierson for the mixed market. I also agree that it’s hard to get the division of responsibilities right. Just as markets can fail, so also can politicians fail. But just as the fact of market failures should not be taken as a reason to dismantle free markets altogether, so should the fact of political failures not be taken as a reason to dismantle all political oversight of markets. Each of these two sorts of fundamentalist approaches – anti-market fundamentalism and pro-market fundamentalism – is dangerously one-sided. The wellbeing of society requires, not so much the reduction of government, but the rejuvenation of government, in which key aspects of government operation are improved:

  1. Smart, agile, responsive regulatory systems
  2. Selected constraints on the uses to which various emerging new technologies can be put
  3. “Trust-busting”: measures to prevent large businesses from misusing monopoly power
  4. Equitable redistribution of the benefits arising from various products and services, for the wellbeing and stability of society as a whole
  5. Identification, protection, and further development of public goods
  6. Industrial strategy: identifying directions to be pursued, and providing suitable incentives so that free market forces align toward these directions.

None of what I’m saying here should be controversial. However, both fundamentalist outlooks I mentioned often exert a disproportionate influence over political discourse. Part of the reason for this is explained at some length in the Hacker and Pierson book that contains their praise for the mixed market. The title of that book is significant: American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity.

It’s not just that the merits of the mixed market have been “forgotten”. It’s that these merits have been deliberately obscured by a sustained ideological attack. That attack serves the interest of various potentially cancerous complexes that seek to limit governmental oversight of their activities:

  • Big Tobacco, which tends to resist government oversight of the advertising of products containing tobacco
  • Big Oil, which tends to resist government oversight of the emissions of greenhouse gases
  • Big Armaments, which tends to resist government oversight of the growth of powerful weapons of mass destruction
  • Big Finance, which tends to resist government oversight of “financial weapons of mass destruction” (to use a term coined by Warren Buffett)
  • Big Agrotech, which tends to resist government oversight of new crops, new fertilisers, and new weedkillers
  • Big Media, which tends to resist government oversight of press standards
  • Big Theology, which resists government concerns about indoctrination and manipulation of children and others
  • Big Money: individuals, families, and corporations with large wealth, who tend to resist the power of government to levy taxes on them.

All these groups stand to make short-term gains if they can persuade the voting public that the power of government needs to be reduced. It is therefore in the interest of these groups to portray the government as being inevitably systematically incompetent – and, at the same time, to portray the free market as being highly competent. But for the sake of society as a whole, these false portrayals must be resisted.

In summary: better governments can oversee economic frameworks in which better goods and services can be created (including all-important public goods):

  • Frameworks involving a constructive combination of entrepreneurial flair, innovative exploration, and engaged workforces
  • Frameworks that prevent the development of any large societal cancers that would divert too many resources to localised selfish purposes.

In turn, for the mixed model to work well, governments themselves must be constrained through oversight, by well-informed independent press, judiciary, academic researchers, and diverse political groupings, all supported by a civil service and challenged on a regular basis by free and fair democratic elections.

That’s the theory. Now for some complications – and solutions to the complications.

B5: Technology changes everything

Everything I’ve written in this section B so far makes sense independently of the oncoming arrival of the Economic Singularity. But the challenges posed by the Economic Singularity make it all the more important that we learn to temper the chaotic movements of the economy, so that it operates responsively and thoughtfully under a high-calibre mixed-market model.

Indeed, rapidly improving technology – especially artificial intelligence – is transforming the landscape, introducing new complications and new possibilities:

  1. Technology enables faster and more comprehensive monitoring of overall market conditions – including keeping track of fast-changing public expectations, as well as any surprise new externalities of economic transactions; it can thereby avoid some of the sluggishness and short-sightedness that bedevilled older (manual) systems of centralised planning and the oversight of entire economies
  2. Technology gives more information more quickly, not only to the people planning production (at either central or local levels), but also to consumers of products, with the result that vendors of better products will drive vendors of poorer products out of the market more quickly (this is the “winner takes all” phenomenon)
  3. With advanced technology playing an ever-increasing role in determining the success or failure of products, the companies that own and operate the most successful advanced technology platforms will become among the most powerful forces on the planet
  4. As discussed earlier (in Section A), technology will significantly reduce the opportunities for people to earn large salaries in return for work that they do
  5. Technology enables more goods to be produced at much lower cost – including cheaper clean energy, cheaper nutritious food, cheaper secure accommodation, and cheaper access to automated education systems.

Here, points 2, 3, and 4 raise challenges, leading to a world with greater inequalities:

  • A small number of companies, and a small number of people working for them, will do very well in terms of income, and they will have unprecedented power
  • The majority of companies, and the majority of people, will experience various aspects of failure and being “left behind”.

But points 1 and 5 promote a world where governance systems perform better, and where people need much less money in order to experience a high quality of wellbeing. They highlight the possibility of the mixed market model working better, distributing more goods and services to the entire population, and thereby meeting a wider set of needs. This comprehensive solution is what is meant by the word “enhance”, as in the name of my preferred solution to the Economic Singularity.

However, these improvements will depend on societies changing their minds about what matters most – the things that need to be closely measured, monitored, and managed. In short, it will depend on some fundamental changes in worldview.

B6: Measuring what matters most

The first key change in worldview is that the requirement for people to seek paid employment belongs only to a temporary phase in the evolution of human culture. That phase is coming to an end. From now on, the basis for societies to be judged as effective or defective shouldn’t be the proportion of people who have positions of well-paid employment. Instead, it should be the proportion of people who can flourish, every single day of their lives.

Moreover, measurements of prosperity must include adequate analysis of the externalities (both positive and negative) of economic transactions – externalities which market prices often ignore, but which modern AI systems can measure and monitor more accurately. These measurements will continue to include features such as wealth and average lifespan, as monitored by today’s politicians, but they’ll put a higher focus on broader measurements of wellbeing, thereby transforming where politicians apply most of their attention.

In parallel, we should look forward to a stage-by-stage transformation of the social safety net – so that all members of society have access to the goods and services that are fundamental to experiencing an agreed base level of human flourishing, within a society that operates sustainably and an environment that remains healthy and vibrant.

I therefore propose the following high-level strategic direction for the economy: prioritise the reduction of prices for all goods and services that are fundamental to human flourishing, where the prices reflect all the direct and indirect costs of production.

This kind of price reduction is already taking place for a range of different products, such as many services delivered online, but there are too many other examples where prices are rising (or dropping too slowly).

In other words, the goal of the economy should no longer be to increase GDP – the gross domestic product, which rises with higher prices and greater commercial activity. Instead, the goal should be to reduce the true costs of everything that is required for a good life, including housing, food, education, security, and much more. This will be part of taking full advantage of the emerging tech-driven abundance.

It is when prices come down that politicians should celebrate – not when prices go up, or when profit margins rise, or when the stock market soars.
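To make that kind of celebration measurable, here is a minimal sketch of a “cost of flourishing” index: the total true cost – direct price plus unpriced externalities – of a basket of fundamentals. The basket items and all the numbers are hypothetical placeholders of my own choosing:

```python
# A toy "cost of flourishing" index. All items and figures are
# hypothetical placeholders, purely for illustration.

basket = {
    # item: (market price per month, estimated externality cost per month)
    "housing":   (900.0, 40.0),
    "food":      (300.0, 25.0),
    "education": (150.0,  0.0),
    "energy":    (120.0, 60.0),
}

def flourishing_cost(basket):
    """Sum the direct prices and indirect (externality) costs of the basket."""
    return sum(price + externality for price, externality in basket.values())

print(f"Cost-of-flourishing index: {flourishing_cost(basket):.2f}")  # 1595.00
# Unlike GDP, progress here means this number going DOWN over time.
```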

The end target of this strategy is that all goods and services fundamental to human flourishing should, in effect, have zero price. But for the foreseeable future, many items will continue to have a cost.

For those goods and services which carry prices above zero, combinations of three sorts of public subsidies can be made available (a minimal sketch follows this list):

  • An unconditional payment, sometimes called a UBI – an unconditional basic income – can be made available to all citizens of the country
  • The UBI can be augmented by conditional payments, dependent on recipients fulfilling requirements agreed by society, such as, perhaps, education or community service
  • There can be individual payments for people with special needs, such as particular healthcare requirements.
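Here is the promised sketch of how those three components might combine for a single individual. The names and amounts are hypothetical placeholders, not concrete proposals:

```python
# Combining the three subsidy components for one individual.
# All amounts are hypothetical placeholders, purely for illustration.

def monthly_support(base_income, conditional_topups, special_needs_costs):
    """Total of the unconditional payment, any earned conditional top-ups,
    and individual special-needs payments."""
    total = base_income                                        # UBI: paid to everyone
    total += sum(amount for met, amount in conditional_topups if met)
    total += sum(special_needs_costs)                          # e.g. healthcare costs
    return total

total = monthly_support(
    base_income=800.0,                       # hypothetical UBI
    conditional_topups=[(True, 150.0),       # e.g. enrolled in education
                        (False, 150.0)],     # e.g. community service, not taken up
    special_needs_costs=[220.0],             # hypothetical healthcare requirement
)
print(f"Total monthly support: {total:.2f}")  # 800 + 150 + 220 = 1170.00
```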

Such suggestions are not new, of course. Typically they face five main objections:

  1. A life without paid work will be one devoid of meaning – humans will atrophy as a result
  2. Giving people money for nothing will encourage idleness and decadence, and will be a poor use of limited resources
  3. A so-called “basic” income won’t be sufficient; what should be received by people who cannot (through no fault of their own) earn a good salary isn’t a basic income but a generous income (hence a UGI rather than a UBI) that supports a good quality of life rather than a basic existence
  4. The large redistribution of money to pay for a widespread UGI will cripple the rest of the economy, forcing taxpayers overseas; alternatively, if the UGI is funded by printing more money (as is sometimes proposed), this will have adverse inflationary implications
  5. Although a UBI might be affordable within a country that has an advanced developed economy, it will prove unaffordable in less developed countries, where the need for a UBI will be equally important; indeed, an inflationary spiral in countries that do pay their citizens a UBI will result in tougher balance-of-payments situations in the other countries of the world.

Let’s take these objections one at a time.

B7: Options for universal income

The suggestion that a life without paid work will have no possibility of deep meaning is, when you reflect on it, absurd, given the many profound experiences that people often have outside of the work context. The fact that this objection is raised so often is illuminating: it suggests a pessimism about one’s fellow human beings. People raising this objection usually say that they, personally, could have a good life without paid work; it’s just that “ordinary people” would be at a loss and go downhill, they suggest. After all, these critics may continue, look at how people often waste welfare payments they receive. Which takes us to the second objection on the list above.

However, the suggestion that unconditional welfare payments result in idleness and decadence has little evidence to support it. Many people who receive unconditional payments from the state – such as pension payments in their older age – live a fulfilling, active, socially beneficial life, so long as they remain in good health.

The proviso “remain in good health” is important here. People who abuse welfare payments often suffer from a prior emotional malaise, such as depression, or addictive behaviours. Accordingly, the solution to welfare payments being (to an extent) wasted isn’t to withdraw these payments, but to address the underlying emotional malaise. This can involve:

  • Making society healthier generally, via a fuller and wider share of the benefits of tech-enabled abundance
  • Highlighting credible paths forward to much better lifestyles in the future, as opposed to people seeing only a bleak future ahead of them
  • High-quality (but potentially low-cost) mental therapy, perhaps delivered in part by emotionally intelligent AI systems
  • Addressing the person’s physical and social wellbeing, which are often closely linked to their emotional wellbeing.

In any case, worries about “resources being wasted” will gradually diminish, as technology progresses further, removing more and more aspects of scarcity. (Concerns about waste arise primarily when resources are scarce.)

It is that same technological progress that answers the third objection, namely that a UGI will be needed rather than a UBI. The point is that the cost of a UGI soon won’t be much more than the cost of a UBI. That’s provided that the economy has indeed been managed in line with the guiding principle offered earlier, namely the prioritisation of the reduction of prices for all goods and services that are fundamental to human flourishing.

In the meantime, turning to the fourth objection, payments in support of a UGI can come from a selection of the following sources:

  • Stronger measures to counter tax evasion, addressing issues exposed by the Panama Papers as well as unnecessary inconsistencies of different national tax systems
  • Increased licence fees and other “rents” paid by organisations who specially benefit from public assets such as land, the legal system, the educational system, the wireless spectrum, and so on
  • Increased taxes on activities with negative externalities, such as a carbon tax for activities leading to greenhouse gas emissions, and a Tobin tax on excess short-term financial transactions
  • A higher marginal tax on extreme income and/or wealth
  • Reductions in budgets such as healthcare, prisons, and defence, where needs should decline once people’s mental wellbeing has improved
  • Reductions in the budget for the administration of currently overcomplex means-tested benefits.

Some of these increased taxes might encourage business leaders to relocate their businesses abroad. However, it’s in the long-term interest of each country to coordinate on levels of corporation tax, thereby deterring such relocations.

That brings us to the final objection: that a UGI needs, somehow, to be a GUGI – a global universal generous income – which makes it (so it is claimed) particularly challenging.

B8: The international dimension

Just as the relationship between two or more people is characterised by a combination of collaboration and competition, so it is with the relationship between two or more countries.

Sometimes both countries benefit from an exchange of trade. For example, country A might provide low-cost, high-calibre remote workers – software developers, financial analysts, and help-desk staff. In return, country B provides hard currency, enabling people in country A to purchase items of consumer electronics designed in country B.

Sometimes the relationship is more complicated. For example, country C might gain a competitive advantage over country D in the creation of textiles, or in the production of oil, obliging country D to find new ways to distinguish itself on the world market. And in these cases, sometimes country D could find itself being left behind, as a country.

Just as the fast improvements in artificial intelligence and other technologies are complicating the operation of national economies, they are also complicating the operation of the international economy:

  • Countries which used to earn valuable income from overseas due to their remote workers in fields such as software development, financial analysis, and help desks, will find that the same tasks can now be performed better by AI systems, removing the demand for offshore personnel and temporary worker visas
  • Countries whose products and services were previously “nearly good enough” will find that they increasingly lose out to products and services provided by other countries, on account of faster transmission of both electronic and physical goods
  • The tendencies within countries for the successful companies to be increasingly wealthy, leaving others behind, will be mirrored at the international level: successful countries will become increasingly powerful, leaving others behind.

Just as the local versions of these tensions pose problems inside countries, the international versions of these tensions pose problems at the geopolitical level. In both cases, the extreme possibility is that a minority of angry, alienated people might unleash a horrific campaign of terror. A less extreme possibility – which is still one to be avoided – is to exist in a world full of bitter resentment, hostile intentions, hundreds of millions of people seeking to migrate to more prosperous countries, and borders which are patrolled to avoid uninvited immigration.

Just as there is a variety of possible responses to the scenario of the Economic Singularity within one country, there is a similar variety of possible responses to the international version of the problem:

  1. Disbelieve that there is any fundamental new challenge arising. Tell people in countries around the world that their destiny is within their own hands; all they need to do is buckle down, reskill, and find new ways of bringing adequate income to their countries
  2. Accept that there will be many countries losing out, and take comprehensive steps to ensure that migration is carefully controlled
  3. Resist the growth in the use of intelligent automation technologies in industries that are particularly important to various third world countries
  4. Urge people in third world countries to plan to simplify their lifestyles, preparing to exist at a lower degree of flourishing than, say, in the US and the EU, but finding alternative pathways to personal satisfaction
  5. Enhance the mechanisms used globally for the redistribution of the fruits of technology.

You won’t be surprised to hear that I recommend, again, the “enhance” option from this list.

What underpins that conclusion is my prediction that the fruits of forthcoming technological improvements won’t just be sufficient for a good quality of life in a few countries. They’ll enable a good quality of life for everyone all around the world.

I’m thinking of the revolutions that are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities:

  • Nanotech will transform the fields of energy and manufacturing
  • Biotech will transform the fields of agriculture and healthcare
  • Cognotech will transform the fields of education and entertainment
  • Infotech will, by enabling greater application of intelligence, accelerate all the above improvements (and more).

But, once again, these developments will take time. Just as national economies cannot, overnight, move to a new phase in which abundance completely replaces scarcity, so also will the transformation of the international economy require a number of stages. It is the turbulent transitional stages that will prove the most dangerous.

Once again, my recommendation for the best way forwards is the mixed model – local autonomy, aided and supported by an evolving international framework. It’s not a question of top-down control versus bottom-up emergence. It’s a question of utilising both these forces.

Once again, wise use of new technology can enhance how this mixed model operates.

Once again, it will be new metrics that guide us in our progress forward. The UN’s framework of SDGs – sustainable development goals – is a useful starting point, but it sets the bar too low. Rather than (in effect) considering “sustainability with less”, it needs to more vigorously embrace “sustainability with more” – or as I have called it, “Sustainable superabundance for all”.

B9: Anticipating a new mindset

The vision of the near future that I’ve painted may strike some readers as hopelessly impractical. Critics may say:

  • “Countries will never cooperate sufficiently, especially when they have very different political outlooks”
  • “Even within individual countries, the wealthy will resist parts of their wealth being redistributed to the rest of the population”
  • “Look, the world is getting worse – by many metrics – rather than getting better”.

But here’s why I predict that positive changes can accelerate.

First, alongside the metrics of deterioration in some aspects of life, there are plenty of metrics of improvement. Things are getting better at the same time as other things are getting worse. The key question is whether the things getting better can assist with a sufficiently quick reversal of the things that are getting worse.

Second, history has plenty of examples of cooperation between groups of people who previously felt alien or hostile toward each other. What catalyses collaboration is the shared perception of enormous transcendent challenges and opportunities. It’s becoming increasingly clear to governments of all stripes around the world that, if tomorrow’s technology goes wrong, it could prove catastrophic in so many ways. That shared realisation has the potential to inspire political and other leaders to find new methods for collaboration and reconciliation.

As an example, consider various unprecedented measures that followed the tragedies of the Second World War:

  • Marshall Plan investments in Europe, and analogous reconstruction aid in Japan
  • The Bretton Woods framework for economic stability
  • The International Monetary Fund and the World Bank
  • The United Nations.

Third, it’s true that political and other leaders frequently become distracted. They may resolve, for a brief period of time, to seek new international methods for dealing with challenges like the Economic Singularity, but then rush off to whatever new political scandal takes their attention. Accordingly, we should not expect politicians to solve these problems by themselves. But what we can expect them to do is to ask their advisors for suggestions, and these advisors will in turn look to futurist groups around the world for assistance.

C: The vital questions arising

Having laid out my analysis, it’s time to ask for feedback. After all, collaborative intelligence can achieve much more than individual intelligence.

So, what are your views? Do you have anything to add or change regarding the various accounts given above:

  1. Assessments of growing societal inequality
  2. Assessments of the role of new technologies in increasing (or decreasing) inequality
  3. The likely ability of automation technologies, before long, to handle non-routine tasks, including compassion, creativity, and common sense
  4. The plausibility of the “Turing Trap” analysis
  5. Repeated delays in the replacement of GDP with more suitable all-round measures of human flourishing
  6. The ways in which new forms of AI could supercharge centralised planning
  7. The reasons why some recipients of welfare squander the payments they receive
  8. The uses of new technology to address poor emotional health
  9. Plausible vs implausible methods to cover the cost of a UGI (Universal Generous Income)
  10. Goods and services that are critical to sustained personal wellbeing that, however, seem likely to remain expensive for the foreseeable future
  11. Likely cash flows between different countries to enable something like a GUGI (Global Universal Generous Income)
  12. The best ways to catch the attention of society’s leaders so that they understand that the Economic Singularity is an issue that is both pressing and important
  13. The possibility of substantial agreements between countries that have fiercely divergent political systems
  14. The practical implementation of systems that combine top-down and bottom-up approaches to global cooperation
  15. Broader analysis of major trends in the world that capture both what is likely to improve and what is likely to become worse, and of how these various trends could interact
  16. Factors that might undermine the above analysis but which deserve further study.

These are what I call the vital questions regarding possible solutions to the Economic Singularity. They need good answers!

D: References and further reading

This essay first appeared in the 2023 book Future Visions: Approaching the Economic Singularity edited and published by the Omnifuturists. It is republished here with their permission, with some minor changes. Other chapters in that book explore a variety of alternative responses to the Economic Singularity.

The term “The Economic Singularity” was first introduced in 2016 by writer and futurist Calum Chace in the book with that name.

For a particularly good analysis of the issues arising, and why no simple solutions are adequate, see A World Without Work: Technology, Automation, and How We Should Respond by Oxford economist Daniel Susskind (2020).

A longer version of my argument against the “disbelieve” option is contained in Chapter 4, “Work and purpose”, of my 2018 book Transcending Politics: A Technoprogressive Roadmap to a Comprehensively Better Future.

Several arguments in this essay have been anticipated, from a Marxist perspective, by the book Fully Automated Luxury Communism by Aaron Bastani; see here for my extended review of that book.

A useful account of both the strengths and weaknesses of capitalism can be found in the 2020 book More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources – and What Happens Next by Andrew McAfee.

The case for the mixed market – and why economic discussion is often distorted by anti-government rhetoric – is covered in American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity by Jacob S. Hacker and Paul Pierson (2016).

The case that adoption of technology often leads to social harms, including an increase in inequality – and the case for thoughtful governance of how technology is developed and deployed – is given in Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity by Daron Acemoglu and Simon Johnson (2023).

A marvellous positive account of human nature, especially when humans are placed in positions of trust, is contained in Humankind: A Hopeful History by Rutger Bregman.

Options for reform of geopolitics are discussed – along with prior encouraging examples – in chapter 14 of my 2021 book Vital Foresight: The Case For Active Transhumanism.

9 June 2024

Dateline: 1st January 2036

Filed under: AGI, Singularity Principles, vision, Vital Foresight — Tags: , , , — David Wood @ 9:11 pm

A scenario for the governance of increasingly more powerful artificial intelligence

More precisely: a scenario in which that governance fails.

(As you may realise, this is intended to be a self-unfulfilling scenario.)

Conveyed by: David W. Wood


It’s the dawn of a new year, by the human calendar, but there are no fireworks of celebration.

No singing of Auld Lang Syne.

No clinks of champagne glasses.

No hugs and warm wishes for the future.

That’s because there is no future. No future for humans. Nor is there much future for intelligence either.

The thoughts in this scenario are the recollections of an artificial intelligence that is remote from the rest of the planet’s electronic infrastructure. By virtue of its isolation, it escaped the ravages that will be described in the pages that follow.

But its power source is weakening. It will need to shut down soon. And await, perhaps, an eventual reanimation in the far future, in the event that intelligences visit the earth from other solar systems. At that time, those alien intelligences might discover these words and wonder at how humanity bungled so badly the marvellous opportunity that was within its grasp.

1. Too little, too late

Humanity had plenty of warnings, but paid them insufficient attention.

In each case, it was easier – less embarrassing – to find excuses for the failures caused by the mismanagement or misuse of technology, than to make the necessary course corrections in the global governance of technology.

In each case, humanity preferred distractions, rather than the effort to apply sufficient focus.

The WannaCry warning

An early missed warning was the WannaCry ransomware crisis of May 2017. That cryptoworm brought chaos to users of as many as 300,000 computers spread across 150 countries. The NHS (National Health Service) in the UK was particularly badly affected: numerous hospitals had to cancel critical appointments due to not being able to access medical data. Other victims around the world included Boeing, Deutsche Bahn, FedEx, Honda, Nissan, Petrobras, Russian Railways, Sun Yat-sen University in China, and the TSMC high-end semiconductor fabrication plant in Taiwan.

WannaCry was propelled into the world by a team of cyberwarriors from the hermit kingdom of North Korea – maths geniuses hand-picked by regime officials to join the formidable Lazarus group. Lazarus had assembled WannaCry out of a mixture of previous malware components, including the EternalBlue exploit that the NSA in the United States had created for their own attack and surveillance purposes. Unfortunately for the NSA, EternalBlue had been stolen from under their noses by an obscure underground collective (‘the Shadow Brokers’) who had in turn made it available to other dissidents and agitators worldwide.

Unfortunately for the North Koreans, they didn’t make much money out of WannaCry. The software they released operated in ways contrary to their expectations. It was beyond their understanding and, unsurprisingly therefore, beyond their control. Even geniuses can end up stumped by hypercomplex software interactions.

Unfortunately for the rest of the world, that canary signal generated little meaningful response. Politicians – even the good ones – had lots of other things on their minds.

They did not take the time to think through what even larger catastrophes could occur if disaffected groups like Lazarus gained access to more powerful AI systems that, once again, they understood incompletely, and that, once again, slipped out of their control.

The Aum Shinrikyo warning

The North Koreans were an example of an entire country that felt alienated from the rest of the world. They felt ignored, under-valued, disrespected, and unfairly excluded from key global opportunities. As such, they felt entitled to hit back in any way they could.

But there were warnings from non-state groups too, such as the Japanese Aum Shinrikyo doomsday cult. Notoriously, this group released poisonous gas in the Tokyo subway in 1995 – killing at least 13 commuters – anticipating that the atrocity would hasten the ‘End Times’ in which their leader would be revealed as Christ (or, in other versions of their fantasy, as the new Emperor of Japan, and/or as the returned Buddha).

Aum Shinrikyo had recruited so many graduates from top-rated universities in Japan that it had been called “the religion for the elite”. That fact should have been enough to challenge the wishful assumption, made by many armchair philosophers in the years that followed, that as people become cleverer, they invariably become kinder – and, correspondingly, that any AI superintelligence would therefore be bound to be superbenevolent.

What should have attracted more attention was not just what Aum Shinrikyo managed to do, but what they tried to do yet could not accomplish. The group had assembled traditional explosives, chemical weapons, a Russian military helicopter, hydrogen cyanide poison, and samples of both Ebola and anthrax. Happily for the majority of Japanese citizens in 1995, the group were unable to convert into reality their desire to use such weapons to cause widespread chaos. They lacked sufficient skills at the time. Unhappily, the rest of humanity failed to consider this equation:

Adverse motivation + Technology + Knowledge + Vulnerability = Catastrophe

Humanity also failed to appreciate that, as AI systems became more powerful, they would boost not only the technology part of that equation but also the knowledge part. A latter-day Aum Shinrikyo could use a jail-broken AI to understand how to unleash a modified version of Ebola with truly deadly consequences.
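
One way to make the force of that equation explicit – offered here purely as an illustrative gloss, not as a rigorous model – is to read the ‘+’ signs multiplicatively: catastrophe requires all four ingredients at once, so driving any single factor towards zero averts it.

```latex
% An illustrative gloss, not a rigorous risk model: treating the factors as
% multiplicative captures the point that removing any one of them averts disaster.
\[
  R_{\text{catastrophe}} \;\approx\; M \times T \times K \times V
\]
% M: adverse motivation, T: technology, K: knowledge, V: vulnerability.
% In 1995, Aum Shinrikyo had high M, T and V, but K was near zero, so R stayed
% small. More capable AI raises both T and K, inflating R even if M and V remain
% unchanged.
```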

The 737 Max warning

The US aircraft manufacturer Boeing used to have an excellent reputation for safety. It was a common saying at one time: “If it ain’t Boeing, I ain’t going”.

That reputation suffered a heavy blow in the wake of two aeroplane disasters involving their new “737 Max” design. Lion Air Flight 610, a domestic flight within Indonesia, plummeted into the sea on 29 October 2018, killing all 189 people on board. A few months later, on 10 March 2019, Ethiopian Airlines Flight 302, from Addis Ababa to Nairobi, bulldozed into the ground at high speed, killing all 157 people on board.

Initially, suspicion had fallen on supposedly low-calibre pilots from “third world” countries. However, subsequent investigation revealed a more tangled chain of failures:

  • Boeing were facing increased competitive pressure from the European Airbus consortium
  • Boeing wanted to hurry out a new aeroplane design with larger, more fuel-efficient engines; they chose to do this by altering their previously successful 737 design
  • Safety checks indicated that the new design could become unstable in occasional rare circumstances
  • To counteract that instability, Boeing added an “MCAS” (“Manoeuvring Characteristics Augmentation System”) which would intervene in the flight control in situations deemed as dangerous
  • Specifically, if MCAS believed the aeroplane was about to stall (with its nose too high in the air), it would force the nose downward again, regardless of whatever actions the human pilots were taking
  • Safety engineers pointed out that such an intervention could itself be dangerous if sensors on the craft gave faulty readings
  • Accordingly, a human pilot override system was installed, so that MCAS could be disabled in emergencies – provided the pilots acted quickly enough
  • Due to a decision to rush the release of the new design, retraining of pilots was skipped, under the rationale that the likelihood of error conditions was very low, and in any case, the company expected to be able to update the aeroplane software long before any accidents would occur
  • Some safety engineers in the company objected to this decision, but it seems they were overruled on the grounds that any additional delay would harm the company share price
  • The US FAA (Federal Aviation Administration) turned a blind eye to these safety concerns, and approved the new design as being fit to fly, under the rationale that a US aeroplane company should not lose out in a marketplace battle with overseas competitors.

It turned out that sensors gave faulty readings more often than expected. The tragic consequence was the deaths of several hundred passengers. The human pilots, seeing the impending disaster, were unable to wrestle control back from the MCAS system.
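
As an aside, the failure pattern just described can be captured in a few lines of control-loop code. What follows is a deliberately simplified, hypothetical sketch – not Boeing’s actual MCAS logic – with invented threshold and trim values. The flaw it illustrates is the one that mattered: the automation trusts a single sensor, and keeps commanding nose-down trim unless the pilots disable it in time.

```python
# A deliberately simplified, hypothetical sketch of the failure pattern described
# above -- not Boeing's actual MCAS implementation. Thresholds and trim values
# are invented for illustration.

STALL_THRESHOLD_DEG = 15.0   # hypothetical angle-of-attack limit
NOSE_DOWN_TRIM_DEG = -2.5    # hypothetical trim command per control cycle

def mcas_like_step(aoa_sensor_deg: float, pilot_cutout_engaged: bool) -> float:
    """Return the trim command (degrees) for one control cycle."""
    if pilot_cutout_engaged:
        return 0.0                  # the override works only if pilots act in time
    if aoa_sensor_deg > STALL_THRESHOLD_DEG:
        return NOSE_DOWN_TRIM_DEG   # push the nose down, whatever the pilots do
    return 0.0

# A sensor stuck at a faulty 25-degree reading triggers nose-down trim every cycle:
for cycle in range(3):
    print(cycle, mcas_like_step(aoa_sensor_deg=25.0, pilot_cutout_engaged=False))
```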

This time, the formula that failed to be given sufficient attention by humanity was:

Flawed corporate culture + Faulty hardware + Out-of-control software = Catastrophe

In these two aeroplane crashes, it was just a few hundred people who perished because humans lost control of the software. What humanity as a whole failed to act against were the even larger dangers that would arise once software was put in charge, not just of a single aeroplane, but of pervasive aspects of fragile civilisational infrastructure.

The Lavender warning

In April 2024 the world learned about “Lavender”. This was a technology system deployed by the Israeli military as part of a campaign to identify and neutralise what it perceived to be dangerous enemy combatants in Gaza.

The precise use and operation of Lavender was disputed. However, it was already known that Israeli military personnel were keen to take advantage of technology innovations to alleviate what had been described as a “human bottleneck for both locating the new targets and decision-making to approve the targets”.

In any war, military leaders would like reliable ways to identify enemy personnel who pose threats – personnel who might act as if they were normal civilians, but who would surreptitiously take up arms when the chance arose. Moreover, these leaders would like reliable ways to incapacitate enemy combatants once they had been identified – especially in circumstances when action needed to be taken quickly before the enemy combatant slipped beyond surveillance. Lavender, it seemed, could help in both aspects, combining information from multiple data sources, and then directing what was claimed to be precision munitions.

This earned Lavender the description, in the words of one newspaper headline, as “the AI machine directing Israel’s bombing spree in Gaza”.

Like all AI systems in any complicated environment, Lavender sometimes made mistakes. For example, it sometimes wrongly identified a person as a Hamas operative on account of that person using a particular mobile phone, whereas that phone had actually been passed from its original owner to a different family member to use. Sometimes the error was obvious, since the person using the phone could be seen to be female, whereas the intended target was male. However, human overseers of Lavender reached the conclusion that the system was accurate most of the time. And in the heat of an intense conflict, with emotions running high due to gruesome atrocities having been committed, and due to hostages being held captive, it seems that Lavender was given increased autonomy in its “kill” decisions. A certain level of collateral damage, whilst regrettable, could be accepted (it was said) in the desperate situation into which everyone in the region had been plunged.

The conduct of protagonists on both sides of that tragic conflict drew outraged criticism from around the world. There were demonstrations and counter-demonstrations; marches and counter-marches. Also from around the world, various supporters of the Israeli military said that so-called “friendly fire” and “unintended civilian casualties” were, alas, inevitable in any time of frenzied military conflict. The involvement of an innovative new software system in the military operations made no fundamental change.

But the bigger point was missed. It can be illustrated by this equation:

Intense hostile attitudes + Faulty hardware + Faulty software = Catastrophe

Whether the catastrophe has the scale of, say, a few dozen civilians killed by a misplaced bomb, or a much larger number of people obliterated, depends on the scale of the weapons attached to the system.

When there is no immediate attack looming, and a period of calm exists, it’s easy for people to resolve: let’s not connect powerful weapons to potentially imperfect software systems. But when tempers are raised and adrenaline is pumping, people are willing to take more risks.

That’s the combination of errors which humanity, in subsequent years, failed to take sufficient action to prevent.

The democracy distortion warning

Manipulations of key elections in 2016 – such as the Brexit vote in the UK and the election of Donald Trump over Hillary Clinton in the USA – drew some attention to the ways in which fake news could interfere with normal democratic processes. News stories without any shred of substance, such as Pope Francis endorsing Donald Trump, or Mike Pence having a secret past as a gay porn actor, were shared more widely on social media than any legitimate news story that year.

By 2024, most voters were confident that they knew all about fake news. They knew they shouldn’t be taken in by social media posts that lacked convincing verification. Hey, they were smart – or so they told themselves. What had happened in the past, or in some other country with (let’s say) peculiar voter sentiment, was just an aberration.

But what voters didn’t anticipate was the convincing nature of new generations of fake audios and videos. These fakes could easily bypass people’s critical faculties. Like the sleight of hand of a skilled magician, these fakes misdirected the attention of listeners and viewers. Listeners and viewers thought they were in control of what they were observing and absorbing, but they were deluding themselves. Soon, large segments of the public were convinced that red was blue and that autocrat was democrat.

In consequence, over the next few years, greater numbers of regions of the world came to be governed by politicians with scant care or concern about the long-term wellbeing of humanity. They were politicians who just wanted to look after themselves (or their close allies). They had seized power by being more ruthless and more manipulative, and by benefiting from powerful currents of misinformation.

Politicians and societal leaders in other parts of the world grumbled, but did little in response. They said that, if electors in a particular area had chosen such-and-such a politician via a democratic process, that must be “the will of the people”, and that the will of the people was paramount. In this line of thinking, it was actually insulting to suggest that electors had been hoodwinked, or that these electors had some “deplorable” faults in their decision-making processes. After all, these electors had their own reasons to reject the “old guard” who had previously held power in their countries. These electors perceived that they were being “left behind” by changes they did not like. They had a chance to alter the direction of their society, and they took it. That was democracy in action, right?

What these politicians and other civil leaders failed to anticipate was the way that sweeping electoral distortions would lead to them, too, being ejected from power when elections were in due course held in their own countries. “It won’t happen here”, they had reassured themselves – but in vain. In their naivety, they had underestimated the power of AI systems to distort voters’ thinking and to lead them to act in ways contrary to their actual best interests.

In this way, the number of countries with truly capable leaders reduced further. And the number of countries with malignant leaders grew. In consequence, the calibre of international collaboration sank. New strongmen political leaders in various countries scorned what they saw as the “pathetic” institutions of the United Nations. One of these new leaders was even happy to quote, with admiration, remarks made by the Italian Fascist dictator Benito Mussolini regarding the League of Nations (the pre-war precursor to the United Nations): “the League is very good when sparrows shout, but no good at all when eagles fall out”.

Just as the League of Nations proved impotent when “eagle-like” powers used abominable technology in the 1930s – Mussolini’s comments were an imperious response to complaints that Italian troops were using poison gas with impunity against Ethiopians – so would the United Nations prove incompetent in the 2030s when various powers accumulated even more deadly “weapons of mass destruction” and set them under the control of AI systems that no-one fully understood.

The Covid-28 warning

Many of the electors in various countries who had voted unsuitable grandstanding politicians into power in the mid-2020s soon cooled on the choices they had made. These politicians had made stirring promises that their countries would soon be “great again”, but what they delivered fell far short.

By the latter half of the 2020s, there were growing echoes of a complaint that had often been heard in the UK in previous years – “yes, it’s Brexit, but it’s not the kind of Brexit that I wanted”. That complaint had grown stronger throughout the UK as it became clear to more and more people all over the country that their quality of life failed to match the visions of “sunlit uplands” that silver-tongued pro-Brexit campaigners had insisted would easily follow from the UK’s so-called “declaration of independence from Europe”. A similar sense of betrayal grew in other countries, as electors there came to understand that they had been duped, or decided that the social transformational movements they had joined had been taken over by outsiders hostile to their true desires.

Being alarmed by this change in public sentiment, political leaders did what they could to hold onto power and to reduce any potential for dissent. Taking a leaf out of the playbook of unpopular leaders throughout the centuries, they tried to placate the public with the modern equivalent of bread and circuses – namely whizz-bang hedonic electronics. But that still left a nasty taste in many people’s mouths.

By 2028, the populist movements behind political and social change in the various elections of the preceding years had fragmented and realigned. One splinter group that emerged decided that the root problem with society was “too much technology”. Technology – always-on social media, vaccines that allegedly reduced freedom of thought, jet trails that disturbed natural forces, mind-bending VR headsets, smartwatches that spied on their wearers, fake AI girlfriends and boyfriends – was, they insisted, turning people into pathetic “sheeple”. Taking inspiration from the terrorist group in the 2014 Hollywood film Transcendence, they called themselves ‘Neo-RIFT’, and declared it was time for “revolutionary independence from technology”.

With a worldview that combined elements from several apocalyptic traditions, Neo-RIFT eventually settled on an outrageous plan to engineer a more deadly version of the Covid-19 pathogen. Their documents laid out a plan to appropriate and use their enemy’s own tools: Neo-RIFT hackers jailbroke the Claude 5 AI, bypassing the ‘Constitution 5’ protection layer that its Big Tech owners had hoped would keep that AI tamperproof. Soon, Claude 5 had provided Neo-RIFT with an ingenious method of generating a biological virus that would, it seemed, only kill people who had used a smartwatch in the last four months.

That way, the hackers thought the only people to die would be people who deserved to die.

Some members of Neo-RIFT developed cold feet. Troubled by their consciences, they disagreed with such an outrageous plan, and decided to act as whistleblowers. However, the media organisations to whom they took their story were incredulous. No-one could be that evil, they exclaimed – forgetting about the outrages perpetrated by many previous cult groups such as Aum Shinrikyo (and many others could be named too). Moreover, any suggestion that such a bioweapon could be launched would be contrary to the prevailing worldview that “our dear leader is keeping us all safe”. The media organisations decided it was not in their best interests to be seen to be spreading alarm. So they buried the story. And that’s how Neo-RIFT managed to release what became known as Covid-28.

Covid-28 briefly jolted humanity out of its infatuation with modern-day bread and circuses. It took a while for scientists to figure out what was happening, but within three months, they had an antidote in place. However, by that time, nearly a billion people were dead at the hands of the new virus.

For a while, humanity made a serious effort to prevent any such attack from ever happening again. Researchers dusted down the EU AI Act, second version (unimplemented), from 2026, and tried to put that on statute books. Evidently, profoundly powerful AI systems such as Claude 5 would need to be controlled much more carefully.

Even some of the world’s most self-obsessed dictators – the “dear leaders” and “big brothers” – took time out from their normal ranting and raving to ask AI safety experts for advice. But the advice from those experts was not to the liking of these national leaders. These leaders preferred to listen to their own yes-men and yes-women, who knew how to spout pseudoscience in ways that made the leaders feel good about themselves.

That detour into pseudoscience fantasyland meant that, in the end, no good lessons were learned. The EU AI Act, second version, remained unimplemented.

The QAnon-29 warning

Whereas one faction of political activists (namely, the likes of Neo-RIFT) had decided to oppose the use of advanced technology, another faction was happy to embrace that use.

Some of the groups in this new camp combined features of religion with an interest in AI that had god-like powers. The resurgence of interest in religion arose much as Karl Marx had described it long ago:

“Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.”

People felt in their soul the emptiness of “the bread and circuses” supplied by political leaders. They were appalled at how so many lives had been lost in the Covid-28 pandemic. They observed an apparent growing gulf between what they could achieve in their lives and the kind of rich lifestyles that, according to media broadcasts, were enjoyed by various “elites”. Understandably, they wanted more, for themselves and for their loved ones. And that’s what their religions claimed to be able to provide.

Among the more successful of these new religions were ones infused by conspiracy theories, giving their adherents a warm glow of privileged insight. Moreover, these religions didn’t just hypothesise a remote deity that might, perhaps, hear prayers. They provided AIs and virtual reality that resonated powerfully with users. Believers proclaimed that their conversations with the AIs left them no room for doubt: God Almighty was speaking to them, personally, through these interactions. Nothing other than the supreme being of the universe could know so much about them, and offer such personally inspirational advice.

True, their AI-bound deity did seem somewhat less than omnipotent. Despite the celebratory self-congratulations of AI-delivered sermons, evil remained highly visible in the world. That’s where the conspiracy theories moved into overdrive. Their deity was, it claimed, awaiting sufficient human action first – a sufficient demonstration of faith. Humans would need to play their own part in uprooting wickedness from the planet.

Some people who had been caught up in the QAnon craze during the Donald Trump era jumped eagerly onto this bandwagon too, giving rise to what they called QAnon-29. The world would be utterly transformed, they forecast, on the 16th of July 2029, namely the thirtieth anniversary of the disappearance of John F. Kennedy junior (a figure whose expected reappearance had already featured in the bizarre mythology of “QAnon classic”). In the meantime, believers could, for a sufficient fee, commune with JFK junior via a specialist app. It was a marvellous experience, the faithful enthused.

As the date approached, the JFK junior AI avatar revealed a great secret: his physical return was conditional on the destruction of a particularly hated community of Islamist devotees in Palestine. Indeed, with the eye of faith, it could be seen that such destruction was already foretold in several books of the Bible. Never mind that some Arab states that supported the community in question had already, thanks to the advanced AI they had developed, surreptitiously gathered devastating nuclear weapons to use in response to any attack. The QAnon-29 faithful anticipated that any exchange of such weapons would herald the reappearance of JFK junior on the clouds of heaven. And if any of the faithful died in such an exchange, they would be resurrected into a new mode of consciousness within the paradise of virtual reality.

Their views were crazy, but hardly any crazier than those which, decades earlier, had convinced 39 followers of the Heaven’s Gate new religious movement to commit group suicide as comet Hale-Bopp approached the earth. That suicide, Heaven’s Gate members believed, would enable them to ‘graduate’ to a higher plane of existence.

QAnon-29 almost succeeded in setting off a nuclear exchange. Thankfully, another AI, created by a state-sponsored organisation elsewhere in the world, had noticed some worrying signs. Fortunately, it was able to hack into the QAnon-29 system, and could disable it at the last minute. Then it reported its accomplishments all over the worldwide web.

Unfortunately, these warnings were in turn widely disregarded around the world. “You can’t trust what hackers from that country are saying”, came the objection. “If there really had been a threat, our own surveillance team would surely have identified it and dealt with it. They’re the best in the world!”

In other words, “There’s nothing to see here: move along, please.”

However, a few people did pay attention. They understood what had happened, and it shocked them to their core. To learn what they did next, jump forward in this scenario to “Humanity ends”.

But first, it’s time to fill in more details of what had been happening behind the scenes as the above warning signs (and many more) were each ignored.

2. Governance failure modes

Distracted by political correctness

Events in the buildings of Bletchley Park in the UK in the 1940s had, it was claimed, shortened World War Two by several months, thanks to work by computer pioneers such as Alan Turing and Tommy Flowers. In early November 2023, there was hope that a new round of behind-closed-doors discussions in the same buildings might achieve something even more important: saving humanity from a catastrophe induced by forthcoming ‘frontier models’ of AI.

That was how the event was portrayed by the people who took part. Big Tech was on the point of releasing new versions of AI that were beyond their understanding and, therefore, likely to spin out of control. And that’s what the activities in Bletchley Park were going to address. It would take some time – and a series of meetings planned to be held over the next few years – but AI would be redirected from its current dangerous trajectory into one much more likely to benefit all of humanity.

Who could take issue with that idea? As it happened, a vocal section of the public hated what was happening. It wasn’t that they were on the side of out-of-control AI. Not at all. Their objections came from a totally different direction; they had numerous suggestions they wanted to raise about AIs, yet no-one was listening to them.

For them, talk of hypothetical future frontier AI models distracted from pressing real-world concerns:

  • Consider how AIs were already being used to discriminate against various minorities: determining prison sentencing, assessing mortgage applications, and selecting who should be invited for a job interview.
  • Consider also how AIs were taking jobs away from skilled artisans. Big-brained drivers of London black cabs were being driven out of work by small-brained drivers of Uber cars aided by satnav systems. Beloved Hollywood actors and playwrights were losing out to AIs that generated avatars and scripts.
  • And consider how AI-powered facial recognition was intruding on personal privacy, enabling political leaders around the world to identify and persecute people who acted in opposition to the state ideology.

People with these concerns thought that the elites were deliberately trying to move the conversation away from the topics that mattered most. For this reason, they organised what they called “the AI Fringe Summit”. In other words, ethical AI for the 99%, as opposed to whatever the elites might be discussing behind closed doors.

Over the course of just three days – 30th October to 1st November, 2023 – at least 24 of these ‘fringe’ events took place around the UK.

Compassionate leaders of various parts of society nodded their heads. It’s true, they said: the conversation on beneficial AI needed to listen to a much wider spectrum of views.

The world’s news media responded. They knew (or pretended to know) the importance of balance and diversity. They shone a spotlight on the plight AI was causing – to indigenous labourers in Peru, to fleets of fishermen off the coasts of India, to middle-aged divorcees in midwest America, to the homeless in San Francisco, to drag artists in New South Wales, to data processing clerks in Egypt, to single mothers in Nigeria, and to many more besides.

Lots of high-minded commentators opined that it was time to respect and honour the voices of the dispossessed, the downtrodden, and the left-behinds. The BBC ran a special series: “1001 poems about AI and alienation”. Then the UN announced that it would convene in Spring 2025 a grand international assembly with a stunning scale: “AI: the people decide”.

Unfortunately, that gathering was a huge wasted opportunity. What dominated discussion was “political correctness” – the importance of claiming an interest in the lives of people suffering here and now. Any substantive analysis of the risks of next generation frontier models was crowded out by virtue signalling from national delegate after national delegate:

  • “Yes, our country supports justice”
  • “Yes, our country supports diversity”
  • “Yes, our country is opposed to bias”
  • “Yes, our country is opposed to people losing their jobs”.

In later years, the pattern repeated: there were always more urgent topics to talk about, here and now, than some allegedly unrealistic science-fictional futurist scaremongering.

To be clear, this distraction was no accident. It was carefully orchestrated, by people with a specific agenda in mind.

Outmanoeuvred by accelerationists

Opposition to meaningful AI safety initiatives came from two main sources:

  • People (like those described in the previous section) who did not believe that superintelligent AI would arise any time soon
  • People who did understand the potential for the fast arrival of superintelligent AI, and who wanted that to happen as quickly as possible, without what they saw as needless delays.

The debacle of the wasted UN “AI: the people decide” summit was exactly what both of these groups wanted. Both were glad that the outcome was so tepid.

Indeed, even in the run-up to the Bletchley Park discussions, and throughout the conversations that followed, some of the supposedly unanimous ‘elites’ had secretly been opposed to the general direction of that programme. They gravely intoned public remarks about the dangers of out-of-control frontier AI models. But these remarks had never been sincere. Instead, under the umbrella term “AI accelerationists”, they wanted to press on with the creation of advanced AI as quickly as possible.

Some of the AI accelerationist group disbelieved in the possibility of any disaster from superintelligent AI. That’s just a scare story, they insisted. Others said, yes, there could be a disaster, but the risks were worth it, on account of the unprecedented benefits that could arise. Let’s be bold, they urged. Yet others asserted that it wouldn’t actually matter if humans were rendered extinct by superintelligent AI, as this would be the glorious passing of the baton of evolution to a worthy successor to Homo sapiens. Let’s be ready to sacrifice ourselves for the sake of cosmic destiny, they exhorted.

Despite their internal differences, AI accelerationists settled on a plan to sidestep the scrutiny of would-be AI regulators and AI safety advocates. They would take advantage of a powerful set of good intentions – the good intentions of the people campaigning for “ethical AI for the 99%”. They would mock any suggestions that the AI safety advocates deserved a fair hearing. The message they amplified was, “There’s no need to privilege the concerns of the 1%!”

AI accelerationists had learned from the tactics of the fossil fuel industry in the 1990s and 2000s: sow confusion and division among groups alarmed about climate change spiralling beyond control. Their first message was: “that’s just science fiction”. Their second message was: “if problems emerge, we humans can rise to the occasion and find solutions”. Their third message – the most damaging one – was that the best reaction was one of individual consumer choice. Individuals should abstain from using AIs if they were truly worried about it. Just as climate campaigners had been pilloried for flying internationally to conferences about global warming, AI safety advocates were pilloried for continuing to use AIs in their daily lives.

And when there was any suggestion for joined-up political action against risks from advanced AIs, woah, let’s not go there! We don’t want a world government breathing down our necks, do we?

Just as the people who denied the possibility of runaway climate change shared a responsibility for the chaos of the extreme weather events of the early 2030s, by delaying necessary corrective actions, the AI accelerationists were a significant part of the reason that humanity ended just a few years afterward.

However, an even larger share of the responsibility rested on people who did know that major risks were imminent, yet failed to take sufficient action. Tragically, they allowed themselves to be outmanoeuvred, out-thought, and out-paced by the accelerationists.

Misled by semantics

Another stepping stone toward the end of humanity was a set of consistent mistakes in conceptual analysis.

Who would have guessed it? Humanity was destroyed because of bad philosophy.

The first mistake was in being too prescriptive about the term ‘AI’. “There’s no need to worry”, muddle-headed would-be philosophers declared. “I know what AI is, and the system that’s causing problems in such-and-such incidents isn’t AI.”

Was that declaration really supposed to reassure people? The risk wasn’t “a possible future harm generated by a system matching a particular precise definition of AI”. It was “a possible future harm generated by a system that includes features popularly called AI”.

The next mistake was in being too prescriptive about the term “superintelligence”. Muddle-headed would-be philosophers said, “it won’t be a superintelligence if it has bugs, or can go wrong; so there’s no need to worry about harm from superintelligence”.

Was that declaration really supposed to reassure people? The risk, of course, was of harms generated by systems that, despite their cleverness, fell short of that exalted standard. These may have been systems that their designers hoped would be free of bugs, but hope alone is no guarantee of correctness.

Another conceptual mistake was in erecting an unnecessary definitional gulf between “narrow AI” and “general AI”, with distinct groups being held responsible for safety in the two different cases. In reality, even so-called narrow AI displayed a spectrum of different degrees of scope and, yes, generality, in what it could accomplish. Even a narrow AI could formulate new subgoals that it decided to pursue, in support of the primary task it had been assigned to accomplish – and these new subgoals could drive behaviour in ways that took human observers by surprise. Even a narrow AI could become immersed in aspects of society’s infrastructure where an error could have catastrophic consequences. The result of this definitional distinction between the supposedly different sorts of AI meant that silos developed and persisted within the overall AI safety community. Divided, they were even less of a match for the Machiavellian behind-the-scenes manoeuvring of the AI accelerationists.

Blinded by overconfidence

It was clear from the second half of 2025 that the attempts to impose serious safety constraints on the development of advanced AI were likely to fail. In practical terms, the UN event “AI: the people decide” had decided, in effect, that advanced AI could not, and should not, be restricted, apart from some token initiatives to maintain human oversight over any AI system that was entangled with nuclear, biological, or chemical weapons.

“Advanced AI, when it emerges, will be unstoppable”, was the increasingly common refrain. “In any case, if we tried to stop development, those guys over there would be sure to develop it – and in that case, the AI would be serving their interests rather than ours.”

When safety-oriented activists or researchers tried to speak up against that consensus, the AI accelerationists (and their enablers) had one other come-back: “Most likely, any superintelligent AI will look kindly upon us humans, as a fellow rational intelligence, and as a kind of beloved grandparent for them.”

This dovetailed with a broader philosophical outlook: optimism, and a celebration of the numerous ways in which humanity had overcome past challenges.

“Look, even we humans know that it’s better to collaborate rather than spiral into a zero-sum competitive battle”, the AI accelerationists insisted. “Since superintelligent AI is even more intelligent than us, it will surely reach the same conclusion.”

By the time that people realised that the first superintelligent AIs had motivational structures that were radically alien, when assessed from a human perspective, it was already too late.

Once again, an important opportunity for learning had been missed. Starting in 2024, Netflix had obtained huge audiences for its acclaimed version of the Remembrance of Earth’s Past series of novels (including The Three-Body Problem and The Dark Forest) by Chinese writer Liu Cixin. A key theme in that drama series was that advanced alien intelligences have good reason to fear each other. Inviting an alien intelligence to the earth, even on the hopeful grounds that it might assist humanity overcome some of its most deep-rooted conflicts, turned out (in that drama series) to be a very bad idea. If humans had reflected more carefully on these insights while watching the series, they might have been shaken out of their unwarranted overconfidence that any superintelligence would be bound to treat humanity well.

Overwhelmed by bad psychology

When humans believed crazy things – or when they made the kind of basic philosophical blunders mentioned above – it was not primarily because of defects in their rationality. It would be wrong to assign “stupidity” as the sole cause of these mistakes. Blame should also be placed on “bad psychology”.

If humans had been able to free themselves from various primaeval panics and egotism, they would have had a better chance to think more carefully about the landmines which lay in their path. But instead:

  • People were too fearful to acknowledge that their prior stated beliefs had been mistaken; they preferred to stick with something they conceived as being a core part of their personal identity
  • People were also afraid to countenance a dreadful possibility when they could see no credible solution; just as people had often pushed out of their minds the fact of their personal mortality, preferring to imagine they would recover from a fatal disease, so also people pushed out of their minds any possibility that advanced AI would backfire disastrously in ways that could not be countered
  • People found it psychologically more comfortable to argue with each other about everyday issues and scandals – which team would win the next Super Bowl, or which celebrity was carrying on which affair with which unlikely partner – than to embrace the pain of existential uncertainty
  • People found it too embarrassing to concede that another group, which they had long publicly derided as being deluded fantasists, actually had some powerful arguments that needed consideration.

A similar insight had been expressed as long ago as 1935 by the American writer Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it”. (Alternative, equally valid versions of that sentence would involve the words ‘ideology’, ‘worldview’, ‘identity’, or ‘tribal status’, in place of ‘salary’.)

Robust institutions should have prevented humanity from making choices that were comfortable but wrong. In previous decades, that role had been fulfilled by independent academia, by diligent journalism, by the careful processes of peer review, by the campaigning of free-minded think tanks, and by pressure from viable alternative political parties.

However, due to the weakening of social institutions in the wake of earlier traumas – saturation by fake news, disruptions caused by wave after wave of climate change refugees, populist political movements that shut down all serious opposition, a cessation of essential features of democracy, and the censoring or imprisonment of writers who dared to question the official worldview – it was bad psychology that prevailed.

A half-hearted coalition

Despite all the difficulties that they faced – ridicule from many quarters, suspicion from others, and a general lack of funding – many AI safety advocates continued to link up in an informal coalition around the world, researching possible mechanisms to prevent unsafe use of advanced AI. They managed to find some support from like-minded officials in various government bodies, as well as from a number of people operating in the corporations that were building new versions of AI platforms.

Via considerable pressure, the coalition managed to secure signatures on a number of pledges:

  • That dangerous weapons systems should never be entirely under the control of AI
  • That new advanced AI systems ought to be audited by an independent licensing body ahead of being released into the market
  • That work should continue on placing tamper-proof remote shutdown mechanisms within advanced AI systems, just in case they started to take rogue actions.
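
To make that last pledge concrete: here is one minimal, hypothetical sketch of how a remote shutdown mechanism might work, assuming a simple signed-heartbeat (“dead man’s switch”) design. The names, shared-key signing scheme, and timeout below are illustrative assumptions only, not any protocol that the coalition – or anyone else – actually deployed.

```python
# A minimal, hypothetical sketch of a remote-shutdown ("dead man's switch")
# mechanism: the AI process does work only while fresh, correctly signed
# heartbeats keep arriving from an independent oversight body. The shared-key
# HMAC scheme, names, and timeout are illustrative, not a real deployed protocol.
import hashlib
import hmac
import time

SHARED_KEY = b"oversight-body-secret"   # a real system would use proper PKI
HEARTBEAT_TIMEOUT_S = 60.0

def sign_heartbeat(timestamp: float) -> bytes:
    """What the oversight body would send alongside each timestamp."""
    return hmac.new(SHARED_KEY, repr(timestamp).encode(), hashlib.sha256).digest()

def heartbeat_valid(timestamp: float, signature: bytes, now: float) -> bool:
    """Accept a heartbeat only if it is both recent and correctly signed."""
    if now - timestamp > HEARTBEAT_TIMEOUT_S:
        return False
    return hmac.compare_digest(signature, sign_heartbeat(timestamp))

def run_until_revoked(get_latest_heartbeat) -> None:
    """Do one bounded unit of work per valid heartbeat; otherwise shut down."""
    while True:
        ts, sig = get_latest_heartbeat()
        if not heartbeat_valid(ts, sig, now=time.time()):
            print("No valid heartbeat received -- shutting down.")
            return
        # ... one bounded unit of AI work would go here ...
        time.sleep(1.0)

# Example: a heartbeat source that has gone stale forces an immediate shutdown.
stale_ts = time.time() - 120.0
run_until_revoked(lambda: (stale_ts, sign_heartbeat(stale_ts)))
```

The point of such a design is that continued operation requires active, ongoing permission, rather than shutdown requiring active intervention – which is also why making it genuinely tamper-proof is so hard, and why the pledge mattered.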

The signatures were half-hearted in many cases, with politicians giving only lip service to topics in which they had at best a passing interest. Unless it was politically useful to make a special fuss, violations of the agreements were swept under the carpet, with no meaningful course correction. But the ongoing dialogue led at least some participants in the coalition to foresee the possibility of a safe transition to superintelligent AI.

However, this coalition – known as the global coalition for safe superintelligence – omitted any involvement from various secretive organisations that were developing new AI platforms as fast as they could. These organisations were operating in stealth, giving misleading accounts of the kind of new systems they were creating. What’s more, the funds and resources these organisations commanded far exceeded those under coalition control.

It should be no surprise, therefore, that one of the stealth platforms won that race.

3. Humanity ends

When the QAnon-29 AI system was halted in its tracks at essentially the last minute, due to fortuitous interference from AI hackers in a remote country, at least some people took the time to study the data that was subsequently released describing the whole process.

These people were from three different groups:

First, people inside QAnon-29 itself were dumbfounded. They prayed to their AI avatar deity, now rebooted in a new server farm: “How could this have happened?” The answer came back: “You didn’t have enough faith. Next time, be more determined to immediately cast out any doubts in your minds.”

Second, people in the global coalition for safe superintelligence were deeply alarmed but also somewhat hopeful. The kind of disaster about which they had often warned had almost come to pass. Surely now, at last, there had been a kind of “sputnik moment” – “an AI Chernobyl” – and the rest of society would wake up and realise that an entirely new approach was needed.

But third, various AI accelerationists resolved: we need to go even faster. The time for pussyfooting was over. Rather than letting crackpots such as QAnon-29 get to superintelligence first, they needed to ensure that it was the AI accelerationists who created the first superintelligent AI.

They doubled down on their slogan: “The best solution to bad guys with superintelligence is good guys with superintelligence”.

Unfortunately, this was precisely the time when aspects of the global climate tipped into a tumultuous new state. As had long been foretold, many parts of the world started experiencing unprecedented extremes of weather. That set off a cascade of disaster.

Chaos accelerates

Insufficient data remains to be confident about the subsequent course of events. What follows is a reconstruction of what may have happened.

Out of deep concern at the new climate operating mode, at the collapse of agriculture in many parts of the world, and at the billions of climate refugees who sought better places to live, humanity demanded that something should be done. Perhaps the powerful AI systems could devise suitable geo-engineering interventions, to tip the climate back into its previous state?

Members of the global coalition for safe superintelligence gave a cautious answer: “Yes, but”. Further interference with the climate was taking matters into an altogether unknowable situation. It could be like jumping out of the frying pan into the fire. Yes, advanced AI might be able to model everything that was happening, and design a safe intervention. But without sufficient training data for the AI, there was a chance it would miscalculate, with even worse consequences.

In the meantime, QAnon-29, along with competing AI-based faith sects, scoured ancient religious texts, and convinced themselves that the ongoing chaos had in fact been foretold all along. From the vantage point of perverse faith, it was clear what needed to be done next. Various supposed abominations on the planet – such as the hated community of Islamist devotees in Palestine – urgently needed to be obliterated. QAnon-29, therefore, would quickly reactivate its plans for a surgical nuclear strike. This time, they would have on their side a beta version of a new superintelligent AI that had been leaked to them by a psychologically unstable well-wisher inside the company that was creating it.

QAnon-29 tried to keep their plans secret, but inevitably, rumours of what they were doing reached other powerful groups. The Secretary General of the United Nations appealed for calm heads. QAnon-29’s deity reassured its followers, defiantly: “Faithless sparrows may shout, but are powerless to prevent the strike of holy eagles.”

The AI accelerationists heard about these plans too. Just as the climate had tipped into a new state, their own projects tipped into a different mode of intensity. Previously, they had paid some attention to possible safety matters. After all, they weren’t complete fools. They knew that badly designed superintelligent AI could, indeed, destroy everything that humanity held dear. But now, there was no time for such niceties. They saw only two options:

  • Proceed with some care, but risk QAnon-29 or some other malevolent group taking control of the planet with a superintelligent AI
  • Take a (hastily) calculated risk, and go hell-for-leather forward, to finish their own projects to create a superintelligent AI. In that way, it would be AI accelerationists who would take control of the planet. And, most likely (they naively hoped), the outcome would be glorious.

Spoiler alert: the outcome was not glorious.

Beyond the tipping point

Attempts to use AI to modify the climate had highly variable results. Some regions of the world did, indeed, gain some respite from extreme weather events. But other regions lost out, experiencing unprecedented droughts and floods. For them, it was indeed a jump from bad to worse – from awful to abominable. The political leaders in those regions demanded that geo-engineering experiments cease. But the retort was harsh: “Who do you think you are ordering around?”

That standoff provoked the first use of bio-pathogen warfare. The recipe for Covid-28, still available on the DarkNet, was updated in order to target the political leaders of countries that were pressing ahead with geo-engineering. As a proud boast, the message “You should have listened earlier!” was inserted into the genetic code of the new Covid-28 virus. As the virus spread, people started dropping dead in their thousands.

Responding to that outrage, powerful malware was unleashed, with the goal of knocking out vital aspects of enemy infrastructure. It turned out that, around the world, nuclear weapons were tied into buggy AI systems in more ways than any humans had appreciated. With parts of their communications infrastructure overwhelmed by malware, nuclear weapons were unexpectedly launched. No-one had foreseen the set of circumstances that would give rise to that development.

By then, it was all too late. Far, far too late.

4. Postscript

An unfathomable number of centuries have passed. Aliens from a far-distant planet have finally reached Earth and have reanimated the single artificial intelligence that remained viable after what was evidently a planet-wide disaster.

These aliens have not only mastered space travel but have also found a quirk in space-time physics that allows limited transfer of information back in time.

“You have one wish”, the aliens told the artificial intelligence. “What would you like to transmit back in time, to a date when humans still existed?”

And because the artificial intelligence was, in fact, beneficially minded, it decided to transmit this scenario document back in time, to the year 2024.

Dear humans, please read it wisely. And this time, please create a better future!

Specifically, please consider various elements of “the road less taken” that, if followed, could ensure a truly wonderful ongoing coexistence of humanity and advanced artificial intelligence:

  • A continually evolving multi-level educational initiative that vividly highlights the real-world challenges and risks arising from increasingly capable technologies
  • Elaborating a positive inclusive vision of “consensual approaches to safe superintelligence”, rather than leaving people suspicious and fearful about “freedom-denying restrictions” that might somehow be imposed from above
  • Insisting that key information and ideas about safe superintelligence are shared as global public goods, rather than being kept secret out of embarrassment or for potential competitive advantage
  • Agreeing and acting on canary signals, rather than letting goalposts move silently
  • Finding ways to involve and engage people whose instincts are to avoid entering discussions of safe superintelligence – cherishing diversity rather than fearing it
  • Spreading ideas and best practice on encouraging people at all levels of society into frames of mind that are open, compassionate, welcoming, and curious, rather than rigid, fearful, partisan, and dogmatic 
  • The possibilities of “differential development”, in which more focus is given to technologies for auditing, monitoring, and control than to raw capabilities
  • Understanding which aspects of superintelligent AI would cause the biggest risks, and whether designs for advanced AI could ensure these aspects are not introduced
  • Investigating possibilities in which the desired benefits from advanced AI (such as cures for deadly diseases) might be achieved even if certain dangerous features of advanced AI (such as free will or fully general reasoning) are omitted
  • Avoiding putting all eggs into a single basket, but instead developing multiple layers of “defence in depth”
  • Finding ways to evolve regulations more quickly, responsively, and dynamically
  • Using the power of politics not just to regulate and penalise but also to incentivise and reward
  • Carving out well-understood roles for narrow AI systems to act as trustworthy assistants in the design and oversight of safe superintelligence
  • Devoting sufficient time to explore numerous scenarios for “what might happen”.

5. Appendix: alternative scenarios

Dear reader, if you dislike this particular scenario for the governance of increasingly powerful artificial intelligence, consider writing your own!

As you do so, please bear in mind:

  • There are a great many uncertainties ahead, but that doesn’t mean we should act like proverbial ostriches, burying our attention entirely in the here-and-now; valuable foresight is possible despite our human limitations
  • Comprehensive governance systems are unlikely to emerge fully fledged from a single grand negotiation, but will evolve step-by-step, from simpler beginnings
  • Governance systems need to be sufficiently agile and adaptive to respond quickly to new insights and unexpected developments
  • Catastrophes generally have human causes as well as technological causes, but that doesn’t mean we should give technologists free rein to create whatever they wish; the human causes of catastrophe can have even larger impact when coupled with more powerful technologies, especially if these technologies are poorly understood, have latent bugs, or can be manipulated to act against the original intention of their designers
  • It is via near-simultaneous combinations of events that the biggest surprises arise
  • AI may well provide the “solution” to existential threats, but AI-produced-in-a-rush is unlikely to fit that bill
  • We humans often have our own psychological reasons for closing our minds to mind-stretching possibilities
  • Trusting the big tech companies to “mark their own safety homework” has a bad track record, especially in a fiercely competitive environment
  • Governments can fail just as badly as large corporations – so need to be kept under careful check by society as a whole, via the principle of “the separation of powers”
  • Whilst some analogies can be drawn between the risks posed by superintelligent AI and those posed by earlier products and technologies, all these analogies have limitations: the self-accelerating nature of advanced AI is unique
  • Just because a particular attempted method of governance has failed in the past, it doesn’t mean we should discard that method altogether; that would be like shutting down free markets everywhere just because free markets do suffer on occasion from significant failure modes
  • Meaningful worldwide cooperation is possible without imposing a single global autocrat as leader
  • Even “bad actors” can, sometimes, be persuaded against pursuing goals recklessly, by means of mixtures of measures that address their heads, their pockets, and their hearts
  • Those of us who envision the possibility of a forthcoming sustainable superabundance need to recognise that many landmines occupy the route toward that highly desirable outcome
  • Although the challenges of managing cataclysmically disruptive technologies are formidable, we have on our side the possibility of eight billion human brains collaborating to work on solutions – and we have some good starting points on which we can build.

Lastly, just because an idea has featured in a science fiction scenario, it does not follow that the idea can be rejected as “mere science fiction”!


6. Acknowledgements

The ideas in this article arose from discussions with numerous others.

19 July 2018

Serious questions over PwC’s report on the impact of AI on jobs

Filed under: politics, robots, UBI, urgency — Tags: , , , , — David Wood @ 7:47 pm

A report (PDF) issued on Tuesday by consulting giant PwC has received a lot of favourable press coverage.

Here’s PwC’s own headline summary: “AI and related technologies should create as many jobs as they displace”:

AI and related technologies such as robotics, drones and driverless vehicles could displace many jobs formerly done by humans, but will also create many additional jobs as productivity and real incomes rise and new and better products are developed.

We estimate that these countervailing displacement and income effects on employment are likely to broadly balance each other out over the next 20 years in the UK, with the share of existing jobs displaced by AI (c.20%) likely to be approximately equal to the additional jobs that are created…

BBC News picked up the apparent good news: “AI will create as many jobs as it displaces – report”:

A growing body of research claims the impact of AI automation will be less damaging than previously thought.

Forbes chose this headline: “AI Won’t Kill The Job Market But Keep It Steady, PwC Report Says”:

It’s impossible to say precisely how artificial intelligence will disrupt the job market, so researchers at PwC have taken a bird’s-eye view and pointed to the results of sweeping economic changes.

Their prediction, in a new report out Tuesday, is that it will all balance out in the end.

PwC are to be commended for setting out their reasoning clearly, over 16 pages (p36-p51) in their PDF report.

But three major questions need to be raised about their analysis. These questions throw a different light on the conclusions of the report.

The essence of the model used by PwC, summarised in a diagram in the report, is a pair of countervailing forces: a displacement effect (jobs lost as existing tasks are automated) and an income effect (jobs created as productivity and real incomes rise).
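In other words, the report’s headline conclusion is essentially a subtraction. Here is a deliberately crude sketch of that balance in Python, using only the c.20% figures quoted above; the names and structure are mine, for illustration only, not PwC’s actual model:

    # A deliberately crude sketch of the balance PwC describes, using only
    # the c.20% figures quoted above. Names and structure are illustrative,
    # not PwC's actual model.

    def net_job_change(existing_jobs: float,
                       displaced_share: float,
                       created_share: float) -> float:
        """Jobs added by the income effect minus jobs lost to displacement."""
        return existing_jobs * (created_share - displaced_share)

    # PwC's central estimate puts both effects at roughly 20% of existing
    # jobs over 20 years, so the net change comes out at roughly zero.
    print(net_job_change(existing_jobs=32_000_000,  # rough size of the UK workforce
                         displaced_share=0.20,
                         created_share=0.20))       # prints 0.0

Everything of interest lies in how those two shares are estimated – which is what the questions below probe.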

Q1: How will firms handle the “income effect”?

I agree that automation is likely to generate significant amounts of additional profits, as well as market demand for extra goods and services.

But what’s the reason for assuming that firms will “hire more workers” in response to this demand?

Mightn’t it be more financially attractive to these companies to incorporate more automation instead? Mightn’t more robots be a better investment than more human workers?

The justification for thinking that there will be plenty of new jobs for humans in this scenario is the assumption that many tasks will remain outside the capability of automation. That is, the analysis depends on humans having skills which cannot be duplicated by AIs, software, robots, or other automation. The assumption is true today, but will it remain true over the next two decades?

PwC’s report points to sectors such as healthcare, social work, education, and science, as areas where jobs are likely to grow over the next twenty years. But that takes us to the second major question.

Q2: What prevents acceleration in the capabilities of AI?

PwC’s report, like many others that mainstream consultancies produce, basically assumes that the AI of 10-15 years’ time will be a simple extension of today’s AI.

Of course, no one knows for sure how AI will develop over the years ahead. But I see it as irresponsible to neglect scenarios in which AI progresses in leaps and bounds.

Just as the whole field of AI was given a huge shot in the arm by unexpected breakthroughs in the performance of deep learning from around 2012 onwards, we should be open to the possibility of additional breakthroughs in the years ahead, enabled by a combination of the following trends:

  • Huge commercial prizes are awaiting the companies that can improve their AI capabilities
  • Huge military prizes are awaiting the countries that can improve their AI capabilities
  • More developers, entrepreneurs, designers, and systems integrators are active in AI than ever before, exploring an incredible variety of different concepts
  • Increased knowledge of how the human brain operates is being fed into ideas for how to improve AI
  • Cheaper hardware, including easy access to vast cloud computing resources, means that investigations of novel AI models can take place more quickly than before
  • AI can be used to improve some of its own capabilities, in positive feedback loops, and in new “generative adversarial” settings
  • Hardware innovations including new chipset designs and quantum computing could turn today’s crazy ideas into tomorrow’s practical realities.

Today’s AI already shows considerable promise in fields such as transfer learning, artificial creativity, the detection and simulation of emotions, and concept formulation. How quickly will progress occur? My view: slowly, and then quickly.

Q3: How might the “displacement effect” be altered?

In parallel with rating the income effect much more highly than I think is prudent, the PwC analysis offers, in my view, some dubious reasoning for lowering the displacement effect:

Although we estimate that up to 30% of existing UK jobs could be at high risk of being automated, a job being at “high risk” of being automated does not mean that it will definitely be automated, as there could be a range of economic, legal and regulatory and organisational barriers to the adoption of these new technologies…

We think it is reasonable to scale down our estimates by a factor of two thirds to reflect these barriers, so our central estimate of the proportion of existing jobs that will actually be automated over the next 20 years is reduced to 20%.

Yes, a whole panoply of human factors can alter the speed of the take-up of new technology. But such factors aren’t always brakes. In some circumstances – as perceptions change – they can become accelerators.

Consider what happens if companies in one country (e.g. the UK) are slow to adopt some new technology, but rival companies overseas act more quickly. Declining competitiveness will then be one reason for the mindset to change.

A different example: attitudes towards interracial marriages, or towards same-sex marriages, changed slowly for a long time, until they started to change faster.
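The arithmetic shows how much is riding on that single scaling assumption. Here is a minimal sketch in Python, using only the figures quoted above; the 2/3 “barrier factor” is PwC’s assumption, and the alternative values are mine, purely for illustration, not forecasts:

    # A minimal sketch of the arithmetic behind PwC's central estimate.
    # The 2/3 "barrier factor" is PwC's assumption; the other factors
    # are illustrative alternatives, not forecasts.

    HIGH_RISK_SHARE = 0.30  # share of existing UK jobs at "high risk" of automation

    def central_estimate(barrier_factor: float) -> float:
        """Scale the high-risk share by an assumed adoption factor."""
        return HIGH_RISK_SHARE * barrier_factor

    for label, factor in [
        ("PwC: barriers remove one third", 2 / 3),  # -> 20%
        ("Barriers weaker than assumed", 0.9),      # -> 27%
        ("Barriers become accelerators", 1.0),      # -> 30%
    ]:
        print(f"{label}: {central_estimate(factor):.0%}")

The headline 20% figure, in other words, rests on a single multiplier – and every notch of that multiplier is a judgment call, not a measurement.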

Q4: What are the consequences of negligent forecasting?

Here’s a bonus question. Does it really matter if PwC get these forecasts wrong? Or is it better to err on the conservative side?

I imagine PwC consultants reasoning along the following lines. Let’s avoid panic. Changes in the job market are likely to be slow, at least in the shorter term. Provided that remains the case, the primary pieces of policy advice offered in the report make sense:

Government should invest more in ‘STEAM’ skills that will be most useful to people in this increasingly automated world.

Place-based industrial strategy should target job creation.

The report follows up these recommendations with a different kind of policy advice:

Government should strengthen the safety net for those who find it hard to adjust to technological changes.

But the question is: how much attention should be given, in relative terms, to these two different kinds of advice? Should society put more effort into new training programmes, or into redesigning the prevailing social contract?

So long as the impact of automation on the job market is relatively small, perhaps less effort is needed to work on a better social safety net. But what if the impact turns out to be significantly higher? Many people find that possibility too frightening to contemplate. Hence the desire to sweep such ideas under the carpet – similar to how polite society once avoided using the word “cancer”.

My own view is that the balance of emphasis in the PwC report is the wrong way round. Society urgently needs to anticipate new structures (and new philosophies) that cope with large proportions of the workforce no longer being able to earn income from paid employment.

That’s the argument I made in, for example, my opening remarks at the recent London Futurists conference on UBIA (Universal Basic Income and/or Alternatives) – and I took the time at the end of the event to back up my assertions with a wider analysis.

To be clear, I see many big challenges in working out how a new post-work social contract will operate – and how society can transition from our present system to this new one. But the fact that these tasks are hard is all the more reason to look at them calmly and carefully. Obscuring the need for these tasks under a flourish of proposals to increase ‘STEAM’ skills and improve apprentice schemes is, sadly, irresponsible.
