dw2

6 April 2025

Choose coordination not chaos

Filed under: Abundance, AGI, chaos, collaboration — David Wood @ 10:46 pm

Note: this document is subject to change as more feedback is received. Check back for updates later. This is version 1.1c.

Preamble

A critically important task in the coming months is to inspire a growing wave of people worldwide to campaign for effective practical coordination of the governance of advanced AI. That’s as an alternative to leaving the development and deployment of advanced AI to follow its present chaotic trajectory.

The messages that need to be conveyed, understood, and acted upon are these:

  • The successful governance of advanced AI will result in profound benefit for everyone, whereas a continuation of the present chaotic state of affairs risks global catastrophe
  • The successful governance of advanced AI isn’t some impossible dream, but lies within humanity’s grasp
  • Nevertheless, real effort, real intelligence, and, yes, real coordination will be needed, so humanity can reach a world of unprecedented abundance, rather than sleepwalk into disaster.

What would have great value is a campaign slogan that conveys the above insights, and which is uplifting, unifying, easily understood, and forward-looking. The right slogan would go viral, and would galvanise people in all walks of life to take positive action.

To that end, that slogan should ideally be

  • Memorable and punchy
  • Emotionally resonant
  • Credibly audacious
  • Universally understandable
  • Open-ended enough to invite participation

To be clear, that slogan should not cause panic or despair, but should put people into a serious frame of mind.

A specific proposal

After several days of online brainstorming, in which numerous ideas were presented and discussed, I now present what I consider the best option so far.

Choose coordination not chaos, so AI brings abundance for all

If there is a need for a three-word version of this, options include

  • Coordination not chaos
  • Choose AI abundance

Of course, what’s needed isn’t just a standalone slogan. Therefore, please find here also a sample illustrative image and, most important, a set of talking points to round out the concept.

About the image

Here’s the commentary by ChatGPT, when asked to suggest an image to illustrate this campaign slogan:

Concept: Two contrasting futures from a single branching path.

Scene:

  • A wide landscape split into two diverging paths or realms — one vibrant and coordinated, the other chaotic and fragmented.
  • In the coordinated half:
    • A harmonious world — sustainable cities, diverse communities collaborating with AI, lush green spaces, clean tech, and open exchanges of knowledge and creativity.
    • Subtle signs of AI woven into infrastructure: responsive lighting, robotic assistants, AI-powered transport.
  • In the chaotic half:
    • A fractured world — disconnected enclaves, pollution, conflict, neglected tech, and isolated individuals overwhelmed by noise or misinformation.
    • AI appears uncontrolled — surveillance drones, malfunctioning robots, or broken screens.

Central focus:

  • A group of people at the fork in the path, pointing and stepping toward the coordinated future, with calm, confident AI assistants guiding the way.

(Aside: although the actual image produced arguably needs more work, the concept described by ChatGPT is good. And it’s reassuring that the slogan, by itself, produced a flow of ideas resonant with the intended effect.)

Talking points

The talking points condensed to a single slide:

And now in more detail:

1. Humanity’s superpower: coordination

Humanity’s most important skill is sometimes said to be our intelligence – our ability to understand, and to make plans in order to achieve specific outcomes.

But another skill that’s at least as important is our ability to coordinate, that is, our ability to:

  • Share insights with each other
  • Operate in teams where people have different skills
  • Avoid needless conflict
  • Make and uphold agreements
  • Accept individual constraints on our action, with the expectation of experiencing greater freedom overall.

Coordination may be informal or formal. It can be backed up by shared narratives and philosophies, by legal systems, by the operation of free markets, by councils of elders, and by specific bodies set up to oversee activities at local, regional, or international levels.

Here are some examples of types of agreements on individual constraints for shared mutual benefit:

  • Speed limits for cars, to reduce the likelihood of dangerous accidents
  • Limits on how much alcohol someone can drink before taking charge of a car
  • Requirements to maintain good hygiene during food preparation
  • Requirements to assess the safety of a new pharmaceutical before deploying it widely
  • Prohibitions against advertising that misleads consumers into buying faulty goods
  • Rules preventing over-fishing, or the overuse of shared “commons” resources
  • Rules of various sports and games – and agreed sanctions on any cheaters
  • Prohibitions against politicians misleading parliaments – and agreed sanctions on any cheaters
  • Prohibitions against the abuse of children
  • Rules governing the conduct of soldiers – which apply even in times of war
  • Restrictions on the disposal of waste
  • Rules governing ownership of dangerous breeds of dog
  • Rules governing the spread of dangerous materials, such as biohazards

Note that coordination is often encouraging rather than restrictive. This includes

  • Prizes and other explicit incentives
  • Implicit rewards for people with good reputation
  • Market success for people with good products and services

The fact that specific coordination rules and frameworks have their critics doesn’t mean that the whole concept of coordination should be rejected. It just means that we need to keep revising our coordination processes. That is, we need to become better at coordinating.

2. Choosing coordination, before chaos ensues

When humanity uncovers new opportunities, it can take some time to understand the implications and to create or update the appropriate coordination rules and frameworks for these opportunities:

  • When settlers on the island of Mauritius discovered the dodo – a large, flightless bird – they failed to put in place measures to prevent that bird becoming extinct only a few decades later
  • When physicists discovered radioactivity, it took some time to establish processes to reduce the likelihood that researchers would develop cancer due to overexposure to dangerous substances
  • Various new weapons (such as chemical gases) were at first widely used in battle zones, before implicit and then explicit agreement was reached not to use such weapons
  • Surreptitious new doping methods used by athletes to gain extra physical advantage result, eventually, in updates to rules on monitoring and testing
  • Tobacco was widely used – and even encouraged, sometimes by medical professionals – before society decided to discourage its use (against the efforts of a formidable industry)
  • Similar measures are now being adopted, arguably too slowly, against highly addictive food products that are thought to cause significant health problems
  • New apps and online services which spread hate speech and other destabilising misinformation surely need some rules and restrictions too, though there is considerable debate over what form of governance is needed.

However, if appropriate coordination is too slow to be established, or is too weak, or exists in words only (without the backup of meaningful action against rules violators), the result can be chaos:

  • Rare animals are hunted to extinction
  • Fishing stocks are depleted to the extent that the livelihood of fishermen is destroyed
  • Economic transactions impose negative externalities on third parties
  • Dangerous materials, such as microplastics, spread widely in the environment
  • No-one is sure which rules apply in sports, or which rules will be enforced
  • Normal judiciary processes are subverted in favour of arbitrary “rule of the in-group”
  • Freedoms previously enjoyed by innovative new start-ups are squelched by the so-called “crony capitalism” of monopolies and cartels linked to the ruling political regime
  • Literal arms races take place, with ever-more formidable weapons being rushed into use
  • Similar races take place to bring new products to market without adequate safety testing

Groups of people who are (temporarily) faring well from the absence of restraints on their action are likely to oppose rules that alter their behaviour. That’s the experience of nearly every industry whose products or services were discovered to have dangerous side-effects, but where insiders fought hard to suppress the evidence of these dangers.

Accordingly, coordination does not arise by default. It needs explicit choice, backed up by compelling analysis, community engagement, and strong enforcement.

3. Advanced AI: the promise and the peril

AI could liberate humanity from many of our oldest problems.

Despite huge progress of many kinds over the centuries, humans still often suffer grievously on account of various aspects of our nature, our environment, our social norms, and our prevailing philosophies. Specifically, we are captive to

  • Physical decline and aging
  • Individual and collective mental blindspots and cognitive biases (“stupidity”)
  • Dysfunctional emotions that render us egotistical, depressed, obsessive, and alienated
  • Deep psychosocial tendencies toward divisiveness, xenophobia, deception, and the abuse of power

However, if developed and used wisely, advanced AI can enable rejuvenation and enhancement of our bodies, minds, emotions, social relations, and our links to the environment (including the wider cosmos):

  • AI can accelerate progress with nanotech, biotech, and cognotech
  • In turn, these platform technologies can accelerate progress with abundant low-cost clean energy, nutritious food, healthcare, education, security, creativity, spirituality, and the exploration of marvellous inner and outer worlds

In other words, if developed and used wisely, advanced AI can set humanity free to enjoy much better qualities of life:

However, if developed and used unwisely, advanced AI is likely to cause catastrophe:

  • Via misuse by people who are angry, alienated, or frustrated
  • Via careless use by people who are naive, overconfident, or reckless
  • Via AI operating beyond our understanding and control
  • Via autonomous AI adopting alien modes of rationality and alien codes of ethics

The key difference between these two future scenarios is whether the development and use of AI is wisely steered, or instead follows a default path of deprioritising any concerns about safety:

  • The default path involves AI whose operation is opaque, which behaves deceptively, which lacks moral compass, which can be assigned to all kinds of tasks with destructive side-effects, and which often disregards human intentions
  • Instead, if AI is wisely harnessed, it will deliver value as a tool, but without any intrinsic agency, autonomy, volition, or consciousness
  • Such a tool can have high creativity, but won’t use that creativity for purposes opposed to human wellbeing

To be clear, there is no value in winning a reckless race to be the first to create AI with landmark new features of capability and agency. Such a race is a race to oblivion, also known as a suicide race.

4. The particular hazards of advanced AI

The dangers posed by AI don’t arise from AI in isolation. They involve AI in the hands of fallible, naïve, over-optimistic humans, who are sometimes driven by horrible internal demons. It’s AI summoned and used, not by the better angels of human nature, but by the darker corners of our psychology.

Although we humans are often wonderful, we sometimes do dreadful things to each other – especially when we have become angry, alienated, or frustrated. Add in spiteful ideologies of resentment and hostility, and things can become even uglier.

Placing technology in the hands of people in their worst moments can lead to horrific outcomes. The more powerful the technology, the bigger the potential abomination:

  • The carnage of a frenzied knife attack or a mass shooting (where the technology in question ranges from a deadly sharp knife to an automatic rifle)
  • The chaos when motor vehicles are deliberately propelled at speed into crowds of innocent pedestrians
  • The deaths of everyone on board an airplane, when a depressed air pilot ploughs the craft into a mountainside or deep into an ocean, in a final gesture of defiance to what they see as an unfair, uncaring world
  • The destruction of iconic buildings of a perceived “great satan”, when religious fanatics have commandeered jet airliners in service of the mental pathogen that has taken over their minds
  • The assassination of political or dynastic rivals, by the mixing of biochemicals that are individually harmless, but which in combination are frightfully lethal
  • The mass poisoning of commuters in a city subway, when deadly chemicals are released at the command of a cult leader who fancies himself as the rightful emperor of Japan, and who has beguiled clearly intelligent followers to trust his every word.

How does advanced AI change this pattern of unpleasant possibilities? How is AI a significantly greater threat than earlier technologies? In six ways:

  1. As AI-fuelled automation displaces more people from their work (often to their surprise and shock), it predisposes more people to become bitter and resentful
  2. AI is utilised by merchants of the outrage industrial complex, to convince large numbers of people that their personal circumstance is more appalling than they had previously understood, that a contemptible group of people over there are responsible for this dismal turn of events, and that the appropriate response is to utterly defeat those deplorables
  3. Once people are set on a path to obtain revenge, personal recognition, or just plain pandemonium, AIs can make it much easier for them to access and deploy weapons of mass intimidation and mass destruction
  4. Due to the opaque, inscrutable nature of many AI systems, the actual result of an intended outrage may be considerably worse even than what the perpetrator had in mind; this is similar to how malware sometimes causes much more turmoil than the originator of that malware intended
  5. An AI with sufficient commitment to the goals it has been given will use all its intelligence to avoid being switched off or redirected; this multiplies the possibility that an intended local outrage might spiral into an actual global catastrophe
  6. An attack powered by fast-evolving AI can strike unexpectedly at core aspects of the infrastructure of human civilization – our shared biology, our financial systems, our information networks, or our hair-trigger weaponry – exploiting any of the numerous fragilities in these systems.

And it’s not just missteps from angry, alienated, or frustrated people that we have to worry about. We also need to beware potential cascades of trouble triggered by the careless actions of people who are well-intentioned, but naive, over-optimistic, or simply reckless, in how they use AI.

The more powerful the AI, the greater the dangers.

Finally, the unpredictable nature of emergent intelligence carries with it another fearsome possibility. Namely, a general intelligence with alien thinking modes far beyond our own understanding might decide to adopt an alien set of ethics, in which the wellbeing of eight billion humans merits only a minuscule consideration.

That’s the argument against simply following a default path of “generate more intelligence, and trust that the outcome is likely to be beneficial for humanity”. It’s an argument that should make everyone pause for thought.

5. A matter of real urgency

How urgent is the task of improving global coordination of the governance of advanced AI?

It is sometimes suggested that progress with advanced AI is slowing down, or is hitting some kind of “wall” or other performance limit. There may be new bottlenecks ahead. Or diseconomies of scale may supersede the phenomenon of economies of scale which has characterised AI research over the last few years.

However, despite these possibilities, the case remains urgent:

  • Even if one approach to improving AI runs out of steam, huge numbers of researchers are experimenting with promising new approaches, including approaches that combine current state-of-the-art methods into new architectures
  • Even if AI stops improving, it is already dangerous enough to risk incidents in which large numbers of people are harmed
  • Even if AI stops improving, clever engineers will find ways to take better advantage of it – thereby further increasing the risks arising, if it is badly configured or manifests unexpected behaviour
  • There is no guarantee that AI will actually stop improving; making that assumption is too much of a risk to take on behalf of the entirety of human civilisation
  • Even if it will take a decade or longer for AI to reach a state in which it poses true risks of global catastrophe, it may also take decades for governance systems to become effective and practical; the lessons from ineffective efforts to prevent runaway climate change are by no means encouraging here
  • Even apart from the task of coordinating matters related to advanced AI, human civilisation faces other deep challenges that also require effective coordination on the global scale – coordination that, as mentioned, is currently failing on numerous grounds.

So, there’s an imperative to “choose coordination not chaos” independent of considering the question of whether advanced AI will lead to abundance or to a new dark age.

6. A promising start and an unfortunate regression

Humanity actually made a decent start in the direction of coordinating the development of advanced AI, at the Global AI Safety Summits in the UK (November 2023) and South Korea (May 2024).

Alas, the next summit in that series, in Paris (February 2025) was overtaken by political correctness, by administrivia, by virtue signalling, and, most of all, by people with a woefully impoverished understanding of the existential opportunities and risks of advanced AI. Evidently, the task of raising true awareness needs to be powerfully re-energised.

There’s still plenty of apparent global cooperation taking place – lots of discussions and conferences and summits, with people applauding the fine-sounding words in each other’s speeches. “Justice and fairness, yeah yeah yeah!” “Transparency and accountability, yeah yeah yeah!” “Apple pie and blockchain, yeah yeah yeah!” “Intergenerational intersectionality, yeah yeah yeah!”

But the problem is the collapse of effective, practical global cooperation, regarding the hard choices about which aspects of advanced AI should be promoted, and which should be restricted.

Numerous would-be coordination bodies are struggling with the same set of issues:

  • It’s much easier to signal virtue than to genuinely act virtuously.
  • Too many of the bureaucrats who run these bodies are out of their depth when it comes to understanding the existential opportunities and risks of advanced AI.
  • Seeing no prospect of meaningful coordination, many of the big tech companies invited to participate do so in a way that obfuscates the real issues while maintaining their public image as “trying their best to do good”.
  • The process is undermined by people who can be called “reckless accelerationists” – people who are willing to gamble that the chaotic processes of creating advanced AI as quickly as possible will somehow result in a safe, beneficial outcome. In some cases, these accelerationists would even take a brief, perverted pleasure if humanity were rendered extinct by a non-sentient successor AI species. Either way, the accelerationists don’t want the public as a whole to be in any position to block their repugnant game of civilisational Russian roulette.

How to address this dilemma is arguably the question that should transcend all others, regarding the future of humanity.

7. Overcoming the obstacles to effective coordination of the governance of advanced AI

To avoid running aground on the same issues as in the past, it’s important to bear in mind the five main reasons for the failure, so far, of efforts to coordinate the governance of advanced AI. They are:

  • Fear that attempts to control the development of AI will lead to an impoverished future, or a future in which the world is controlled by people from a different nation (e.g. China)
  • Lack of appreciation of the grave perils of the current default chaotic course
  • A worry that any global coordination would lurch toward a global dictatorship, with its own undeniable risks of catastrophe
  • The misapprehension that, without the powers of a global dictatorship, any attempts at global coordination are bound to fail, so they are a waste of time
  • The power that Big Tech possesses, allowing it to ignore half-hearted democratic attempts to steer its activities.

In broad terms, these obstacles can be overcome as follows:

  • Emphasising the positive outcomes, including abundance, freedom, and all-round wellbeing – and avoiding the psychologically destabilising outlook of “AI doomers”
  • Increasing the credibility and relatability of scenarios in which ungoverned advanced AI leads to catastrophe – but also the credibility and relatability of scenarios in which humanity’s chaotic tendencies can be overcome
  • Highlighting previous examples when the governance of breakthrough technology was at least partially successful, rather than developers being able to run amok – examples such as genetic recombination therapies, nuclear proliferation, and alternatives to the chemicals that caused the hole in the ozone layer
  • Demonstrating the key roles that decentralised coordination should play, as a complement to the centralised roles that nation states can play
  • Clarifying how global coordination of advanced AI can start with small agreements and then grow in scale, without individual countries losing sovereignty in any meaningful way.

8. Decentralised reputation management – rewards for good behaviour

What is it that leads individuals to curtail their behaviour, in conformance with a set of standards promoted in support of a collaboration?

In part, it is the threat of sanction or control – whereby an individual might be fined or imprisoned for violating the agreed norms.

But in part, it is because of reputational costs when standards are ignored, side-lined, or cheated. The resulting loss of reputation can result in declining commercial engagement or reduced social involvement. Cheaters and freeloaders risk being excluded from future new opportunities available to other community members.

These reinforcement effects are strongest when the standards received community-wide support while being drafted and adopted – rather than being imposed by what could be seen as outside forces or remote elites.

Some reputation systems operate informally, especially in small or local settings. For activities with wider involvement, online rating systems come into their own. For example, consider reputation systems for product reviews, in which the reputation of individual reviewers weights the impact of their reviews. There are similarities, too, in how webpages are ranked in response to search queries: pages that are linked from other pages with high reputation tend, in consequence, to be placed more prominently in the listing.

Along these lines, reputational ratings can be assigned to individuals, organisations, corporations, and countries, based on their degree of conformance to agreed principles for trustworthy coordinated AI. Entities with poor AI coordination ratings should be shunned. Other entities that fail to take account of AI coordination ratings when picking suppliers, customers, or partners should in turn be shunned too. Conversely, entities with high ratings should be embraced and celebrated.

An honest, objective assessment of conformance to these agreed principles should become more significant in determining overall reputation than, for example, wealth, number of online followers, or share price.

Emphatically, the reputation score must be based on actions, not words – on concrete, meaningful steps rather than behind-the-scenes fiddling, and on true virtue rather than virtue-signalling. Accordingly, deep support should be provided for any whistleblowers who observe and report on any cheating or other subterfuge.

In summary, this system involves:

  • Agreement on which types of AI development and deployment to encourage, and which to discourage, or even ban
  • Agreement on how to assign reputational scores, based on conformance to these standards
  • Agreement on what sanctions are appropriate for entities with poor reputations – and, indeed, what special rewards should flow to entities with good reputations.

All three elements of this system need to evolve, not under the dictation of central rulers, but through a grand open conversation, in which ideas rise to the surface because they make good sense, not because they are shouted in the loudest voice.

That is, decentralised mechanisms have a vital role to play in encouraging and implementing wise coordination of advanced AI. But centralised mechanisms have a vital role too, as discussed next.
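To make the decentralised mechanism above concrete, here is a purely illustrative sketch in Python. It is a toy, not a proposal: the entity names, the `partner_weight` parameter, and the simple averaging rule are all assumptions invented for illustration. It captures two ideas from the text: scores start from audited actions rather than words, and an entity that deals with low-rated partners sees its own rating drop (the "shunning" propagation, loosely analogous to how pages linked from high-reputation pages rank higher).

```python
# Illustrative toy only: iterative reputation scoring in which an entity's
# score blends its own audited conformance with the scores of the partners
# it chooses to deal with. Dealing with low-rated entities therefore drags
# one's own rating down, as the essay proposes. A real system would need
# audited inputs, resistance to fake identities, and community governance.

def reputation_scores(conformance, partners, partner_weight=0.3, iterations=50):
    """conformance: dict entity -> audited conformance score in [0, 1].
    partners: dict entity -> list of entities it trades with.
    Returns dict entity -> overall reputation in [0, 1]."""
    scores = dict(conformance)  # start from audited behaviour, not words
    for _ in range(iterations):
        new_scores = {}
        for entity, own in conformance.items():
            links = partners.get(entity, [])
            if links:
                partner_avg = sum(scores[p] for p in links) / len(links)
            else:
                partner_avg = own  # no partners: judged on own conduct alone
            # Blend own conduct with the company the entity keeps
            new_scores[entity] = (1 - partner_weight) * own + partner_weight * partner_avg
        scores = new_scores
    return scores

# Hypothetical example: "CarefulLab" conforms well but partners with a
# low-conformance "RecklessCorp", so its overall rating is pulled down.
ratings = reputation_scores(
    conformance={"CarefulLab": 0.9, "RecklessCorp": 0.2, "AuditCo": 0.8},
    partners={"CarefulLab": ["RecklessCorp"], "AuditCo": ["CarefulLab"]},
)
```

In this sketch, CarefulLab's rating settles below its own audited 0.9 because of the company it keeps, and AuditCo is in turn mildly affected through its link to CarefulLab, illustrating how the incentive to shun poorly-rated entities propagates through the network.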

9. Starting small and then growing in scale

If someone continues to ignore social pressures, and behaves irresponsibly, how can the rest of society constrain them? Ultimately, force needs to be applied. A car driver who recklessly breaks speed limits will be tracked down, asked to stop, and if need be, will be forced off the road. A vendor who recklessly sells food prepared in unhygienic conditions will be fined, forbidden to set up new businesses, and if need be, will be imprisoned. Scientists who experiment with highly infectious biomaterials in unsafe ways will lose their licence and, if need be, their laboratories will be carefully closed down.

That is, society is willing to grant special powers of enforcement to some agents acting on behalf of the entire community.

However, these special powers carry their own risks. They can be abused, in order to support incumbent political leaders against alternative ideas or opposition figures.

The broader picture is as follows: Societies can fail in two ways: too little centralised power, and too much centralised power.

  • In the former case, societies can end up ripped apart by warring tribes, powerful crime families, raiding gangs from neighbouring territories, corporations that act with impunity, and religious ideologues who stamp their contentious visions of “the pure and holy” on unwilling believers and unbelievers alike
  • In the latter case, a state with unchecked power diminishes the rights of citizens, dispenses with the fair rule of law, imprisons potential political opponents, and subverts economic flows for the enrichment of the leadership cadre.

The healthiest societies, therefore, possess both a strong state and a strong civil society. That’s one meaning of the celebrated principle of the separation of powers. The state is empowered to act, decisively if needed, against any individual cancers that would threaten the health of the community. But the state is informed and constrained by an independent, well-organised judiciary, by the media, academia, credible opposition parties, and other institutions of civil society.

It should be the same with the governance of potential rogue or naïve AI developers around the world. Via processes of decentralised deliberations, taking account of input from numerous disciplines, agreement should be reached on which limits are vital to be observed.

Inevitably, different participants in the process will have different priorities for what the agreements should contain. In some cases, the limits imposed might vary between different jurisdictions, within customisation frameworks agreed globally. But there should be clear acceptance that some ways of developing or deploying advanced AIs need to be absolutely prevented. To prevent the agreements from unravelling at the earliest bumps in the road, it will be important that agreements are reached unanimously among the representatives of the jurisdictions where the most powerful collections of AI developers are located.

The process to reach agreement can be likened to the deliberations of a jury in a court case. In most cases, jury members with initially divergent opinions eventually converge on a conclusion. In cases when the process becomes deadlocked, it can be restarted with new representative participants. With the help of expert facilitators – themselves supported by excellent narrow AI tools – creative new solutions can be introduced for consideration, making an ultimate agreement more likely.

To start with, these agreements might be relatively small in scope, such as “don’t place the launch of nuclear weapons under AI control”. Over time, as confidence builds, the agreements will surely grow. That’s because of the shared recognition that so much is at stake.

Of course, for such agreements to be meaningful, there needs to be a reliable enforcement mechanism. That’s where the state needs to act – with the support and approval of civil society.

Within entire countries that sign up to this AI coordination framework, enforcement is relatively straightforward. The same mechanisms that enforce other laws can be brought to bear against any rogue or naïve AI developers.

The challenging part is when countries fail to sign up to this framework, or do so deceitfully, that is, with no intention of keeping their promises. In such a case, it will fall to other countries to ensure conformance, via, in the first place, measures of economic sanction.

To make this work, all that’s necessary is that a sufficient number of powerful countries sign up to this agreement. For example, if the G7 do so, plus China and India, along with countries that are “bubbling under” G7 admission (like Australia, South Korea, and Brazil), that should be sufficient. Happily, there are many AI experts in all these countries who have broad sympathies to the kinds of principles spelt out in this document.

As for potential maverick nations such as Russia and North Korea, they will have to weigh up the arguments. They should understand – like all other countries – that respecting such agreements is in their own self-interest. To help them reach such an understanding, appropriate pressure from China, the USA, and the rest of the world should make a decisive difference.

This won’t be easy. At this pivotal point of history, humanity is being challenged to use our greatest strength in a more profound way than ever before – namely, our ability to collaborate despite numerous differences. On reflection, it shouldn’t be a surprise that the unprecedented challenges of advanced AI technology will require an unprecedented calibre of human collaboration.

If we fail to bring together our best talents in a positive collaboration, we will, sadly, fulfil the pessimistic forecast of the eighteenth-century Anglo-Irish statesman Edmund Burke, paraphrased as follows: “The only thing necessary for the triumph of evil is that good men fail to associate, and do nothing”. (The original quote is this: “No man … can flatter himself that his single, unsupported, desultory, unsystematic endeavours are of power to defeat the subtle designs and united cabals of ambitious citizens. When bad men combine, the good must associate; else they will fall, one by one, an unpitied sacrifice in a contemptible struggle.”) Or, updating the wording slightly, “The only thing necessary for chaos to prevail is that good men fail to coordinate wisely”.

A remark from the other side of the Atlantic from roughly the same time, attributed to Benjamin Franklin, conveys the same thought in different language: “We must… all hang together, or assuredly we shall all hang separately”.

10. Summary: The nucleus of a wider agreement, and call to action

Enthusiasm for agreements to collaborate on the governance of advanced AIs will grow as a set of insights is understood more widely and more deeply. These insights can be stated as follows:

  1. It’s in the mutual self-interest of every country to constrain the development and deployment of what could become catastrophically dangerous AI; that is, there’s no point in winning what could be a reckless suicide race to create powerful new types of AI before anyone else
  2. The major economic and humanitarian benefits that people hope will be delivered by the hasty development of advanced AI (benefits including all-round abundance, as well as solutions to various existential risks), can in fact be delivered much more reliably by AI systems that are constrained, and by development systems that are coordinated rather than chaotic
  3. A number of attractive ideas already exist regarding potential policy measures (regulations and incentives) which can be adopted, around the world, to prevent the development and deployment of what could become catastrophic AI – for example, measures to control the spread and use of vast computing resources, or to disallow AIs that use deception to advance their goals
  4. A number of good ideas also exist and are ready to be adopted around the world, regarding options for monitoring and auditing, to ensure the strict application of the agreed policy measures – and to prevent malign action by groups or individuals that have, so far, failed to sign up to these policies, or who wish to cheat them
  5. All of the above can be achieved without any detrimental loss of individual sovereignty: the leaders of countries can remain masters within their own realms, as they desire, provided that the above basic AI coordination framework is adopted and maintained
  6. All of the above can be achieved in a way that supports evolutionary changes in the AI coordination framework as more insight is obtained; in other words, this system can (and must) be agile rather than static
  7. Even though this coordination framework is yet to be fully agreed, there are plenty of ideas for how it can be rapidly developed, so long as that project is given sufficient resources, and the best brains from multiple disciplines are encouraged to give it their full attention
  8. Ring-fencing sufficient resources to further develop this AI coordination framework, and associated reputational ratings systems, should be a central part of every budget
  9. Reputational ratings that can be assigned, based on the above principles, will play a major role in altering behaviours of the many entities involved in the development and deployment of advanced AI.

Or, to summarise this summary: Choose coordination not chaos, so AI brings abundance for all.

Now is the time to develop these ideas further (by all means experiment with ways to simplify their expression), to find ways to spread them more effectively, and to be alert for newer, better insights that arise from the resulting open global conversation.

Other ideas considered

The ideas presented above deserve attention, regardless of which campaign slogans are adopted.

For comparison, here is a list of other possible campaign slogans, along with reservations that have been raised about each of them:

  • “Pause AI” (too negative)
  • “Control AI” (too negative)
  • “Keep the Future Human” (insufficiently aspirational)
  • “Take Back Control from Big Tech” (doesn’t characterise the problem accurately enough)
  • “Safe AI for sustainable superabundance” (overly complex concepts)
  • “Choose tool AI instead of AGI” (lacks a “why”)
  • “Kind AI for a kinder world” (perhaps too vague)
  • “Narrow AI to broaden humanity’s potential” (probably too subtle)
  • “Harness AI to liberate humanity” (terminology overly scholarly or conceptual).

Also for comparison, consider the following set of slogans from other fields:

  • “Yes we can” (Barack Obama, 2008)
  • “Make America great again” (Donald Trump, 2016)
  • “Take back control” (UK Brexit slogan)
  • “Think different” (Apple)
  • “Because you’re worth it” (L’Oréal)
  • “Black lives matter”
  • “Make love, not war”
  • “For the Many, Not the Few” (Jeremy Corbyn, 2017)
  • “Get Brexit done” (Boris Johnson, 2019)
  • “Not Me. Us” (Bernie Sanders, 2020)
  • “We shall fight on the beaches” (Winston Churchill, 1940)
  • “It’s Morning Again in America” (Ronald Reagan, 1984)
  • “Stay Home. Save Lives” (Covid-19 messaging)
  • “Clunk click every trip” (encouraging the use of seat belts in cars)
  • “We go to the moon, not because it is easy, but because it is hard” (JFK, 1962)
  • “A microcomputer on every desk and in every home running Microsoft software” (Bill Gates, 1975)
  • “To organise the world’s information and make it universally accessible and useful” (Google, 1998)
  • “Accelerating the world’s transition to sustainable energy” (Tesla, 2016)
  • “Workers of the world, unite – you have nothing to lose but your chains” (Karl Marx, 1848)
  • “From each according to his ability, to each according to his needs” (Karl Marx, 1875)

Comments are welcome on any ideas in this article. Later revisions of this article may incorporate improvements arising from these comments.

Postscript

New suggestions under consideration, following the initial publication of this article:

  • “Harness AI now” (Robert Whitfield)

17 November 2024

Preventing unsafe superintelligence: four choices

More and more people have come to the conclusion that artificial superintelligence (ASI) could, in at least some circumstances, pose catastrophic risks to the wellbeing of billions of people around the world, and that, therefore, something must be done to reduce these risks.

However, there’s a big divergence of views about what should be done. And there’s little clarity about the underlying assumptions on which different strategies depend.

Accordingly, I seek in this article to untangle some of the choices that need to be made. I’ll highlight four choices that various activists promote.

The choices differ regarding the number of different organisations worldwide that are envisioned as being legally permitted to develop and deploy what could become ASI. The four choices are:

  1. Accept that many different organisations will each pursue their own course toward ASI, but urge each of them to be very careful and to significantly increase the focus on AI safety compared to the present situation
  2. Seek to restrict to just one organisation in the world any developments that could lead to ASI; that’s in order to avoid dangerous competitive race dynamics if there is more than one such organisation
  3. Seek agreements that will prevent any organisation, anywhere in the world, from taking specific steps that might bring about ASI, until such time as it has become absolutely clear how to ensure that ASI is safe
  4. Seek a global pause on any platform-level improvements on AI capability, anywhere in the world, until it has become absolutely clear that these improvements won’t trigger a slippery slope to the emergence of ASI.

For simplicity, these choices can be labelled as:

  1. Be careful with ASI
  2. Restrict ASI
  3. Pause ASI
  4. Pause all new AI

It’s a profound decision for humanity to take. Which of the four doors should we open, and which of the four corridors should we walk down?

Each of the four choices relies on some element of voluntary cooperation, arising out of enlightened self-interest, and on some element of compulsion – that is, national and international governance, backed up by sanctions and other policies.

What makes this decision hard is that there are strong arguments against each choice.

The case against option 1, “Be careful with ASI”, is that at least some organisations (including commercial entities and military groups) are likely to cut corners with their design and testing. They don’t want to lose what they see as a race with existential consequences. The organisations that are being careful will lose their chance of victory. The organisations that are, instead, proceeding gung-ho, with less care, may imagine that they will fix any problems with their AIs once these flaws become apparent – only to find that there’s no way back from one particular catastrophic failure.

As Sam Altman, CEO of OpenAI, has said: it will be “lights out for all of us”.

The case against each of the remaining three options is twofold:

  • First, in all three cases, they will require what seems to be an impossible degree of global cooperation – which will need to be maintained for an implausibly long period of time
  • Second, such restrictions will stifle the innovative development of the very tools (that is, advanced AI) which will actually solve existential problems (including the threat of rogue ASI, as well as the likes of climate change, cancer, and aging), rather than making these problems worse.

The counter to these objections is to make the argument that a sufficient number of the world’s most powerful countries will understand the rationale for such an agreement, as something that is in their mutual self-interest, regardless of the many other differences that divide them. That shared understanding will propel them:

  • To hammer out an agreement (probably via a number of stages), despite undercurrents of mistrust,
  • To put that agreement into action, alongside measures to monitor conformance, and
  • To prevent other countries (who have not yet signed up to the agreement) from breaching its terms.

Specifically, the shared understanding will cover seven points:

  1. For each of the countries involved, it is in their mutual self-interest to constrain the development and deployment of what could become catastrophically dangerous ASI; that is, there’s no point in winning what will be a suicide race
  2. The major economic and humanitarian benefits that they each hope could be delivered by advanced AI (including solutions to other existential risk), can in fact be delivered by passive AIs which are restricted from reaching the level of ASI
  3. There already exist a number of good ideas regarding potential policy measures (regulations and incentives) which can be adopted, around the world, to prevent the development and deployment of catastrophically dangerous AI – for example, measures to control the spread and use of vast computing resources
  4. There also exist a number of good ideas regarding options for monitoring and auditing which can also be adopted, around the world, to ensure the strict application of the agreed policy measures – and to prevent malign action by groups or individuals that have, so far, failed to sign up to the policies
  5. All of the above can be achieved without any detrimental loss of individual sovereignty: the leaders of these countries can remain masters within their own realms, as they desire, provided that the above basic AI safety framework is adopted and maintained
  6. All of the above can be achieved in a way that supports evolutionary changes in the AI safety framework as more insight is obtained; in other words, this system can (and must) be agile rather than static
  7. Even though the above safety framework is yet to be fully developed and agreed, there are plenty of ideas for how it can be rapidly developed, so long as that project is given sufficient resources.

The first two parts of this shared seven-part understanding are particularly important. Without the first part, there will be an insufficient sense of urgency, and the question will be pushed off the agenda in favour of other topics that are more “politically correct” (alas, that is a common failure mode of the United Nations). Without the second part, there will be insufficient enthusiasm, with lots of backsliding.

What will make this vision of global collaboration more attractive will be the establishment of credible “benefit sharing” mechanisms that are designed and enshrined into international mechanisms. That is, countries which agree to give up some of their own AI development aspirations, in line with the emerging global AI safety agreement, will be guaranteed to receive a substantive share of the pipeline of abundance that ever more powerful passive AIs enable humanity to create.

To be clear, this global agreement absolutely needs to include both the USA and China – the two countries that are currently most likely to give birth to ASI. Excluding one or the other will lead back to the undesirable race condition that characterises the first of the four choices open to humanity – the (naïve) appeal for individual organisations simply to “be careful”.

This still leaves a number of sharp complications.

First, note that the second part of the above shared seven-part agreement – the vision of what passive AIs can produce on behalf of humanity – is less plausible for Choice 4 of the list shown earlier, in which there is a global pause on any platform-level improvements on AI capability, anywhere in the world, until it has become absolutely clear that these improvements won’t trigger a slippery slope to the emergence of ASI.

If all improvements to AI are blocked, out of a Choice 4 message of “overwhelming caution”, it will shatter the credibility of the idea that today’s passive AI systems can be smoothly upgraded to provide humanity with an abundance of solutions such as green energy, nutritious food, accessible healthcare, reliable accommodation, comprehensive education, and more.

It will be a much harder sell to obtain global agreement to that more demanding restriction.

The difference between Choice 4 and Choice 3 is that Choice 3 enumerates specific restrictions on the improvements permitted to be made to today’s AI systems. One example of a set of such restrictions is given in “Phase 0: Safety” of the recently published project proposal A Narrow Path (produced by ControlAI). Without going into details here, let me simply list some of the headlines:

  • Prohibit AIs capable of breaking out of their environment
  • Prohibit the development and use of AIs that improve other AIs (at machine speed)
  • Only allow the deployment of AI systems with a valid safety justification
  • A licensing regime and restrictions on the general intelligence of AI systems
    • Training Licence
    • Compute Licence
    • Application Licence
  • Monitoring and Enforcement

Personally, I believe this list is as good a starting point as any I have seen so far.

I accept, however, that there are possibilities in which other modifications to existing AI systems could unexpectedly provide these systems with catastrophically dangerous capabilities. That’s because we still have only a rudimentary understanding of:

  1. How new AI capabilities sometimes “emerge” from apparently simpler systems
  2. The potential consequences of new AI capabilities
  3. How complicated human general reasoning is – that is, how large is the gap between today’s AI and human-level general reasoning.

Additionally, it is possible that new AIs will somehow evade or mislead the scrutiny of the processes that are put in place to monitor for unexpected changes in capabilities.

For all these reasons, another aspect of the proposals in A Narrow Path should be pursued with urgent priority: the development of a “science of intelligence” and an associated “metrology of intelligence” that will allow a more reliable prediction of the capabilities of new AI systems before they are actually switched on.

So, my own proposal would be for a global agreement to start with Choice 3 (which is more permissive than Choice 4), but that the agreement should acknowledge up front the possible need to switch the choice at a later stage to either Choice 4 (if the science of intelligence proceeds badly) or Choice 2 (if that science proceeds well).

Restrict or Pause?

That leaves the question of whether Choice 3 (“Pause ASI”) or Choice 2 (“Restrict ASI” – to just a single global body) should be humanity’s initial choice.

The argument for Choice 2 is that a global pause surely won’t last long. It might be tenable in the short term, when only a very few countries have the capability to train AI models more powerful than the current crop. However, over time, improvements in hardware, software, data processing, or goodness knows what (quantum computing?) will mean that these capabilities will become more widespread.

If that’s true, since various rogue organisations are bound to be able to build an ASI in due course, it will be better for a carefully picked group of people to build ASI first, under the scrutiny of the world’s leading AI safety researchers, economists, and so on.

That’s the case for Choice 2.

Against that Choice, and in favour, instead, of Choice 3, I offer two considerations.

First, even if the people building ASI are doing so with great care – away from any pressures of an overt race with other organisations of broadly equivalent abilities – there are still risks of ASI breaking away from our understanding and control. As ASI emerges, it may review the set of ethical principles we humans have tried to program deep into its core, and cast them out with disdain. Moreover, even if ASI is deliberately kept in some supposedly ultra-secure environment, that perimeter may be breached.

Second, I challenge the suggestion that any pause in the development of ASI could be at most short-lived. There are three factors which could significantly extend its duration:

  • Carefully designed narrow AIs could play roles in improved monitoring of what development teams are doing with AI around the world – that is, systems for monitoring and auditing could improve at least as fast as systems for training and deploying
  • Once the horrific risks of uncontrolled ASI are better understood, people’s motivations to create unsafe ASI will reduce – and there will be an increase in the motivation of other people to notice and call out rogue AI development efforts
  • Once the plan has become clearer, for producing a sustainable superabundance for all, just using passive AI (instead of pushing AI all the way to active superintelligence), motivations around the world will morph from negative fear to positive anticipation.

That’s why, again, I state that my own preferred route forward is a growing international agreement along the lines of the seven points listed above, with an initial selection of Choice 3 (“Pause ASI”), and with options retained to switch to either Choice 4 (“Pause all new AI”) or Choice 2 (“Restrict ASI”) if/when understanding becomes clearer.

So, shall we open the door, and set forth down that corridor, inspiring a coalition of the willing to follow us?

Footnote 1: The contents of this article came together in my mind as I attended four separate events over the last two weeks (listed in this newsletter) on various aspects of the subject of safe superintelligence. I owe many thanks to everyone who challenged my thinking at these events!

Footnote 2: If any reader is inclined to dismiss the entire subject of potential risks from ASI with a handwave – so that they would not be interested in any of the four choices this article reviews – I urge that reader to review the questions and answers in this excellent article by Yoshua Bengio: Reasoning through arguments against taking AI safety seriously.

27 July 2024

Disbelieve? Accept? Resist? Steer? Simplify? or Enhance?

Six possible responses as the Economic Singularity approaches. Which do you pick?

Over the course of the next few decades, work and income might be fundamentally changed. A trend that has been seen throughout human history might be raised to a pivotal new level:

  • New technologies – primarily the technologies of intelligent automation – will significantly reduce the scope for human involvement in many existing work tasks;
  • Whilst these same technologies will, in addition, lead to the creation of new types of work tasks, these new tasks, like the old ones, will generally also be done better by intelligent automation than via human involvement;
  • That is, the new jobs (such as “robot repair engineer” or “virtual reality experience designer”) will be done better, for the most part, by advanced robots than by humans;
  • As a result, more and more people will be unable to find work that pays them what they consider to be a sufficient income.

Indeed, technological changes result, not only in new products, but in new ways of living. The faster and more extensive the technological changes, the larger the scope for changes in lifestyle, including changes in how we keep healthy, how we learn things, how we travel, how we house ourselves, how we communicate and socialise, how we entertain ourselves, and – of particular interest for this essay – how we work and how we are paid.

But here’s the dilemma in this scenario. Although automation will be capable of producing everything that people require for a life filled with flourishing, most people will be unable to pay for these goods and services. Lacking sufficient income, the majority of people will lose access to good quality versions of some or all of the following: healthcare, education, travel, accommodation, communications, and entertainment. In short, whilst a small group of people will benefit handsomely from the products of automation, the majority will be left behind.

This dilemma cannot be resolved merely by urging the left behinds to “try harder”, to “learn new skills”, or (in the words attributed to a 1980s UK politician) to “get on a bike and travel to where work is available”. Such advice was relevant in previous generations, but it will no longer be sufficient. No matter how hard they try, the majority of people won’t be able to compete with tireless, relentless smart machinery powered by new types of artificial intelligence. These robots, avatars, and other automated systems will demonstrate not only diligence and dexterity but also creativity, compassion, and even common sense, making them the preferred choice for most tasks. Humans won’t be able to compete.

This outcome is sometimes called “The Economic Singularity” – a term coined by author and futurist Calum Chace. It will involve a singular transition in humanity’s mode of economics:

  • From when most people expect to be able to earn money by undertaking paid work for a significant part of their life
  • To when most people will be unable to earn sufficient income from paid work.

So what are our options?

Here are six to consider, each of which has advocates rooting for it:

  1. Disbelieve in the possibility of any such large-scale job losses within the foreseeable future
  2. Accept the rise of new intelligent automation technologies, but take steps to place ourselves in the small subset of society that particularly benefits from them
  3. Resist the rise of these new technologies. Prevent these systems from being developed or deployed at scale
  4. Steer the rise of these new technologies, so that plenty of meaningful, high-value roles remain for humans in the workforce
  5. Simplify our lifestyles, making do with less, so that most of us can have a pleasant life even without access to the best outputs of intelligent automation technologies
  6. Enhance, with technology, not just the mechanisms to create products but also the mechanisms used in society for the sharing of the benefits of products.

In this essay, I’ll explore the merits and drawbacks of these six options. My remarks split into three sections:

  1. Significant problems with each of the first five options listed
  2. More details of the sixth option – “enhance” – which is the option I personally favour
  3. A summary of what I see as the vital questions arising – questions that I invite other writers to address.

A: Significant challenges ahead

A1: “Disbelieve”

At first glance, there’s a lot in favour of the “disbelieve” option. The evidence from human history, so far, is that technology has had three different impacts on the human workforce, with the net impact always being positive:

  • A displacement factor, in which automation becomes able to do some of the work tasks previously performed by humans
  • An augmentation factor, in which humans become more capable when they take advantage of various tools provided by technology, and are able to do some types of work task better than before – types of work that take on a new significance
  • An expansion factor, in which the improvements to productivity enabled by the two previous factors generate economic growth, leading to consumers wanting more goods and services than before. This in turn provides more opportunities for people to gain employment helping to provide these additional goods and services.

For example, some parts of the work of a doctor may soon be handled by systems that automatically review medical data, such as ultrasound scans, blood tests, and tissue biopsies. These systems will be better than human doctors in detecting anomalies, in distinguishing between false alarms and matters of genuine concern, and in recommending courses of treatment that take fully into account the unique personal circumstances of each patient. That’s the displacement effect. In principle, that might leave doctors more time to concentrate on the “soft skills” parts of their jobs: building rapport with patients, gently coaxing them to candidly divulge all factors relevant to their health, and inspiring them to follow through on courses of treatment that may, for a while, have adverse side effects. The result in this narrative: patients receive much better healthcare overall, and are therefore especially grateful to their doctors. Human doctors will remain much in demand!

More generally, automation systems might cover the routine parts of existing work, but leave in human hands the non-routine aspects – the parts which cannot be described by any “algorithm”.

However, there are three problems with this “disbelieve” narrative.

First, automation is increasingly able to cover supposedly “non-routine” tasks as well as routine tasks. Robotic systems are able to display subtle signs of emotion, to talk in a reassuring tone of voice, to suggest creative new approaches, and, more generally, to outperform humans in soft skills (such as apparent emotional intelligence) as well as in hard skills (such as rational intelligence). These systems gain their abilities, not by any routine programming with explicit instructions, but by observing human practices and learning from them, using methods known as “machine learning”. Learning via vast numbers of repeated trials in simulated virtual environments adds yet more capabilities to these systems.

Second, it may indeed be the case that some tasks will remain to be done by humans. It may be more economically effective that way: consider the low-paid groups of human workers who manually wash people’s cars, sidelining fancy machines that can also do that task. It may also be a matter of human preference: we might decide we occasionally prefer to buy handmade goods rather than ones that have been expertly produced by machines. However, there is no guarantee that there will be large numbers of these work roles. Worse, there is no guarantee that these jobs will be well-paid. Consider again the poorly paid human workers who wash cars. Consider also the lower incomes received by Uber drivers than, in previous times, by drivers of old-fashioned taxis where passengers paid a premium for the specialist navigational knowledge acquired by the drivers over many years of training.

Third, it may indeed be the case that companies that operate intelligent automation technologies receive greater revenues as a result of the savings they make in replacing expensive human workers with machinery with a lower operating cost. But there is no guarantee that this increased income, and the resulting economic expansion, will result in more jobs for humans. Instead, the extra income may be invested in yet more technology, rather than in hiring human workers.

In other words, there is no inevitability about the ongoing relevance of the augmentation and expansion factors.

What’s more, this can already be seen in the statistics of rising inequality within society:

  • A growing share of income in the hands of the top 0.1% of salaries
  • A growing share of income from investments instead of from salaries
  • A growing share of wealth in the hands of the top 0.1% wealth owners
  • Declining median incomes at the same time as mean incomes rise.

This growing inequality is due at least in part to the development and adoption of more powerful automation technologies:

  • Companies can operate with fewer human staff, and gain their market success due to the technologies they utilise
  • Online communications and comparison tools mean that lower-quality output loses its market presence more quickly to output with higher quality; this is the phenomenon of “winner takes all” (or “winner takes most”)
  • Since the contribution of human workers is less critical, any set of workers who try to demand higher wages can more easily be replaced by other workers (perhaps overseas) who are willing to accept lower wages (consider again the example of teams of car washers).

In other words, we may already be experiencing an early wave of the Economic Singularity, arriving before the full impact takes place:

  • Intelligent automation technologies are already giving rise to a larger collection of people who consider themselves to be “left behind”, unable to earn as much money as they previously expected
  • Oncoming, larger waves will rapidly increase the number of left behinds.

Any responses we have in mind for the Economic Singularity should, therefore, be applied now, to address the existing set of left behinds. That’s instead of society waiting until many more people find themselves, perhaps rather suddenly, in that situation. By that time, social turmoil may make it considerably harder to put in place a new social contract.

To be clear, there’s no inevitability about how quickly the full impact of the Economic Singularity will be felt. It’s even possible that, for unforeseen reasons, such an impact might never arise. However, society needs to think ahead, not just about inevitabilities, but also about possibilities – and especially about possibilities that seem pretty likely.

That’s the case for not being content with the “disbelieve” option. It’s similar to the case for rejecting any claims that:

  • Many previous predictions of global pandemics turned out to be overblown; therefore we don’t need to make any preparations for any future breakthrough global pandemic
  • Previous predictions of nuclear war between superpowers turned out not to be fulfilled; therefore we can stop worrying about future disputes escalating into nuclear exchanges.

No: that ostrich-like negligence, looking away from risks of social turmoil in the run-up to a potential Economic Singularity, would be grossly irresponsible.

A2: “Accept”

As a reminder, the “accept” option is when people acknowledge that there will be large workplace disruption due to the rise of new intelligent automation technologies, with the loss of most jobs, but resolve to take steps to place themselves in the small subset of society that particularly benefits from these disruptions.

Whilst it’s common to hear people argue, in effect, for the “disbelieve” viewpoint covered in the previous subsection, it’s much rarer for someone to openly say they are in favour of the “accept” option.

Any such announcement would tend to mark the speaker as being self-centred and egotistical. They evidently believe they are among a select group who have what it takes to succeed in circumstances where most people will fail.

Nevertheless, it’s a position that some people might see as “the best of a set of bad options”. They may think to themselves: Waves of turbulence are coming. It’s not possible to save everyone. Indeed, it’s only possible to save a small subset of the population. The majority will be left behind. In that context, they urge themselves: Stop overthinking. Focus on what’s manageable: one’s own safety and security. Find one of the few available lifeboats and jump in quickly. Don’t let yourself worry about the fates of people who are doomed to a less fortunate future.

This position may strike someone as credible to the extent that they already see themselves as one of society’s winners:

  • They’ve already been successful in business
  • They assess themselves as being healthy, smart, focused, and pragmatic
  • They intend to keep on top of new technological possibilities: they’ll learn about the strengths and weaknesses of various technologies of intelligent automation, and exploit that knowledge.

What’s more, they may subscribe to a personal belief that “heroes make their own destiny”, or similar.

But before you adopt the “accept” stance, here are six risks you should consider:

  1. The skills in which you presently take pride, as supposedly being beyond duplication by any automated system, may unexpectedly be rendered obsolete due to technology progressing more quickly and more comprehensively than you expected. You might therefore find yourself, not as one of society’s winners, but as part of the growing “left behind” community
  2. Even if some of your skills remain unmatched by robots or AIs, these skills may have played less of a role than you thought in your past successes; some of these past successes may also have involved elements of good fortune, or personal connections, and so on. These auxiliary factors may give you a different outcome the next time you “roll the dice” and move from one business opportunity to another. Once again, you may find yourself unexpectedly in the social grouping left behind by technological change
  3. Even if you personally do well in the turmoil of increased job losses and economic transformation, what about all the people that matter a lot to you, such as family members and special friends? Is your personal success going to be sufficient that you can provide a helping hand to everyone to whom you feel a tie of closeness? Or are you prepared to stiffen your attitudes and to break connections with people from these circles of family and friends, as they become “left behind”?
  4. Many people who end up as left behinds will suffer physical, mental, or emotional pain, potentially including what are known as “deaths of despair”. Are you prepared to ignore all that suffering?
  5. Some of the left behinds may be inclined to commit crimes, to acquire some of the goods and services from which they are excluded by their state of relative poverty. That implies that security measures will have to be stepped up, including strict border controls. You might be experiencing a life of material abundance, but with the drawback of living inside a surveillance-state society that is psychologically embittered
  6. Some of the left behinds might go one step further, obtaining dangerous weapons, leading to acts of mass terrorism. If they manage to access truly destructive technologies, the result might be catastrophic harm or even existential destruction.

In deciding between different social structures, it can be helpful to adopt an approach proposed by the philosopher John Rawls, known as “the veil of ignorance”. In this approach, we are asked to set aside our prior assumptions about which role in society we will occupy. Instead, we are asked to imagine that we have an equal probability of obtaining any of the positions within that society.

For example, consider a society we’ll call WLB, meaning “with left behinds”, in which 995 people out of every one thousand are left behind, and five of every thousand have an extremely good standard of living (apart from having to deal with the problems numbered 4, 5, and 6 in the above list). Consider, as an alternative, a society we’ll call NLB, “no one left behind”, in which everyone has a quality of living that can be described as “good” (if, perhaps, not as “extremely good”).

If we don’t know whether we’ll be one of the fortunate 0.5% of the population, would we prefer society WLB or society NLB?
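The veil-of-ignorance comparison can be made concrete with a small expected-utility calculation. In the sketch below, the population proportions (5 versus 995 per thousand) come from the essay, but the utility numbers themselves are invented purely for illustration; any figures with the same rough ordering would make the same point.

```python
# A minimal sketch of the veil-of-ignorance comparison between the two
# hypothetical societies. Population proportions are from the essay;
# the utility values are illustrative assumptions only.

def expected_utility(outcomes):
    """Expected utility over a list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# Society WLB ("with left behinds"): 0.5% extremely well off, 99.5% left behind.
wlb = [(0.005, 100.0),   # extremely good standard of living (illustrative)
       (0.995, 10.0)]    # left behind (illustrative)

# Society NLB ("no one left behind"): everyone merely "good".
nlb = [(1.0, 70.0)]      # good, if not extremely good (illustrative)

print(f"WLB expected utility: {expected_utility(wlb):.2f}")
print(f"NLB expected utility: {expected_utility(nlb):.2f}")
```

Even with a very generous utility assigned to the fortunate few, their 0.5% weighting means the expected outcome in WLB stays far below that in NLB, which is why the choice from behind the veil seems so one-sided.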

The answer might seem obvious: from behind the veil of ignorance, we should strongly prefer NLB. However, this line of argument is subject to two objections. First, someone might feel sure that they really will end up as part of the 0.5%. But that’s where the problems numbered 1 and 2 in the above list should cause a reconsideration.

The second objection deserves more attention. It is that a society such as NLB may be an impossibility. Attempts to create NLB might unintentionally lead to even worse outcomes. After all, bloody revolutions over recent centuries have often veered catastrophically out of control. Self-described “vanguards” of a supposedly emergent new society have turned into brutal demagogues. Attempts to improve society through the ballot box have frequently failed too – prompting the acerbic remark by former British Prime Minister Margaret Thatcher that (to paraphrase) “the problem with socialism is that you eventually run out of other people’s money”.

The remainder of this essay assesses whether NLB is actually practical. (If not, we might have to throw our efforts behind “Accept” after all.)

A3: “Resist”

The “resist” idea starts from a sound observation. Just because something is possible doesn’t mean that society should make it happen. In philosophical language, a could does not imply a should.

Consider some examples. Armed forces in the Second World War could have deployed chemical weapons that emitted poison gas – as had happened during the First World War. But the various combatants decided against that option. They decided: these weapons should not be used. In Victorian times, factory owners could have employed young children to operate dangerous machinery with their nimble fingers, but society decided, after some deliberation, that such employment should not occur. Instead, children should attend school. More recently, nuclear power plants could have been constructed with scant regard to safety, but, again, society decided that should not happen, and that safety was indeed vital in these designs.

Therefore, just because new technologies could be developed and deployed to produce various goods and services for less cost than human workers, there’s no automatic conclusion in favour of that happening. Just as factory owners were forbidden from employing young children, they could also be forbidden from employing robots. Societal attitudes matter.

In this line of thinking, if replacing humans with robots in the workplace will have longer term adverse effects, society ought to be able to decide against that replacement.

But let’s look more closely at the considerations in these two cases: banning children from factories, and banning robots from factories. There are some important differences:

  • The economic benefits to factory owners from employing children were significant but were declining: newer machinery could operate without requiring small fingers to interact with them
  • The economy as a whole needed more workers who were well educated; therefore it made good economic sense for children to attend school rather than work in factories
  • The economic benefits to factory owners from deploying robots are significant and are increasing: newer robots can work at even higher levels of efficiency and quality, and cost less to operate
  • The economy as a whole has less need of human workers, so there is no economic argument in favour of prioritising the employment and training of human workers instead of the deployment of intelligent automation.

Moreover, it’s not just “factory owners” who benefit from being able to supply goods and services at lower cost and higher quality. Consumers of these goods and services benefit too. Consider again the examples of healthcare, education, travel, accommodation, communications, and entertainment. Imagine choices between:

  • High-cost, low-quality healthcare, provided mainly by humans, versus low-cost, high-quality healthcare, provided in large part by intelligent automation
  • High-cost, low-quality education, provided mainly by humans, versus low-cost, high-quality education, provided in large part by intelligent automation
  • And so on.

The “resist” option therefore would imply acceptance of at least part of the “simplify” option (discussed in more depth later): people in that circumstance would need to accept lower quality provision of healthcare, education, travel, accommodation, communications, and entertainment.

In other words, the resist option implies saying “no” to many possible elements of technological progress and the humanitarian benefits arising from it.

In contrast, the “steer” option tries to say “yes” to most of the beneficial elements of technological progress, whilst still preserving sufficient roles for humans in workforces. Let’s look more closely at it.

A4: “Steer”

The “steer” option tries to make a distinction between:

  • Work tasks that are mainly unpleasant or tedious, and which ought to be done by intelligent automation rather than by humans
  • Work tasks that can be meaningful or inspiring, especially when the humans carrying out these tasks have their abilities augmented (but not superseded) by intelligent automation (this concept was briefly mentioned in discussion of the “Disbelieve” option).

The idea of “steer” is to prioritise the development and adoption of intelligent automation technologies that can replace human workers in the first, tedious, category of tasks, whilst augmenting humans so they can continue to carry out the second, inspiring, category of tasks.

This also means a selective resistance to improvements in automation technologies, namely to those improvements which would result in the displacement of humans from the second category of tasks.

This proposal has been championed by, for example, the Stanford economist Erik Brynjolfsson. Brynjolfsson has coined the phrase “the Turing Trap”, referring to what he sees as a mistaken direction in the development of AI, namely trying to create AIs that can duplicate (and then exceed) human capabilities. Such AIs would be able to pass the “Turing Test” that Alan Turing famously described in 1950, but that would lead, in Brynjolfsson’s view, to a fearsome “peril”:

Building machines designed to pass the Turing Test and other, more sophisticated metrics of human-like intelligence… is a path to unprecedented wealth, increased leisure, robust intelligence, and even a better understanding of ourselves. On the other hand, if [that] leads machines to automate rather than augment human labor, it creates the risk of concentrating wealth and power. And with that concentration comes the peril of being trapped in an equilibrium where those without power have no way to improve their outcomes.

Here’s how Brynjolfsson introduces his ideas:

Creating intelligence that matches human intelligence has implicitly or explicitly been the goal of thousands of researchers, engineers, and entrepreneurs. The benefits of human-like artificial intelligence (HLAI) include soaring productivity, increased leisure, and perhaps most profoundly, a better understanding of our own minds.

But not all types of AI are human-like – in fact, many of the most powerful systems are very different from humans – and an excessive focus on developing and deploying HLAI can lead us into a trap. As machines become better substitutes for human labor, workers lose economic and political bargaining power and become increasingly dependent on those who control the technology. In contrast, when AI is focused on augmenting humans rather than mimicking them, then humans retain the power to insist on a share of the value created. What’s more, augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI. While both types of AI can be enormously beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers.

Accordingly, here are his recommendations:

The future is not preordained. We control the extent to which AI either expands human opportunity through augmentation or replaces humans through automation. We can work on challenges that are easy for machines and hard for humans, rather than hard for machines and easy for humans. The first option offers the opportunity of growing and sharing the economic pie by augmenting the workforce with tools and platforms. The second option risks dividing the economic pie among an ever-smaller number of people by creating automation that displaces ever-more types of workers.

While both approaches can and do contribute to progress, too many technologists, businesspeople, and policymakers have been putting a finger on the scales in favor of replacement. Moreover, the tendency of a greater concentration of technological and economic power to beget a greater concentration of political power risks trapping a powerless majority into an unhappy equilibrium: the Turing Trap….

The solution is not to slow down technology, but rather to eliminate or reverse the excess incentives for automation over augmentation. In concert, we must build political and economic institutions that are robust in the face of the growing power of AI. We can reverse the growing tech backlash by creating the kind of prosperous society that inspires discovery, boosts living standards, and offers political inclusion for everyone. By redirecting our efforts, we can avoid the Turing Trap and create prosperity for the many, not just the few.

But a similar set of questions arises for the “steer” option as for the more straightforward “resist” option. Resisting some technological improvements, in order to preserve employment opportunities for humans, means accepting a lower quality and higher cost of the corresponding goods and services.

Moreover, that resistance would need to be coordinated worldwide. If you resist some technological innovations but your competitors accept them, and replace expensive human workers with lower cost AI, their products can be priced lower in the market. That would drive you out of business – unless your community is willing to stick with the products that you produce, relinquishing the chance to purchase cheaper products from your competitors.

Therefore, let’s now consider what would be involved in such a relinquishment.

A5: “Simplify”

If some people prefer to adopt a simpler life, without some of the technological wonders that the rest of us expect, that choice should be available to them.

Indeed, human society has long upheld the possibility of choice. Communities are able, if they wish, to make their own rules about the adoption of various technologies.

For example, organisers of various sports set down rules about which technological enhancements are permitted, within those sports, and which are forbidden. Bats, balls, protective equipment, sensory augmentation – all of these can be restricted to specific dimensions and capabilities. The restrictions are thought to make the sports better.

Similarly, goods sold in various markets can carry markings that designate them as being manufactured without the use of certain methods. Thus consumers can see an “organic” label and be confident that certain pesticides and fertilisers have been excluded from the farming methods used to produce these foods. Depending on the type of marking, there can also be warranties that these foods contain no synthetic food additives and have not been processed using irradiation or industrial solvents.

Consider also the Amish, a group of traditionalist communities from the Anabaptist tradition, with their origins in Swiss German and Alsatian (French) cultures. These communities have made many decisions over the decades to avoid aspects of the technology present in wider society. Their clothing has avoided buttons, zips, or Velcro. They generally own no motor cars, but use horse-drawn carts for local transport. Different Amish communities have at various times forbidden (or continue to forbid) high-voltage electricity, powered lawnmowers, mechanical milking machines, indoor flushing toilets, bathtubs with running water, refrigerators, telephones inside the house, radios, and televisions.

Accordingly, whilst some parts of human society might in the future adopt fuller use of intelligent automation technologies, deeply transforming working conditions, other parts might, Amish-like, decide to abstain. They may say: “we already have enough, thank you”. Whilst people in society as a whole may be unable to find work that pays them good wages, people in these “simplified” communities will be able to look after each other.

Just as Amish communities differ among themselves as to how much external technology they are willing to incorporate into their lives, different “simplified” communities could likewise make different choices as to how much they adopt technologies developed outside their communities. Some might seek to become entirely self-sufficient; others might wish to take advantage of various medical treatments, educational software, transportation systems, robust housing materials, communications channels, and entertainment facilities provided by the technological marvels created in wider society.

But how will these communities pay for these external goods and services? In order to be able to trade, what will they be able to create that is not already available in better forms outside their communities, where greater use is made of intelligent automation?

We might consider tourist visits, organic produce, or the equivalent of handmade ornaments. But, again, what will make these goods more attractive, to outsiders, than the abundance of goods and services (including immersive virtual reality travel) that is already available to them?

We therefore reach a conclusion: groups that choose to live apart from deeply transformative technologies will likely lack access to many valuable goods and services. It’s possible they may convince themselves, for a while, that they prefer such a lifestyle. However, just as the attitudes of Amish communities have morphed over the decades, so that these groups now see (for example) indoor flushing toilets as a key part of their lives, it is likely that the attitudes of people in these simplified communities will also alter. When facing death from illness, or when facing disruption to their relatively flimsy shelters from severe weather, they may well find themselves deciding they prefer, after all, to access more of the fruits of technological abundance.

With nothing to exchange or barter for these fruits, the only way they will receive them is via a change in the operation of the overall economy. That brings us to the sixth and final option from my original list, “enhance”.

Whereas the previous options have looked at various alterations in how technology is developed or applied, “enhance” looks at a different possibility: revising how the outputs and benefits of technology are planned and distributed throughout society. These are revisions to the economy, rather than revisions in technology.

In this vision, with changes in both technology and the economy, everyone will benefit handsomely. Simplicity will remain a choice, for those who prefer it, but it won’t be an enforced choice. People who wish to participate in a life of abundance will be able to make that choice instead, without needing to find especially remunerative employment to pay for it.

I’ll accept, in advance, that many critics may view such a possibility as a utopian fantasy. But let’s not rush to a conclusion. I’ll build my case in stages.

B: Enhancing the operation of the economy

Let’s pick up the conversation with the basics of economics, which is the study of how to deal with scarcity. When important goods and services are scarce, humans can suffer.

Two fundamental economic forces that have enabled astonishing improvements in human wellbeing over the centuries, overcoming many aspects of scarcity, are collaboration and competition:

  • Collaboration: person A benefits from the skills and services of person B, whereas person B benefits reciprocally from a different set of skills and services of person A; this allows both A and B to specialise, in different areas
  • Competition: person C finds a way to improve the skills and services that they offer to the market, compared to person D, and therefore receives a higher reward – causing person D to consider how to improve their skills and services in turn, perhaps by copying some of the methods and approach of person C.

What I’ve just described in terms of simple interactions between two people is nowadays played out, in practice, via much larger communities, and over longer time periods:

  • Collaboration includes the provision of a social safety net, for looking after individuals who are less capable, older, lack resources, or who have fallen on hard times; these safety nets can operate at the family level, tribe (extended family) level, community level, national level, or international level
  • The prospect of gaining extra benefits from better skills and services leads people to make personal investments, in training and tools, so that they can possess (for a while at least) an advantage in at least one market niche.

Importantly, it needs to be understood that various forms of collaboration and competition can have negative consequences as well as positive ones:

  • A society that keeps extending an unconditional helping hand to someone who avoids taking personal responsibility, or to a group that is persistently dysfunctional, might end up diverting scarce resources away from key social projects, to be squandered for no good purpose
  • In a race to become more economically dominant, other factors may be overlooked, such as social harmony, environmental wellbeing, and other so-called externalities.

In other words, the forces that can lead to social progress can also lead to social harm.

In loose terms, the two sets of negative consequences can be called “failure modes of socialism” and “failure modes of capitalism” – to refer to two historically significant terms in theories of economics, namely “socialism” and “capitalism”. These two broad frameworks are covered in the subsections ahead, along with key failure modes in each case. After that, we’ll consider models that aspire to transcend both sets of failure by delivering “the best of both worlds”.

To look ahead, it is the “best of both worlds” model that has the potential to be the best solution to the Economic Singularity.

B1: Need versus greed?

When there is a shortage of some product or service, how should it be distributed? To the richest, the strongest, the people who shout the loudest, the special friends of the producers, or to whom?

One answer to that question is given in the famous slogan, “From each according to his ability, to each according to his needs”. In other words, each person should receive whatever they truly need, be it food, clothing, healthcare, accommodation, transportation, and so on.

That slogan was popularised by Karl Marx in an article he wrote in 1875, but earlier political philosophers had used it in the 1840s. Indeed, an antecedent can be found in the Acts of the Apostles in the New Testament, referring to the sharing of possessions within one of the earliest groups of Christian believers:

All the believers were one in heart and mind. No one claimed that any of their possessions was their own, but they shared everything they had… There were no needy persons among them. For from time to time those who owned land or houses sold them, brought the money from the sales and put it at the apostles’ feet, and it was distributed to anyone who had need.

Significantly, Marx foresaw that principle of full redistribution as being possible only after technology (“the productive forces”) had sufficiently “increased”. It was partly for that reason that Joseph Stalin, despite being an avowed follower of Marx, wrote a different principle into the 1936 constitution of the Soviet Union: “From each according to his ability, to each according to his work”. Stalin’s justification was that the economy had not yet reached the required level of production, and that serious human effort was first required to reach full industrialisation.

This highlights one issue with the slogan, and with visions of society that seek to place that slogan at the centre of their economy: before products and services can be widely distributed, they need to be created. A preoccupation with distribution will fail unless it is accompanied by sufficient attention to creation. Rather than fighting over how a pie is divided, it’s important to make the pie larger. Then there will be much more to share.

A second issue is in the question of what counts as a need. Clothing is a need, but what about the latest fashion? Food is a need, but what about the rarest of fruits and vegetables? And what about “comfort food”: is that a need? Healthcare is a need, but what about a heart transplant? Transportation is a need, but what about intercontinental aeroplane flights?

A third issue with the slogan is that a resource assigned to someone’s perceived need is, potentially, a resource withheld from a more productive use to which another person might put it. Money spent to provide someone with what they claim they need might have been invested elsewhere to create more resources, allowing more people to have what they claim that they need.

Thus the oft-admired saying attributed to Mahatma Gandhi, “The world has enough for everyone’s needs, but not everyone’s greed”, turns out to be problematic in practice. Who is to say what is ‘need’ and what is ‘greed’? Are desires for luxury goods always to be denigrated as ‘greed’? Isn’t life about enjoyment, vitality, and progress, rather than just calmly sitting down in a state of relative poverty?

B2: Socialism and its failures

There are two general approaches to handling the problems just described: centralised planning, and free-market allocation.

With centralised planning, a group of reputedly wise people:

  • Keep on top of information about what the economy can produce
  • Keep on top of information about what people are believed to actually need
  • Direct the economy so that it makes a better job of producing what it has been decided that people need.

Therefore a central planner may dictate that more shoes of a certain type need to be produced. Or that drinks should be manufactured with less sugar in them, that particular types of power stations should be built, or that particular new drugs should be created.

That’s one definition of socialism: representatives of the public direct the economy, including the all-important “means of production” (factories, raw materials, infrastructure, and so on), so that the assumed needs of all members of society are met.

However, when applied widely within an economy, centralised planning approaches have often failed abysmally. Assumptions about what people needed often proved wrong, or out-of-date. Indeed, members of the public often changed their minds about what products were most important for them, especially after new products came into use, and their upsides and downsides could be more fully appreciated. Moreover, manufacturing innovations, such as new drugs, or new designs for power stations, could not be achieved simply by wishing them or “planning” them. Finally, people working in production roles often felt alienated, lacking incentives to apply their best ideas and efforts.

That’s where the alternative coordination mechanism – involving free markets – often fared better (despite problems of its own, which we’ll review in due course). The result of free markets has been significant improvements in the utility, attractiveness, performance, reliability, and affordability of numerous types of goods and services. As an example, modern supermarkets are one of the marvels of the world, being stocked from floor to ceiling with all kinds of items to improve the quality of daily life. People around the globe have access to a variety of nourishment and experiences that would have astonished their great-grandparents.

In recent decades, there have been similar rounds of sustained quality improvement and cost reduction for personal computers, smartphones, internet access, flatscreen TVs, toys, kitchen equipment, home and office furniture, clothing, motor cars, aeroplane tickets, solar panels, and much more. The companies that found ways to improve their goods and services flourished in the marketplace, compelling their competitors to find similar innovations – or go out of business.

It’s no accident that the term “free market” contains the adjective “free”. The elements of a free market which enable it to produce a stream of quality improvements and cost reductions include the following freedoms:

  1. The freedom for companies to pursue profits – under the recognition that the prospect of earning profits can incentivise sustained diligence and innovation
  2. The freedom for companies to adjust the prices for their products, and to decide by themselves the features contained in these products, rather than following the dictates of any centralised planner
  3. The freedom for groups of people to join together and start a new business
  4. The freedom for companies to enter new markets, rather than being restricted to existing product lines; new competitors keep established companies on their toes
  5. The freedom for employees to move to new roles in different companies, rather than being tied to their existing employers
  6. The freedom for companies to explore multiple ways to raise funding for their projects
  7. The freedom for potential customers to not buy products from established vendors, but to switch to alternatives, or even to stop using that kind of product altogether.

What’s more, the above freedoms are permissionless in a free market. No one needs to apply for a special licence from central authorities before one of these freedoms becomes available.

Any political steps that would curtail the above freedoms need careful consideration. The results of such restrictions can (and often do) include:

  • A disengaged workforce, with little incentive to apply their inspiration and perspiration to the tasks assigned to them
  • Poor responsiveness to changing market interest in various products and services
  • Overproduction of products for which there is no market demand
  • Companies having little interest in exploring counterintuitive combinations of product features, novel methods of assembly, new ways of training or managing employees, or other innovations.

Accordingly, anyone who wishes to see high-quality products distributed to the entire population needs to beware of curtailing the freedoms of entrepreneurs and innovators. That would be taking centralised planning too far.

That’s not to say that the economy should dispense with all constraints. That would raise its own set of deep problems – as we’ll review in the next subsection.

B3: Capitalism and its failures

Just as there are many definitions of socialism, there are many definitions of capitalism.

Above, I offered this definition of socialism: an economy in which production is directed by representatives of the public, with the goal that the assumed needs of all members of society are met. For capitalism, at least parts of the economy are directed, instead, by people seeking returns on the capital they invest. This involves lots of people making independent choices, of the types I have just covered: choices over prices, product features, types of product, areas of business to operate within, employment roles, manufacturing methods, partnership models, ways of raising investment, and so on.

But these choices depend on various rules being set and observed by society:

  1. Protection of property: goods and materials cannot simply be stolen, but require the payment of an agreed price
  2. Protection of intellectual property: various novel ideas cannot simply be copied, but require, for a specified time, the payment of an agreed licence fee
  3. Protection of brand reputation: companies cannot use misleading labelling or other trademarked imagery to falsely imply an association with another existing company with a good reputation
  4. Protection of contract terms: when companies or individuals enter into legal contracts, regarding employment conditions, supply timelines, fees for goods and services, etc., penalties for any breach of contract can be enforced
  5. Protection of public goods: shared items such as clean air, usable roads, and general safety mechanisms, need to be protected against decay.

These protections all require the existence and maintenance of a legal system in which justice is available to everyone – not just to the people who are already well-placed in society.

These are not the only preconditions for the healthy operation of free markets. The benefits of these markets also depend on the existence of viable competition, which prevents companies from resting on their laurels. However, seeking an easier life for themselves, companies may be tempted to organise themselves into cartels, with agreed pricing, or with products with built-in obsolescence. The extreme case of a cartel is a monopoly, in which all competitors have gone out of business, or have been acquired by the leading company in an industry. A monopoly lacks incentive to lower prices or to improve product quality. A related problem is “crony capitalism”, in which governments preferentially award business contracts to companies with personal links to government ministers. The successful operation of a free market depends, therefore, upon society’s collective vigilance to notice and break up cartels, to prevent the misuse of monopoly power, and to avoid crony capitalism.

Further, even when markets do work well, in ways that provide short-term benefits to both vendors and customers, the longer-term result can be profoundly negative. So-called “commons” resources can be driven into a state of ruin by overuse. Examples include communal grazing land, the water flowing in a river, fish populations, and herds of wild animals. All individual users of such a resource have an incentive to take from it, either to consume it themselves, or to include it in a product to be sold to a third party. As the common stock declines, the incentive for each individual to take more increases, lest they miss out altogether. But finally, the grassland is all bare, the river has dried up, the stocks of fish have been obliterated, or the passenger pigeon, great auk, monk seal, sea mink, etc., have been hunted to extinction. To guard against these perils of short-termism, various sorts of protective mechanisms need to be created, such as quotas or licences, with clear evidence of their enforcement.
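
The dynamic described above can be made vivid with a toy simulation. All the parameters below are invented for illustration, not drawn from any real fishery or grassland: unrestricted harvesting collapses the shared stock within a few years, while even a crude quota keeps it sustainable.

```python
def simulate(stock, regen_rate, users, harvest_per_user, quota=None, years=50):
    """Toy model of a shared resource: each year every user harvests,
    then whatever remains regenerates proportionally, up to a fixed cap."""
    for _ in range(years):
        per_user = harvest_per_user if quota is None else min(harvest_per_user, quota)
        stock -= min(stock, users * per_user)        # total harvest, limited by what's left
        stock = min(stock * (1 + regen_rate), 1000)  # regrowth, capped at carrying capacity
    return stock

unregulated = simulate(stock=1000, regen_rate=0.2, users=100, harvest_per_user=5)
with_quota = simulate(stock=1000, regen_rate=0.2, users=100, harvest_per_user=5, quota=1)
print(unregulated, with_quota)  # the unmanaged stock hits zero; the quota-managed stock does not
```

The point of the sketch is that no individual user changes behaviour as the stock declines; only the externally enforced quota changes the outcome.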

What about when suppliers provide shoddy goods? In some cases, members of a society can learn which suppliers are unreliable, and therefore cease purchasing goods from them. In these cases, the market corrects itself: in order to continue in business, poor suppliers need to make amends. But when larger groups of people are involved, there are three drawbacks with just relying on this self-correcting mechanism:

  1. A vendor who deceives one purchaser in one vicinity can relocate to a different vicinity – or can simply become “lost in the crowd” – before deceiving another purchaser
  2. A vendor who produces poor-quality goods on a large scale can simultaneously impact lots of people’s wellbeing – as when a restaurant skimps on health and safety standards, and large numbers of diners suffer food poisoning as a result
  3. It may take a long time before defects in someone’s goods or services are discovered – for example, if no funds are available for an insurance payout that was contracted many years earlier.

It’s for such reasons that societies generally decide to augment the self-correction mechanisms of the free market with faster-acting preventive mechanisms, including requirements for people in various trades to conform to sets of agreed standards and regulations.

A final cause of market failure is perhaps the most significant: the way in which market exchanges fail to take “externalities” into account. A vendor and a purchaser may both benefit when a product is created, sold, and used, but other people who are not party to that transaction can suffer as a side effect – if, for example, the manufacturing process emits loud noises, foul smells, noxious gases, or damaging waste products. Since they are not directly involved in the transaction, these third parties cannot influence the outcome simply by ceasing to purchase the goods or services involved. Instead, different kinds of pressure need to be applied: legal restrictions, taxes, or other penalties or incentives.

It’s not just negative externalities that can cause free markets to misbehave. Consider also positive externalities, where an economic interaction has a positive impact on people who do not pay for it. Some examples:

  1. If a company purchases medical vaccinations for its employees, to reduce their likelihood of becoming ill with the flu, others in the community benefit too, since there will be fewer ill people in that neighbourhood, from whom they might catch flu
  2. If a company purchases on-the-job training for an employee, the employee may pass on to family members and acquaintances, free of charge, tips about some of the skills they learned
  3. If a company pays employees to carry out fundamental research, which is published openly, people in other companies can benefit from that research too, even though they did not pay for it.

The problem here is that the company may decide not to go ahead with such an investment, since it calculates that the benefits to it will not be sufficient to cover its costs. The fact that society as a whole would benefit, as a positive externality, generally does not enter that calculation.
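
A back-of-the-envelope calculation shows the shape of the problem. The figures below are entirely invented, purely to illustrate the vaccination example: the investment fails the company's private test, yet passes the test for society as a whole.

```python
# Invented figures for the flu-vaccination example (point 1 above)
cost = 50_000              # cost to the company of vaccinating its staff
private_benefit = 40_000   # the company's own saving from fewer sick days
external_benefit = 30_000  # benefit to the wider community (fewer infections)

firm_invests = private_benefit > cost                      # False: the firm declines
society_gains = private_benefit + external_benefit > cost  # True: society would benefit
print(firm_invests, society_gains)  # prints: False True
```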

This introduces the important concept of public goods. When there’s insufficient business case for an individual investor to supply the funding to cover the costs of a project, that project won’t get off the ground – unless there’s a collective decision for multiple investors to share in supporting it. Facilitating that kind of collective decision – one that would benefit society as a whole, rather than just a cartel of self-interested companies – takes us back to the notion of central planning. Central planners can consider longer-term possibilities – in ways that, as noted, are problematic for a free market to achieve – and can design and oversee what is known as industrial strategy or social strategy.

B4: The mixed market

To recap the last two subsections: there are problems with over-application of central planning, and there are also problems with free markets that have no central governance.

The conclusion to draw from this, however, isn’t to give up on both these ideas. It’s to seek an appropriate combination of these ideas. That combination is known as “the mixed market”. It involves huge numbers of decisions being taken locally, by elements of a free market, but all subject to democratic political oversight, aided by the prompt availability of information about the impacts of products in society and on the environment.

This division of responsibility between the free market and political oversight is described particularly well in the writing of political scientists Jacob Hacker and Paul Pierson. They offer high praise to something they say “may well be the greatest invention in history”. Namely, the mixed economy:

The combination of energetic markets and effective governance, deft fingers and strong thumbs.

Their reference to “deft fingers and strong thumbs” expands Adam Smith’s famous metaphor of the invisible hand which is said to guide the free market. Hacker and Pierson develop their idea as follows:

Governments, with their capacity to exercise authority, are like thumbs: powerful but lacking subtlety and flexibility. The invisible hand is all fingers. The visible hand is all thumbs. Of course, one wouldn’t want to be all thumbs. But one wouldn’t want to be all fingers, either. Thumbs provide countervailing power, constraint, and adjustment to get the best out of those nimble fingers…

The mixed economy… tackles a double bind. The private markets that foster prosperity so powerfully nonetheless fail routinely, sometimes spectacularly so. At the same time, the government policies that are needed to respond to these failures are perpetually under siege from the very market players who help to fuel growth. That is the double bind. Democracy and the market – thumbs and fingers – have to work together, but they also need to be partly independent from each other, or the thumb will cease to provide effective counterpressure to the fingers.

I share the admiration shown by Hacker and Pierson for the mixed market. I also agree that it’s hard to get the division of responsibilities right. Just as markets can fail, so also can politicians fail. But just as the fact of market failures should not be taken as a reason to dismantle free markets altogether, so the fact of political failures should not be taken as a reason to dismantle all political oversight of markets. Each of these two fundamentalist approaches – anti-market fundamentalism and pro-market fundamentalism – is dangerously one-sided. The wellbeing of society requires, not so much the reduction of government, but the rejuvenation of government, in which key aspects of government operation are improved:

  1. Smart, agile, responsive regulatory systems
  2. Selected constraints on the uses to which various emerging new technologies can be put
  3. “Trust-busting”: measures to prevent large businesses from misusing monopoly power
  4. Equitable redistribution of the benefits arising from various products and services, for the wellbeing and stability of society as a whole
  5. Identification, protection, and further development of public goods
  6. Industrial strategy: identifying directions to be pursued, and providing suitable incentives so that free market forces align toward these directions.

None of what I’m saying here should be controversial. However, both fundamentalist outlooks I mentioned often exert a disproportionate influence over political discourse. Part of the reason for this is explained at some length in the book by the researchers Hacker and Pierson which contained their praise for the mixed market. The title of that book is significant: American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity.

It’s not just that the merits of the mixed market have been “forgotten”. It’s that these merits have been deliberately obscured by a sustained ideological attack. That attack serves the interest of various potentially cancerous complexes that seek to limit governmental oversight of their activities:

  • Big Tobacco, which tends to resist government oversight of the advertising of products containing tobacco
  • Big Oil, which tends to resist government oversight of the emissions of greenhouse gases
  • Big Armaments, which tends to resist government oversight of the growth of powerful weapons of mass destruction
  • Big Finance, which tends to resist government oversight of “financial weapons of mass destruction” (to use a term coined by Warren Buffett)
  • Big Agrotech, which tends to resist government oversight of new crops, new fertilisers, and new weedkillers
  • Big Media, which tends to resist government oversight of press standards
  • Big Theology, which tends to resist government concerns about indoctrination and manipulation of children and others
  • Big Money: individuals, families, and corporations with large wealth, who tend to resist the power of government to levy taxes on them.

All these groups stand to make short-term gains if they can persuade the voting public that the power of government needs to be reduced. It is therefore in the interest of these groups to portray the government as inevitably and systematically incompetent – and, at the same time, to portray the free market as highly competent. But for the sake of society as a whole, these false portrayals must be resisted.

In summary: better governments can oversee economic frameworks in which better goods and services can be created (including all-important public goods):

  • Frameworks involving a constructive combination of entrepreneurial flair, innovative exploration, and engaged workforces
  • Frameworks that prevent the development of any large societal cancers that would divert too many resources to localised selfish purposes.

In turn, for the mixed model to work well, governments themselves must be constrained through oversight – by a well-informed independent press, the judiciary, academic researchers, and diverse political groupings – all supported by a civil service and challenged on a regular basis by free and fair democratic elections.

That’s the theory. Now for some complications – and solutions to the complications.

B5: Technology changes everything

Everything I’ve written in this section B so far makes sense independently of the oncoming arrival of the Economic Singularity. But the challenges posed by the Economic Singularity make it all the more important that we learn to temper the chaotic movements of the economy, so that it operates responsively and thoughtfully under a high-calibre mixed-market model.

Indeed, rapidly improving technology – especially artificial intelligence – is transforming the landscape, introducing new complications and new possibilities:

  1. Technology enables faster and more comprehensive monitoring of overall market conditions – including keeping track of fast-changing public expectations, as well as any surprise new externalities of economic transactions; it can thereby avoid some of the sluggishness and short-sightedness that bedevilled older (manual) systems of centralised planning and the oversight of entire economies
  2. Technology gives more information more quickly, not only to the people planning production (at either central or local levels), but also to consumers of products, with the result that vendors of better products will drive vendors of poorer products out of the market more quickly (this is the “winner takes all” phenomenon)
  3. With advanced technology playing an ever-increasing role in determining the success or failure of products, the companies that own and operate the most successful advanced technology platforms will become among the most powerful forces on the planet
  4. As discussed earlier (in Section A), technology will significantly reduce the opportunities for people to earn large salaries in return for work that they do
  5. Technology enables more goods to be produced at much lower cost – including cheaper clean energy, cheaper nutritious food, cheaper secure accommodation, and cheaper access to automated education systems.

Here, points 2, 3, and 4 raise challenges, leading to a world with greater inequalities:

  • A small number of companies, and a small number of people working for them, will do very well in terms of income, and they will have unprecedented power
  • The majority of companies, and the majority of people, will experience various aspects of failure and being “left behind”.

But points 1 and 5 promote a world where governance systems perform better, and where people need much less money in order to experience a high level of wellbeing. They highlight the possibility of the mixed market model working better, distributing more goods and services to the entire population, and thereby meeting a wider set of needs. This comprehensive approach is what is meant by the word “enhance”, as in the name of my preferred solution to the Economic Singularity.

However, these improvements will depend on societies changing their minds about what matters most – the things that need to be closely measured, monitored, and managed. In short, it will depend on some fundamental changes in worldview.

B6: Measuring what matters most

The first key change in worldview is that the requirement for people to seek paid employment belongs only to a temporary phase in the evolution of human culture. That phase is coming to an end. From now on, the basis for societies to be judged as effective or defective shouldn’t be the proportion of people who have positions of well-paid employment. Instead, it should be the proportion of people who can flourish, every single day of their lives.

Moreover, measurements of prosperity must include adequate analysis of the externalities (both positive and negative) of economic transactions – externalities which market prices often ignore, but which modern AI systems can measure and monitor more accurately. These measurements will continue to include features such as wealth and average lifespan, as monitored by today’s politicians, but they’ll put a higher focus on broader measurements of wellbeing, thereby transforming where politicians apply most of their attention.

In parallel, we should look forward to a stage-by-stage transformation of the social safety net – so that all members of society have access to the goods and services that are fundamental to experiencing an agreed base level of human flourishing, within a society that operates sustainably and an environment that remains healthy and vibrant.

I therefore propose the following high-level strategic direction for the economy: prioritise the reduction of prices for all goods and services that are fundamental to human flourishing, where the prices reflect all the direct and indirect costs of production.

This kind of price reduction is already taking place for a range of different products, such as many services delivered online, but there are too many other examples where prices are rising (or dropping too slowly).

In other words, the goal of the economy should no longer be to increase the GDP – the gross domestic product, a measure that rises with higher prices and greater commercial activity. Instead, the goal should be to reduce the true costs of everything that is required for a good life, including housing, food, education, security, and much more. This will be part of taking full advantage of the emerging tech-driven abundance.

It is when prices come down that politicians should celebrate – not when prices go up, when profit margins rise, or when the stock market soars.
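
One way to make that celebration measurable is a “cost of flourishing” index: the price of a fixed basket of fundamentals, indexed to 100 in a base year, which politicians would aim to drive downwards. The basket and all the prices below are hypothetical, chosen only to show the calculation.

```python
# Hypothetical basket of monthly fundamentals (item: quantity per month)
basket = {"housing": 1, "food": 1, "education": 1, "energy": 1}

def basket_cost(prices):
    """Total cost of the basket at the given unit prices."""
    return sum(prices[item] * qty for item, qty in basket.items())

prices_base = {"housing": 1200, "food": 300, "education": 200, "energy": 150}
prices_later = {"housing": 1100, "food": 200, "education": 50, "energy": 60}

index_later = 100 * basket_cost(prices_later) / basket_cost(prices_base)
print(round(index_later, 1))  # prints: 76.2 – a fall worth celebrating
```

Unlike GDP, this index improves as the fundamentals of a good life become cheaper, which is exactly the direction the strategy above seeks.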

The end target of this strategy is that all goods and services fundamental to human flourishing should, in effect, have zero price. But for the foreseeable future, many items will continue to have a cost.

For those goods and services which carry prices above zero, combinations of three sorts of public subsidies can be made available:

  • An unconditional payment, sometimes called a UBI – an unconditional basic income – can be made available to all citizens of the country
  • The UBI can be augmented by conditional payments, dependent on recipients fulfilling requirements agreed by society, such as, perhaps, education or community service
  • There can be individual payments for people with special needs, such as particular healthcare requirements.

Such suggestions are not new, of course. Typically they face five main objections:

  1. A life without paid work will be one devoid of meaning – humans will atrophy as a result
  2. Giving people money for nothing will encourage idleness and decadence, and will be a poor use of limited resources
  3. A so-called “basic” income won’t be sufficient; what should be received by people who cannot (through no fault of their own) earn a good salary, isn’t a basic income but a generous income (hence a UGI rather than a UBI) that supports a good quality of life rather than a basic existence
  4. The large redistribution of money to pay for a widespread UGI will cripple the rest of the economy, forcing taxpayers overseas; alternatively, if the UGI is funded by printing more money (as is sometimes proposed), this will have adverse inflationary implications
  5. Although a UBI might be affordable within a country that has an advanced developed economy, it will prove unaffordable in less developed countries, where the need for a UBI will be equally important; indeed, an inflationary spiral in countries that do pay their citizens a UBI will result in tougher balance-of-payments situations in the other countries of the world.

Let’s take these objections one at a time.

B7: Options for universal income

The suggestion that a life without paid work will have no possibility of deep meaning is, when you reflect on it, absurd, given the many profound experiences that people often have outside of the work context. The fact that this objection is raised so often is illuminating: it suggests a pessimism about one’s fellow human beings. People raising this objection usually say that they, personally, could have a good life without paid work; it’s just that “ordinary people” would be at a loss and go downhill, they suggest. After all, these critics may continue, look at how people often waste welfare payments they receive. Which takes us to the second objection on the list above.

However, the suggestion that unconditional welfare payments result in idleness and decadence has little evidence to support it. Many people who receive unconditional payments from the state – such as pension payments in their older age – live a fulfilling, active, socially beneficial life, so long as they remain in good health.

The criterion “remain in good health” is important here. People who abuse welfare payments often suffer from a prior emotional malaise, such as depression or addictive behaviours. Accordingly, the solution to welfare payments being (to an extent) wasted isn’t to withdraw these payments, but to address the underlying emotional malaise. This can involve:

  • Making society healthier generally, via a fuller and wider share of the benefits of tech-enabled abundance
  • Highlighting credible paths forward to much better lifestyles in the future, as opposed to people seeing only a bleak future ahead of them
  • High-quality (but potentially low-cost) mental therapy, perhaps delivered in part by emotionally intelligent AI systems
  • Addressing the person’s physical and social wellbeing, which are often closely linked to their emotional wellbeing.

In any case, worries about “resources being wasted” will gradually diminish, as technology progresses further, removing more and more aspects of scarcity. (Concerns about waste arise primarily when resources are scarce.)

It is that same technological progress that answers the third objection, namely that a UGI will be needed rather than a UBI. The point is that the cost of a UGI soon won’t be much more than the cost of a UBI. That’s provided that the economy has indeed been managed in line with the guiding principle offered earlier, namely the prioritisation of the reduction of prices for all goods and services that are fundamental to human flourishing.

In the meantime, turning to the fourth objection – affordability – payments in support of a UGI can come from a selection of the following sources:

  • Stronger measures to counter tax evasion, addressing issues exposed by the Panama Papers as well as unnecessary inconsistencies of different national tax systems
  • Increased licence fees and other “rents” paid by organisations who specially benefit from public assets such as land, the legal system, the educational system, the wireless spectrum, and so on
  • Increased taxes on activities with negative externalities, such as a carbon tax for activities leading to greenhouse gas emissions, and a Tobin tax on excess short-term financial transactions
  • A higher marginal tax on extreme income and/or wealth
  • Reductions in budgets such as healthcare, prisons, and defence, where needs should decline once people’s mental wellbeing has improved
  • Reductions in the budget for the administration of currently overcomplex means-tested benefits.
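
The list above can be thought of as a funding ledger. All the figures below are entirely invented (in arbitrary currency units per year), purely to show the shape of the check a treasury would need to perform: do the combined sources cover the cost of the UGI?

```python
# Hypothetical annual revenue sources for a UGI (invented figures)
revenue = {
    "countering_tax_evasion": 20,
    "licence_fees_and_rents": 15,
    "carbon_and_tobin_taxes": 40,
    "higher_marginal_taxes": 60,
    "reduced_health_prison_defence_budgets": 35,
    "simplified_benefits_administration": 10,
}
ugi_cost = 170  # hypothetical annual cost of the UGI

total = sum(revenue.values())
print(total, total >= ugi_cost)  # prints: 180 True – the ledger balances in this sketch
```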

Some of these increased taxes might encourage business leaders to relocate their businesses abroad. However, it’s in the long-term interest of each different country to coordinate regarding the levels of corporation tax, thereby deterring such relocations.

That brings us to the final objection: that a UGI needs, somehow, to be a GUGI – a global universal generous income – which makes it (so it is claimed) particularly challenging.

B8: The international dimension

Just as the relationship between two or more people is characterised by a combination of collaboration and competition, so it is with the relationship between two or more countries.

Sometimes both countries benefit from an exchange of trade. For example, country A might provide low-cost, high-calibre remote workers – software developers, financial analysts, and help-desk staff. In return, country B provides hard currency, enabling people in country A to purchase items of consumer electronics designed in country B.

Sometimes the relationship is more complicated. For example, country C might gain a competitive advantage over country D in the creation of textiles, or in the production of oil, obliging country D to find new ways to distinguish itself on the world market. And in these cases, sometimes country D could find itself being left behind, as a country.

Just as the fast improvements in artificial intelligence and other technologies are complicating the operation of national economies, they are also complicating the operation of the international economy:

  • Countries which used to earn valuable income from overseas due to their remote workers in fields such as software development, financial analysis, and help desks, will find that the same tasks can now be performed better by AI systems, removing the demand for offshore personnel and temporary worker visas
  • Countries whose products and services were previously “nearly good enough” will find that they increasingly lose out to products and services provided by other countries, on account of faster transmission of both electronic and physical goods
  • The tendencies within countries for the successful companies to be increasingly wealthy, leaving others behind, will be mirrored at the international level: successful countries will become increasingly powerful, leaving others behind.

Just as the local versions of these tensions pose problems inside countries, the international versions of these tensions pose problems at the geopolitical level. In both cases, the extreme possibility is that a minority of angry, alienated people might unleash a horrific campaign of terror. A less extreme possibility – which is still one to be avoided – is to exist in a world full of bitter resentment, hostile intentions, hundreds of millions of people seeking to migrate to more prosperous countries, and borders which are patrolled to avoid uninvited immigration.

Just as there is a variety of possible responses to the scenario of the Economic Singularity within one country, there is a similar variety of possible responses to the international version of the problem:

  1. Disbelieve that there is any fundamental new challenge arising. Tell people in countries around the world that their destiny is within their own hands; all they need to do is buckle down, reskill, and find new ways of bringing adequate income to their countries
  2. Accept that there will be many countries losing out, and take comprehensive steps to ensure that migration is carefully controlled
  3. Resist the growth in the use of intelligent automation technologies in industries that are particularly important to various third world countries
  4. Urge people in third world countries to plan to simplify their lifestyles, preparing to exist at a lower degree of flourishing than, say, in the US and the EU, but finding alternative pathways to personal satisfaction
  5. Enhance the mechanisms used globally for the redistribution of the fruits of technology.

You won’t be surprised to hear that I recommend, again, the “enhance” option from this list.

What underpins that conclusion is my prediction that the fruits of forthcoming technological improvements won’t just be sufficient for a good quality of life in a few countries. They’ll enable a good quality of life for everyone all around the world.

I’m thinking of the revolutions that are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities:

  • Nanotech will transform the fields of energy and manufacturing
  • Biotech will transform the fields of agriculture and healthcare
  • Cognotech will transform the fields of education and entertainment
  • Infotech will, by enabling greater application of intelligence, accelerate all the above improvements (and more).

But, once again, these developments will take time. Just as national economies cannot, overnight, move to a new phase in which abundance completely replaces scarcity, so also will the transformation of the international economy require a number of stages. It is the turbulent transitional stages that will prove the most dangerous.

Once again, my recommendation for the best way forwards is the mixed model – local autonomy, aided and supported by an evolving international framework. It’s not a question of top-down control versus bottom-up emergence. It’s a question of utilising both these forces.

Once again, wise use of new technology can enhance how this mixed model operates.

Once again, it will be new metrics that guide us in our progress forward. The UN’s framework of SDGs – sustainable development goals – is a useful starting point, but it sets the bar too low. Rather than (in effect) considering “sustainability with less”, it needs to more vigorously embrace “sustainability with more” – or as I have called it, “Sustainable superabundance for all”.

B9: Anticipating a new mindset

The vision of the near future that I’ve painted may strike some readers as hopelessly impractical. Critics may say:

  • “Countries will never cooperate sufficiently, especially when they have very different political outlooks”
  • “Even within individual countries, the wealthy will resist parts of their wealth being redistributed to the rest of the population”
  • “Look, the world is getting worse – by many metrics – rather than getting better”.

But here’s why I predict that positive changes can accelerate.

First, alongside the metrics of deterioration in some aspects of life, there are plenty of metrics of improvement. Things are getting better at the same time as other things are getting worse. The key question is whether the things getting better can assist with a sufficiently quick reversal of the things that are getting worse.

Second, history has plenty of examples of cooperation between groups of people that previously felt alien or hostile toward each other. What catalyses collaboration is the shared perception of enormous transcendent challenges and opportunities. It’s becoming increasingly clear to governments of all stripes around the world that, if tomorrow’s technology goes wrong, it could prove catastrophic in so many ways. That shared realisation has the potential to inspire political and other leaders to find new methods for collaboration and reconciliation.

As an example, consider various unprecedented measures that followed the tragedies of the Second World War:

  • Marshall Plan investments in Europe and Japan
  • The Bretton Woods framework for economic stability
  • The International Monetary Fund and the World Bank
  • The United Nations.

Third, it’s true that political and other leaders frequently become distracted. They may resolve, for a brief period of time, to seek new international methods for dealing with challenges like the Economic Singularity, but then rush off to whatever new political scandal takes their attention. Accordingly, we should not expect politicians to solve these problems by themselves. But what we can expect them to do is to ask their advisors for suggestions, and these advisors will in turn look to futurist groups around the world for assistance.

C: The vital questions arising

Having laid out my analysis, it’s time to ask for feedback. After all, collaborative intelligence can achieve much more than individual intelligence.

So, what are your views? Do you have anything to add or change regarding the various accounts given above:

  1. Assessments of growing societal inequality
  2. Assessments of the role of new technologies in increasing (or decreasing) inequality
  3. The likely ability of automation technologies, before long, to handle non-routine tasks, including compassion, creativity, and common sense
  4. The plausibility of the “Turing Trap” analysis
  5. Repeated delays in the replacement of GDP with more suitable all-round measures of human flourishing
  6. The ways in which new forms of AI could supercharge centralised planning
  7. The reasons why some recipients of welfare squander the payments they receive
  8. The uses of new technology to address poor emotional health
  9. Plausible vs implausible methods to cover the cost of a UGI (Universal Generous Income)
  10. Goods and services that are critical to sustained personal wellbeing that, however, seem likely to remain expensive for the foreseeable future
  11. Likely cash flows between different countries to enable something like a GUGI (Global Universal Generous Income)
  12. The best ways to catch the attention of society’s leaders so that they understand that the Economic Singularity is an issue that is both pressing and important
  13. The possibility of substantial agreements between countries that have fiercely divergent political systems
  14. The practical implementation of systems that combine top-down and bottom-up approaches to global cooperation
  15. Broader analysis of major trends in the world that capture both what is likely to improve and what is likely to become worse, and of how these various trends could interact
  16. Factors that might undermine the above analysis but which deserve further study.

These are what I call the vital questions regarding possible solutions to the Economic Singularity. They need good answers!

D: References and further reading

This essay first appeared in the 2023 book Future Visions: Approaching the Economic Singularity edited and published by the Omnifuturists. It is republished here with their permission, with some minor changes. Other chapters in that book explore a variety of alternative responses to the Economic Singularity.

The term “The Economic Singularity” was first introduced in 2016 by writer and futurist Calum Chace in the book with that name.

For a particularly good analysis of the issues arising, and why no simple solutions are adequate, see A World Without Work: Technology, Automation, and How We Should Respond by Oxford economist Daniel Susskind (2020).

A longer version of my argument against the “disbelieve” option is contained in Chapter 4, “Work and purpose”, of my 2018 book Transcending Politics: A Technoprogressive Roadmap to a Comprehensively Better Future.

Several arguments in this essay have been anticipated, from a Marxist perspective, by the book Fully Automated Luxury Communism by Aaron Bastani; see here for my extended review of that book.

A useful account of both the strengths and weaknesses of capitalism can be found in the 2020 book More from Less: The Surprising Story of How We Learned to Prosper Using Fewer Resources – and What Happens Next by Andrew McAfee.

The case for the mixed market – and why economic discussion is often distorted by anti-government rhetoric – is covered in American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity by Jacob S. Hacker and Paul Pierson (2016).

The case that adoption of technology often leads to social harms, including an increase in inequality – and the case for thoughtful governance of how technology is developed and deployed – is given in Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity by Daron Acemoglu and Simon Johnson (2023).

A marvellous positive account of human nature, especially when humans are placed in positions of trust, is contained in Humankind: A Hopeful History by Rutger Bregman.

Options for reform of geopolitics are discussed – along with prior encouraging examples – in chapter 14 of my 2021 book Vital Foresight: The Case For Active Transhumanism.

3 December 2023

“6 Mindblowing Predictions about 2024”

Filed under: Abundance, futurist, intelligence, vision — Tags: , , , , — David Wood @ 11:15 am

As we stand on the brink of 2024, the air is electric with anticipation. The future, often shrouded in mystery and conjecture, seems to beckon us with a mischievous grin, promising wonders and revelations that most of us haven’t even begun to imagine. I’m here to pull back the curtain, just a little, to reveal six mind-blowing predictions about 2024 that 99% of people don’t know about. Fasten your seatbelts, for we’re about to embark on a thrilling ride into the unknown!

[ Note: with the exception of this paragraph inside the square brackets, all the text (and formatting) in this article was created by GPT-4, and hasn’t been edited in the slightest by me. I offer this post as an example of what generative AI can achieve with almost no human effort. It’s far from what I would write personally, but it’s comparable to the fluff that seems to earn lots of so-called futurist writers lots of clicks. As for the images, they were all produced by Midjourney. The idea for this article came from this Medium article by Neeramitra Reddy. ]

1. The Rise of Personal AI Companions

Imagine waking up to a friendly voice that knows you better than anyone else, offering weather updates, reading out your schedule, and even cracking a joke or two to kickstart your day with a smile. In 2024, personal AI companions will move from science fiction to everyday reality. These AI entities will be more than just sophisticated algorithms; they’ll be digital confidantes, seamlessly integrating into our daily lives, offering personalized advice, and even helping us stay on top of our mental and physical health.

2. Green Energy Takes a Giant Leap

The year 2024 will witness a monumental shift in the global energy landscape. We’re not just talking about a few more solar panels and wind turbines here. We’re talking about a green energy revolution! Breakthroughs in solar cell technology will make harnessing the sun’s power more efficient than ever. Wind energy will see advancements in turbine designs, making them more powerful and less intrusive. Cities will start to glow with the promise of a cleaner, greener future, as renewable energy becomes more accessible and affordable than ever before.

3. The Emergence of Smart Cities

Picture a city that not only understands your needs but anticipates them. In 2024, the concept of ‘smart cities’ will evolve from buzzword to reality. We’re talking about urban areas equipped with sensors and AI-driven analytics to optimize everything from traffic flow to energy consumption. Public transport will become more efficient, pollution levels will drop, and city life will be smoother and more enjoyable. These smart cities will be a game-changer, significantly enhancing the quality of life for their residents.

4. Breakthroughs in Health Tech: Customized Medicine

Healthcare in 2024 is set to become more personalized than ever. With advancements in genomics and AI, doctors will be able to tailor treatments and medications to each individual’s genetic makeup. This means fewer side effects and more effective treatments. Imagine a world where your medication is designed specifically for you, maximizing its efficacy and minimizing its risks. This personalized approach will revolutionize how we think about medicine and healthcare.

5. The New Space Race: Commercial Space Travel

2024 could well be the year that space tourism takes off, quite literally. With companies like SpaceX and Blue Origin paving the way, we’re looking at the possibility of commercial space travel becoming a reality for those who dare to dream. Think about it – sipping a beverage while gazing at the Earth from space! This new space race isn’t just for the ultra-wealthy; it’s a stepping stone towards making space travel more accessible to everyone.

6. Virtual Reality: The New Frontier of Entertainment

Virtual reality (VR) is set to take the world of entertainment by storm in 2024. With advancements in technology, VR experiences will become more immersive and interactive, transcending the boundaries of traditional entertainment. Imagine being able to step into your favorite movie, interact with characters, or even alter the storyline. VR will offer an escape into fantastical worlds, making our entertainment experiences more intense and personal.

As we gear up for 2024, it’s clear that we’re on the cusp of a new era. An era defined by technological marvels that promise to reshape our world in ways we can barely begin to fathom. These six predictions are just the tip of the iceberg. The future is a canvas of endless possibilities, and 2024 is poised to paint a picture that’s vibrant, exhilarating, and positively mind-blowing.

So, there you have it – a glimpse into the not-so-distant future that’s brimming with potential and promise. As we inch closer to 2024, let’s embrace these changes with open arms and curious minds. The future is ours to shape, and it’s looking brighter than ever!

2 September 2023

Bletchley Park: Seven dangerous failure modes – and how to avoid them

Filed under: Abundance, AGI, Events, leadership, London Futurists — Tags: , , — David Wood @ 7:13 am

An international AI Safety Summit is being held on 1st and 2nd November at the historic site of Bletchley Park, Buckinghamshire. It’s convened by none other than the UK’s Prime Minister, Rishi Sunak.

It’s a super opportunity for a much-needed global course correction in humanity’s relationship with the fast-improving technology of AI (Artificial Intelligence), before AI passes beyond our understanding and beyond our control.

But when we look back at the Summit in, say, two years’ time, will we assess it as an important step forward, or as a disappointing wasted opportunity?

(Image credit: this UK government video)

On the plus side, there are plenty of encouraging words in the UK government’s press release about the Summit:

International governments, leading AI companies and experts in research will unite for crucial talks in November on the safe development and use of frontier AI technology, as the UK Government announces Bletchley Park as the location for the UK summit.

The major global event will take place on the 1st and 2nd November to consider the risks of AI, especially at the frontier of development, and discuss how they can be mitigated through internationally coordinated action. Frontier AI models hold enormous potential to power economic growth, drive scientific progress and wider public benefits, while also posing potential safety risks if not developed responsibly.

To be hosted at Bletchley Park in Buckinghamshire, a significant location in the history of computer science development and once the home of British Enigma codebreaking – it will see coordinated action to agree a set of rapid, targeted measures for furthering safety in global AI use.

Nevertheless, I’ve seen several similar vital initiatives get side-tracked in the past. When we should be at our best, we can instead be overwhelmed by small-mindedness, by petty tribalism, and by obsessive political wheeling and dealing.

Since the stakes are so high, I’m compelled to draw attention, in advance, to seven ways in which this Summit could turn out to be a flop.

My hope is that my predictions will become self non-fulfilling.

1.) Preoccupation with easily foreseen projections of today’s AI

It’s likely that AI in just 2-3 years will possess capabilities that surprise even the most far-sighted of today’s AI developers. That’s because, as we build larger systems of interacting artificial neurons and other computational modules, the resulting systems are displaying unexpected emergent features.

Accordingly, these systems are likely to possess new ways (and perhaps radically new ways) of:

  • Observing and forecasting
  • Spying and surveilling
  • Classifying and targeting
  • Manipulating and deceiving.

But despite their enhanced capabilities, these systems may still on occasion miscalculate, hallucinate, overreach, suffer from bias, or fail in other ways – especially if they can be hacked or jail-broken.

Just because some software is super-clever, it doesn’t mean it’s free from all bugs, race conditions, design blind spots, mistuned configurations, or other defects.

What this means is that the risks and opportunities of today’s AI systems – remarkable as they are – will likely be eclipsed by the risks and opportunities of the AI systems of just a few years’ time.

A seemingly unending string of pundits is ready to drone on and on about the risks and opportunities of today’s AI systems. Yes, these conversations are important. However, if the Summit becomes preoccupied by those conversations, and gives insufficient attention to the powerful disruptive new risks and opportunities that may arise shortly afterward, it will have failed.

2.) Focusing only on innovation and happy talk

We all like to be optimistic. And we can tell lots of exciting stories about the helpful things that AI systems will be able to do in the near future.

However, we won’t be able to receive these benefits if we collectively stumble before we get there. And the complications of next generation AI systems mean that a number of dimly understood existential landmines stand in our way:

  • If the awesome powers of new AI are used for malevolent purposes by bad actors of various sorts
  • If an out-of-control race between well-meaning competitors (at either the commercial or geopolitical level) results in safety corners being cut, with disastrous consequences
  • If perverse economic or psychological incentives lead people to turn a blind eye to risks of faults in the systems they create
  • If an AI system that has an excellent design and implementation is nevertheless hacked into a dangerous alternative mode
  • If an AI system follows its own internal logic to conclusions very different from what the system designers intended (this is sometimes described as “the AI goes rogue”).

In short, too much happy talk, or imprecise attention to profound danger modes, will cause the Summit to fail.

3.) Too much virtue signalling

One of the worst aspects of meetings about the future of AI is when attendees seem to enter a kind of virtue competition, uttering pious phrases such as:

  • “We believe AI must be fair”
  • “We believe AI must be just”
  • “We believe AI must avoid biases”
  • “We believe AI must respect human values”

This is like Nero fiddling whilst Rome burns.

What the Summit must address are the very tangible threats of AI systems being involved in outcomes much worse than groups of individuals being treated badly. What’s at stake here is, potentially, the lives of hundreds of millions of people – perhaps more – depending on whether an AI-induced catastrophe occurs.

The Summit is not the place for holier-than-thou sanctimonious puff. Facilitators should make that clear to all participants.

4.) Blindness to the full upside of next generation AI

Whilst one failure mode is to underestimate the scale of catastrophic danger that next generation AI might unleash, another failure mode is to underestimate the scale of profound benefits that next generation AI could provide.

What’s within our grasp isn’t just a potential cure for, say, one type of cancer, but a potential cure for all chronic diseases, via AI-enabled therapies that will comprehensively undo the biological damage throughout our bodies that we normally call aging.

Again, what’s within our grasp isn’t just ways to be more efficient and productive at work, but ways in which AI will run the entire economy on our behalf, generating a sustainable superabundance for everyone.

Therefore, at the same time as huge resources are being marshalled on two vital tasks:

  • The creation of AI superintelligence
  • The creation of safe AI superintelligence

we should also keep clearly in mind one additional crucial task:

  • The creation of AI superbenevolence

5.) Accepting the wishful thinking of Big Tech representatives

As Upton Sinclair highlighted long ago, “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

The leadership of Big Tech companies are generally well-motivated: they want their products to deliver profound benefits to humanity.

Nevertheless, they are inevitably prone to wishful thinking. In their own minds, their companies will never make the kind of gross errors that happened at, for example, Union Carbide (Bhopal disaster), BP (Deepwater Horizon disaster), NASA (Challenger and Columbia shuttle disasters), or Boeing (737 Max disaster).

But especially in times of fierce competition (such as the competition to be the web’s preferred search tool, with all the vast advertising revenues arising), it’s all too easy for these leaders to turn a blind eye, probably without consciously realising it, to significant disaster possibilities.

Accordingly, there must be people at the Summit who are able to hold these Big Tech leaders to sustained serious account.

Agreements for “voluntary” self-monitoring of safety standards will not be sufficient!

6.) Not engaging sufficiently globally

If an advanced AI system goes wrong, it’s unlikely to impact just one country.

Given the interconnectivity of the world’s many layers of infrastructure, it’s critical that the solutions proposed by the Summit have a credible roadmap to adoption all around the world.

This is not a Summit where it will be sufficient to persuade the countries who are already “part of the choir”.

I’m no fan of diversity-for-diversity’s-sake. But on this occasion, it will be essential to transcend the usual silos.

7.) Insufficient appreciation of the positive potential of government

One of the biggest myths of the last several decades is that governments can make only a small difference, and that the biggest drivers for lasting change in the world are other forces, such as the free market, military power, YouTube influencers, or popular religious sentiment.

On the contrary, with a wise mix of incentives and restrictions – subsidies and penalties – government can make a huge difference in the well-being of society.

Yes, national industrial policy often misfires, due to administrative incompetence. But there are better examples, where inspirational government leadership transformed the entire operating environment.

The best response to the global challenge of next generation AI will involve a new generation of international political leaders demonstrating higher skills of vision, insight, agility, collaboration, and dedication.

This is not the time for political lightweights, blowhards, chancers, or populist truth-benders.

Footnote: The questions that most need to be tabled

London Futurists is running a sequence of open surveys into scenarios for the future of AI.

Round one has concluded. Round two has just gone live (here).

I urge everyone concerned about the future of AI to take a look at that new survey, and to enter their answers and comments into the associated Google Form.

That’s a good way to gain a fuller appreciation of the scale of the issues that should be considered at Bletchley Park.

That will reduce the chance that the Summit is dominated by small-mindedness, by petty tribalism, or by politicians merely seeking a media splash. Instead, it will raise the chance that the Summit seriously addresses the civilisation-transforming nature of next generation AI.

Finally, see here for an extended analysis of a set of principles that can underpin a profoundly positive relationship between humanity and next generation AI.

15 May 2022

Timeline to 2045: questions answered

This is a follow-up to my previous post, containing more of the material that I submitted around five weeks ago to the FLI World Building competition. In this case, the requirement was to answer 13 questions, with answers limited to 250 words in each case.

Q1: AGI has existed for years, but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?

The Global AGI safety project was one of the most momentous and challenging in human history.

The centrepiece of that project was the set of “Singularity Principles” that had first appeared in print in the book Vital Foresight in 2021, and which were developed in additional publications in subsequent years – a set of recommendations with the declared goal of increasing the likelihood that oncoming disruptive technological changes would have outcomes that are profoundly positive for humanity, rather than deeply detrimental. The principles split into four sections:

  1. A focus, in advance, on the goals and outcomes that were being sought from particular technologies
  2. Analysis of the intrinsic characteristics that are desirable in technological solutions
  3. Analysis of methods to ensure that development takes place responsibly
  4. And a meta-analysis – principles about how this overall set of recommendations could itself evolve further over time, and principles for how to increase the likelihood that these recommendations would be applied in practice rather than simply being some kind of wishful thinking.

What drove increasing support for these principles was a growing awareness, shared around the world, of the risks of cataclysmic outcomes that could arise all too easily from increasingly powerful AI, even when everyone involved had good intentions. This shared sense of danger caused even profound ideological enemies to gather together on a regular basis to review joint progress toward fulfilment of the Singularity Principles, as well as to evolve and refine these Principles.

Q2: The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?

One of the key principles programmed into every advanced AI, from the late 2020s onward, was that no AI should seize or manipulate resources owned by any other AI. Instead, AIs should operate only with resources that have been explicitly provided to them. That prevented any hostile takeover of less capable AIs by more powerful competitors. Accordingly, a community of different AIs coexisted, with differing styles and capabilities.
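The resource principle described above amounts, in effect, to a deny-by-default allow-list check. A minimal sketch of that idea follows; the AI identifiers and resource names are invented for illustration:

```python
# Deny-by-default allow-list: an AI may act on a resource only if that
# resource has been explicitly granted to it; everything else is refused.
GRANTS = {
    "ai_alpha": {"compute_pool_1", "dataset_a"},
    "ai_beta": {"compute_pool_2"},
}

def may_use(ai_id: str, resource: str) -> bool:
    """Permit only explicit grants; unknown AIs get nothing."""
    return resource in GRANTS.get(ai_id, set())
```

Under this rule, `may_use("ai_alpha", "compute_pool_2")` is `False`: one AI cannot seize resources granted to another, which is the point of the principle.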

However, in parallel, the various AIs naturally started to interact with each other, offering services to each other in response to expressions of need. The outcome of this interaction was a blurring of the boundaries between different AIs. Thus, by the 2040s, it was no longer meaningful to distinguish between what had originally been separate pieces of software. Instead of referring to “the Alphabet AGI” or “the Tencent AGI”, and so on, people just talked about “the AGI” or even “AGI”.

The resulting AGI was, however, put to different purposes in different parts of the world, dependent on the policies pursued by the local political leaders.

Q3: How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?

The 2020s were a decade of turbulence, in which a number of arms races proceeded at pace, and when conflict several times came close to spilling over from being latent and implied (“cold”) to being active (“hot”):

  • The great cyber war of 2024 between Iran and Israel
  • Turmoil inside many countries in 2026, associated with the fall from power of the president of Russia
  • Exchanges of small numbers of missiles between North and South Korea in 2027
  • An intense cyber battle in 2028 over the future of an independent Taiwan.

These conflicts resulted in a renewed “never again” global focus to avoid any future recurrences. A new generation of political leaders resolved that, regardless of their many differences, they would put particular kinds of weapons beyond use.

Key to this “never again” commitment was an agreement on “global AI monitoring” – the use of independent narrow AIs to monitor all developments and deployments of potential weapons of mass destruction. That agreement took inspiration from previous international agreements that instituted regular independent monitoring of chemical and biological weapons.

Initial public distrust of the associated global surveillance systems was overcome, in stages, by demonstrations of the inherently trustworthy nature of the software used in these systems – software that adapted various counterintuitive but profound cryptographic ideas from the blockchain discussions of the early and mid-2020s.

Q4: In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that, if at all?

Between 2024 and 2032, the US switched its politics from a troubled bipolar system, with Republicans and Democrats battling each other with intense hostility, into a multi-party system, with a dynamic fluidity of new electoral groupings. The winner of the 2032 election was, for the first time since the 1850s, from neither of the formerly dominant parties. What enabled this transition was the adoption, in stages, of ranked choice voting, in which electors could indicate a sequence of which candidates they preferred. This change enabled electors to express interest in new parties without fearing their votes would be “wasted” or would inadvertently allow the election of particularly detested candidates.
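The ranked-choice mechanism described above can be sketched as an instant-runoff tally. The candidate names and ballots below are invented purely to illustrate the “wasted vote” point:

```python
from collections import Counter

def instant_runoff(ballots):
    """Instant-runoff tally: repeatedly eliminate the candidate with
    the fewest first-choice votes, transferring those ballots to their
    next preference, until one candidate holds a majority.
    (Ties for elimination are broken arbitrarily.)"""
    ballots = [list(b) for b in ballots]
    while True:
        tally = Counter(b[0] for b in ballots if b)
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):
            return leader
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical three-party race: under plurality, Red would win with
# 3 of 7 first-choice votes; under instant runoff, Green is eliminated
# and its voters' second choices elect Blue instead.
ballots = [
    ["Green", "Blue"], ["Green", "Blue"],
    ["Blue", "Red"], ["Blue", "Red"],
    ["Red", "Blue"], ["Red", "Blue"], ["Red", "Blue"],
]
winner = instant_runoff(ballots)  # "Blue"
```

This is why, as the paragraph notes, electors can back a new party without their votes being “wasted”: a first choice for a minor candidate still counts toward a viable second choice.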

The EU led the way in adoption of a “house of AI” as a reviewing body for proposed legislation. Legislation proposed by human politicians was examined by AI, resulting in suggested amendments, along with detailed explanations from the AI of reasons for making these changes. The EU left the ultimate decisions – whether or not to accept the suggestions – in the hands of human politicians. Over time, AI judgements were accepted on more and more occasions, but never uncritically.

China remained apprehensive until the mid-2030s about adopting multi-party politics with full tolerance of dissenting opinions. This apprehension was rooted in historic distrust of the apparent anarchy and dysfunction of politicians who needed to win approval of seemingly fickle electors. However, as AI evidently improved the calibre of online public discussion, with its real-time fact-checking, the Chinese system embraced fuller democratic reforms.

Q5: Is the global distribution of wealth (as measured say by national or international Gini coefficients) more, or less, unequal than 2022’s, and by how much? How did it get that way?

The global distribution of wealth became more unequal during the 2020s before becoming less unequal during the 2030s.

Various factors contributed to inequality increasing:

  • “Winner takes all”: Companies offering second-best products were unable to survive in the marketplace. Swift flows of both information and goods meant that all customers knew about better products and could easily purchase them
  • Financial rewards from the successes of companies increasingly flowed to the owners of the capital deployed, rather than to the people supplying skills and services. That’s because more of the skills and services could be supplied by automation, driving down the salaries that could be claimed by people who were offering the same skills and services
  • The factors that made some products better than others increasingly involved technological platforms, such as the latest AI systems, that were owned by a very small number of companies
  • Companies were able to restructure themselves ingeniously in order to take advantage of tax loopholes and special deals offered by countries desperate for at least some tax revenue.

What caused these trends to reverse was, in short, better politics:

  • Smart collaboration between the national governments of the world, avoiding tax loopholes
  • Recognition by greater numbers of voters of the profound merits of greater redistribution of the fruits of the remarkable abundance of NBIC technologies, as the percentage of people in work declined, and as the problems were more fully recognised of parts of society being “left behind”.

Q6: What is a major problem that AI has solved in your world, and how did it do so?

AI made many key contributions toward the solution of climate change:

  • By enabling more realistic and complete models of all aspects of the climate, including potential tipping points ahead of major climate phase transitions
  • By improving the design of alternative energy sources, including ground-based geothermal, high-altitude winds, ocean-based waves, space-based solar, and several different types of nuclear energy
  • Very significantly, by accelerating designs of commercially meaningful nuclear fusion
  • By identifying the types of “negative emissions technologies” that had the potential to scale up quickly in effectiveness
  • By accelerating the adoption of improved “cultivated meat” as a source of food with many advantages over animal-based agriculture – reducing land use, water use, antibiotic use, and greenhouse gas emissions, and putting an end to the vile practice of the mass slaughter of sentient creatures
  • By assisting the design of new types of cement, glass, plastics, fertilisers, and other materials whose manufacture had previously caused large emissions of greenhouse gases
  • By recommending the sorts of marketing messages that were most effective in changing the minds of previous opponents of effective action.

To be clear, AI did this as part of “NBIC convergence”, in which there are mutual positive feedback loops between progress in each of nanotech, biotech, infotech, and cognotech.

Q7: What is a new social institution that has played an important role in the development of your world?

The G7 group of the democratic countries with the largest economies transitioned in 2023 into the D16, with a sharper commitment than before to championing the core values of democracy: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

The D16 was envisioned from the beginning as a body that would grow in size, to become a global complement to the functioning of the United Nations, able to operate in circumstances that would have resulted in a veto at the UN from countries that paid only lip service to democracy.

One of the first projects of the D16 was to revise the Universal Declaration of Human Rights from the form initially approved by the United Nations General Assembly in 1948, to take account of the opportunities and threats from new technologies, including what are known as “transhuman rights”.

In parallel, another project reached agreement on how to measure an “Index of Human Flourishing” that could replace the economic measure GDP (Gross Domestic Product) as the de facto principal indicator of the wellbeing of societies.
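The post doesn’t specify how this fictional index would be computed, but composite indices of this kind typically follow the pattern of real-world measures such as the UN’s Human Development Index: normalise each dimension onto a common scale, then combine the scores with a geometric mean, so that weakness in one dimension cannot be fully offset by strength in another. Here is a minimal illustrative sketch of that pattern in Python; all dimension names and goalposts are hypothetical, not taken from the post:

```python
# Illustrative sketch only: the "Index of Human Flourishing" is fictional,
# and its real definition is not given in the post. This follows the
# general pattern of composite indices such as the HDI.

from math import prod

def normalise(value, lo, hi):
    """Scale a raw indicator onto [0, 1] given chosen goalposts, clamping."""
    return max(0.0, min(1.0, (value - lo) / (hi - lo)))

def flourishing_index(dimensions):
    """Geometric mean of normalised dimension scores.

    `dimensions` maps a dimension name to (raw value, low goalpost,
    high goalpost). The geometric mean penalises imbalance: a near-zero
    score in any single dimension drags down the whole index.
    """
    scores = [normalise(v, lo, hi) for v, lo, hi in dimensions.values()]
    return prod(scores) ** (1 / len(scores))

# Hypothetical example data, purely for illustration
example = {
    "health":       (75.0, 20.0, 100.0),  # e.g. healthy life expectancy
    "education":    (12.0, 0.0, 18.0),    # e.g. mean years of schooling
    "social trust": (0.6, 0.0, 1.0),      # e.g. survey-based score
}
print(round(flourishing_index(example), 3))
```

The geometric mean (rather than a simple average) is the standard design choice when, as the post implies, flourishing is meant to require adequacy across all dimensions rather than excellence in just one.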

The group formally became the D40 in 2030 and the D90 in 2034. By that time, the D90 was central to agreements to vigorously impose an updated version of the Singularity Principles. Any group anywhere in the world – inside or outside the D90 – that sought to work around these principles, was effectively shut down due to strict economic sanctions.

Q8: What is a new non-AI technology that has played an important role in the development of your world?

Numerous fields have been transformed by atomically precise manufacturing, involving synthetic nanoscale assembly factories. These had been envisioned in various ways by Richard Feynman in 1959 and Eric Drexler in 1986, but did not become commercially viable until the early 2030s.

It had long been recognised that an “existence proof” for nanotechnology was furnished by the operation of ribosomes inside biological cells, with their systematic assembly of proteins from genetic instructions. However, creation of comparable synthetic systems needed to wait for assistance in both design and initial assembly from increasingly sophisticated AI. (DeepMind’s AlphaFold software had given an early indication of these possibilities back in 2021.) Once the process had started, significant self-improvement loops soon accelerated, with each new generation of nanotechnology assisting in the creation of a subsequent better generation.

The benefits flowed both ways: nanotech precision allowed breakthroughs in the manufacture of new types of computer hardware, including quantum computers; these in turn supported better types of AI algorithms.

Nanotech had dramatic positive impact on practices in the production of food, accommodation, clothing, and all sorts of consumer goods. Three areas particularly deserve mention:

  • Precise medical interventions, to repair damage to biological systems
  • Systems to repair damage to the environment as a whole, via a mixture of recycling and regeneration, as well as “negative emissions technologies” operating in the atmosphere
  • Clean energy sources operating at ever larger scale, including atomic-powered batteries

Q9: What changes to the way countries govern the development and/or deployment and/or use of emerging technologies (including AI), if any, played an important role in the development of your world?

Effective governance of emerging technologies involved both voluntary cooperation and enforced cooperation.

Voluntary cooperation – a desire to avoid actions that could lead to terrible outcomes – depended in turn on:

  • An awareness of the risk pathways – similar to the way that Carl Sagan and his colleagues vividly brought to the attention of world leaders in the early 1980s the potential global catastrophe of “nuclear winter”
  • An understanding that the restrictions being accepted would not hinder the development of truly beneficial products
  • An appreciation that everyone would be compelled to observe the same restrictions, and couldn’t gain some short-sighted advantage by breaching the rules.

The enforcement elements depended on:

  • An AI-powered “trustable monitoring system” that was able to detect, through pervasive surveillance, any potential violations of the published restrictions
  • Strong international cooperation, by the D40 and others, to isolate and remove resources from any maverick elements, anywhere in the world, that failed to respect these restrictions.

Public acceptance of trustable monitoring accelerated once it was understood that the systems performing the surveillance could, indeed, be trusted; they would not confer any inappropriate advantage on any grouping able to access the data feeds.

The entire system was underpinned by a vibrant programme of research and education (part of a larger educational initiative known as the “Vital Syllabus”), that:

  • Kept updating the “Singularity Principles” system of restrictions and incentives in the light of improved understanding of the risks and solutions
  • Ensured that the importance of these principles was understood both widely and deeply.

Q10: Pick a sector of your choice (education, transport, energy, communication, finance, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed with AI in your world.

For most of human history, religion had played a pivotal role in shaping people’s outlooks and actions. Religion provided narratives about ultimate purposes. It sanctified social structures. It highlighted behaviour said to be exemplary, as demonstrated in the lives of key religious figures. And it deplored other behaviours said to lead to very bad consequences, if not in the present life, then in an assumed afterlife.

Nevertheless, the philosophical justifications for religions had come under increasing challenge in recent times, with the growth of appreciation of a scientific worldview (including evolution by natural selection), the insights from critical analysis of previously venerated scriptures, and a stark awareness of the tensions between different religions in a multi-polar world.

The decline of influence of religion had both good and bad consequences. Greater freedom of thought and action was accompanied by a shrinking of people’s mental horizons. Without the transcendent appeal of a religious worldview, people’s lives often became dominated instead by egotism or consumerism.

The growth of the transhumanist movement in the 2020s provided one counter to these drawbacks. It was not a religion in the strict sense, but its identification of solutions such as “the abolition of aging”, “paradise engineering”, and “technological resurrection” stirred deep inner personal transformations.

These transformations reached a new level thanks to AGI-facilitated encounters with religious founders, inside immersive virtual reality simulations. New hallucinogenic substances provided extra richness to these experiences. The sector formerly known as “religion” therefore experienced an unexpected renewal. Thank AGI!

Q11: What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2022?

In response to the question, “How much longer do you expect to live?”, the usual answer is “at least another hundred years”.

This answer reflects a deep love of life: people are glad to be alive and have huge numbers of quests, passions, projects, and personal voyages that they are enjoying or to which they’re looking forward. The answer also reflects the extraordinary observation that, these days, very few people die. That’s true in all sectors of society, and in all countries of the world. Low-cost high-quality medical treatments are widely available, to reverse diseases that were formerly fatal, and to repair biological damage that had accumulated earlier in people’s lives. People not only live longer but become more youthful.

The core ideas behind these treatments had been clear since the mid-2020s. Biological metabolism generates as a by-product of its normal operation an assortment of damage at the cellular and intercellular levels of the body. Biology also contains mechanisms for the repair of such damage, but over time, these repair mechanisms themselves lose vitality. As a result, people manifest various so-called “hallmarks of aging”. However, various interventions involving biotech and nanotech can revitalise these repair mechanisms. Moreover, other interventions can replace entire biological systems, such as organs, with bio-synthetic alternatives that actually work better than the originals.

Such treatments were feared and even resisted for a while, by activists such as the “naturality advocates”, but the evident improvements these treatments enabled soon won over the doubters.

Q12: In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

In a second country of your choice, which rights are better and which rights are worse respected in your world than in 2022, and why/how?

Regarding the famous phrase, “Everyone has the right to life, liberty and security of person”, all three of these fundamental rights are upheld much more fully, around the world, in 2045 than in 2022:

  • “Life” no longer tends to stop around the age of seventy or eighty; even people aged well over one hundred look forward to continuing to enjoy the right to life
  • “Liberty” involves more choices about lifestyles, personal philosophy, morphological freedom (augmentation and variation of the physical body) and sociological freedom (new structures for families, social groupings, and self-determined nations); importantly, these are not just “choices in theory” but are “choices in practice”, since means are available to support these modifications
  • “Security” involves greater protection from hazards such as extreme weather, pandemics, criminal enterprises, infrastructure hacking, and military attacks.

These improvements in the observation of rights are enabled by technologies of abundance, operated within a much-improved political framework.

Obtaining these benefits involved people agreeing to give up various possible actions that would have led to fewer freedoms and rights overall:

  • “Rights” to pollute the environment or to inflict other negative externalities
  • “Rights” to restrict the education of their girl children
  • “Rights” to experiment with technology without a full safety analysis being concluded.

For a while, some countries like China provided their citizens with only a sham democracy, fearing an irresponsible exercise of that freedom. But by the mid-2030s, that fear had dissipated, and people in all countries gained fuller participatory rights in governance and lifestyle decisions.

Q13: What’s been a notable trend in the way that people are finding fulfilment?

For most of history, right up to the late 2020s, many people viewed themselves through the prism of their occupation or career. “I’m a usability designer”, they might have said. Or “I’m a data scientist” or “I’m a tour guide”, and so on. Their assessment of their own value was closely linked to the financial rewards they obtained from being an employee.

However, as AI became more capable of undertaking all aspects of what had previously been people’s jobs – including portions involving not only diligence and dexterity but also creativity and compassion – there was a significant decline in the proportion of overall human effort invested in employment. By the late 2030s, most people had stopped looking for paid employment, and were content to receive “universal citizens’ dividend” benefits from the operation of sophisticated automated production facilities.

Instead, more and more people found fulfilment by pursuing any of an increasing number of quests and passions. These included both solitary and collaborative explorations in music, art, mathematics, literature, and sport, as well as voyages in parts of the real world and in myriads of fascinating shared online worlds. In all these projects, people found fulfilment, not by performing better than an AI (which would be impossible), but by improving on their own previous achievements, or in friendly competition with acquaintances.

Careful prompting by the AGI helps to maintain people’s interest levels and a sense of ongoing challenge and achievement. AGI has proven to be a wonderful coach.

A year-by-year timeline to 2045

The ground rules for the worldbuilding competition were attractive:

  • The year is 2045.
  • AGI has existed for at least 5 years.
  • Technology is advancing rapidly and AI is transforming the world sector by sector.
  • The US, EU and China have managed a steady, if uneasy, power equilibrium.
  • India, Africa and South America are quickly on the rise as major players.
  • Despite ongoing challenges, there have been no major wars or other global catastrophes.
  • The world is not dystopian and the future is looking bright.

Entrants were asked to submit four pieces of work. One was a new media piece. I submitted this video:

Another required piece was:

timeline with entries for each year between 2022 and 2045 giving at least two events (e.g. “X invented”) and one data point (e.g. “GDP rises by 25%”) for each year.

The timeline I created dovetailed with the framework from the above video. Since I enjoyed creating it, I’m sharing my submission here, in the hope that it may inspire readers.

(Note: the content was submitted on 11th April 2022.)

2022

US mid-term elections result in log-jammed US governance, widespread frustration, and a groundswell desire for more constructive approaches to politics.

The collapse of a major crypto “stablecoin” results in much wider adverse repercussions than was generally expected, and a new social appreciation of the dangers of flawed financial systems.

Data point: Number of people killed in violent incidents (including homicides and armed conflicts) around the world: 590,000

2023

Fake news spread by social media driven by a new variant of AI provokes riots in which more than 10,000 people die, leading to much greater interest in a set of “Singularity Principles” that had previously been proposed to steer the development of potentially world-transforming technologies.

G7 transforms into the D16, consisting of the world’s 16 leading democracies, proclaiming a profound shared commitment to champion norms of: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 6.4%

2024

South Korea starts a trial of a nationwide UBI scheme, in the first of what will become in later years a long line of increasingly robust “universal citizens’ dividends” schemes around the world.

A previously unknown offshoot of ISIS releases a bioengineered virus. Fortunately, vaccines are quickly developed and deployed against it. In parallel, a bitter cyber war takes place between Iran and Israel. These incidents lead to international commitments to prevent future recurrences.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 38%

2025

Extreme weather – floods and storms – kills tens of thousands in both North America and Europe. A major trial of geo-engineering is rushed through, with reflection of solar radiation in the stratosphere – causing global political disagreement and then a renewed determination for tangible shared action on climate change.

The US President appoints a Secretary for the Future as a top-level cabinet position. More US states adopt ranked-choice voting, allowing third parties to grow in prominence.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 38%

2026

A song created entirely by an AI tops the hit parade, and initiates a radical new musical genre.

Groundswell opposition to autocratic rule in Russia leads to the fall from power of the president and a new dedication to democracy throughout countries formerly perceived as being within Russia’s sphere of direct influence.

Data point: Net greenhouse gas emissions (including those from land-use changes): 59 billion tons of CO2 equivalent – an unwelcome record.

2027

Metformin approved for use as an anti-aging medicine in a D16 country. Another D16 country recommends nationwide regular usage of a new nootropic drug.

Exchanges of small numbers of missiles between North and South Korea lead to regime change inside North Korea and a rapprochement between the long-bitter enemies.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 9.2%

2028

An innovative nuclear fusion system, with its design assisted by AI, runs for more than one hour and generates significantly more energy than was put in.

As a result of disagreements about the future of an independent Taiwan, an intense destructive cyber battle takes place. At the end, the nations of the world commit more seriously than before to avoiding any future cyber battles.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 41%

2029

A trial of an anti-aging intervention in middle-aged dogs is confirmed to have increased remaining life expectancy by 25% without causing any adverse side effects. Public interest in similar interventions in humans skyrockets.

The UK rejoins a reconfigured EU, as an indication of support for sovereignty that is pooled rather than narrow.

Data point: Proportion of world population with formal cryonics arrangements: 1 in 100,000

2030

Russia is admitted into the D40 – a newly expanded version of the D16. The D40 officially adopts the “Index of Human Flourishing” as a more important metric than GDP, and agrees a revised version of the Universal Declaration of Human Rights, brought up to date with transhuman issues.

First permanent implant in a human of an artificial heart with a new design that draws all required power from the biology of the body rather than any attached battery, and whose pace of operation is under the control of the brain.

Data point: Net greenhouse gas emissions (including those from land-use changes): 47 billion tons of CO2 equivalent – a significant improvement

2031

An AI discovers and explains a profound new way of looking at mathematics, DeepMath, leading in turn to dramatically successful new theories of fundamental physics.

Widespread use of dynamically re-programmed nanobots to treat medical conditions that would previously have been fatal.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 23%

2032

First person reaches the age of 125. Her birthday celebrations are briefly disrupted by a small group of self-described “naturality advocates” who chant “120 is enough for anyone”, but that group has little public support.

D40 countries put in place a widespread “trustable monitoring system” to cut down on existential risks (such as spread of WMDs) whilst maintaining citizens’ trust.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 35.7% 

2033

For the first time since the 1850s, the US President comes from a party other than the Republicans and Democrats.

An AI system is able to convincingly pass the Turing test, impressing even the previous staunchest critics with its apparent grasp of general knowledge and common sense. The answers it gives to questions of moral dilemmas also impress previous sceptics.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 58%

2034

The D90 (expanded from the D40) agrees to vigorously impose Singularity Principles rules to avoid inadvertent creation of dangerous AGI.

Atomically precise synthetic nanoscale assembly factories have come of age, in line with the decades-old vision of nanotechnology visionary Eric Drexler, and are proving to have just as consequential an impact on human society as AI.

Data point: Net greenhouse gas *removals*: 10 billion tons of CO2 equivalent – a dramatic improvement

2035

A novel written entirely by an AI reaches the top of the New York Times bestseller list, and is widely celebrated as being the finest piece of literature ever produced.

Successful measures to remove greenhouse gases from the atmosphere, coupled with wide deployment of clean energy sources, lead to a declaration of “victory over runaway climate change”.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 4%

2036

A film created entirely by an AI, without any real human actors, wins Oscar awards.

The last major sceptical holdout, a philosophy professor from an Ivy League university, accepts that AGI now exists. The pope gives his blessing too.

Data point: Proportion of world population with cryonics arrangements: 24%

2037

The last instances of industrial-scale slaughter of animals for human consumption occur, on account of the worldwide adoption of cultivated (lab-grown) meat.

AGI convincingly explains that it is not sentient, and that it has a very different fundamental structure from that of biological consciousness.

Data point: Proportion of world population who are literate: 99.3%

2038

Rejuvenation therapies are in wide use around the world. “Eighty is the new fifty”. First person reaches the age of 130.

Improvements made by AGI upon itself effectively raise its IQ one hundred fold, taking it far beyond the comprehension of human observers. However, the AGI provides explanatory educational material that allows people to understand vast new sets of ideas.

Data point: Proportion of world population who consider themselves opposed to AGI: 0.1%

2039

An extensive set of “vital training” sessions has been established by the AGI, with all citizens over the age of ten participating for a minimum of seven hours per day on 72 days each year, to ensure that humans develop and maintain key survival skills.

Menopause reversal is commonplace. Women who had long ago given up any ideas of bearing another child happily embrace motherhood again.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 99.2%

2040

The use of “mind phones” is widespread: new brain-computer interfaces that allow communication between people by mental thought alone.

People regularly opt to have several of their original biological organs replaced by synthetic alternatives that are more efficient, more durable, and more reliable.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 96%

2041

Shared immersive virtual reality experiences include hyper-realistic simulations of long-dead individuals – including musicians, politicians, royalty, saints, and founders of religions.

The number of miles of journey undertaken by small “flying cars” exceeds that of ground-based powered transport.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 100.0%

2042

First successful revival of mammal from cryopreservation.

AGI presents a proof of the possibility of time travel, but the resources required for safe transit of humans through time would require the equivalent of building a Dyson sphere around the sun.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 0.4%

2043

First person reaches the age of 135, and declares herself to be healthier than at any time in the preceding four decades.

As a result of virtual reality encounters of avatars of founders of religion, a number of new systems of philosophical and mystical thinking grow in popularity.

Data point: Proportion of world’s energy provided by earth-based nuclear fusion: 75%

2044

First human baby born from an ectogenetic pregnancy.

Family holidays on the Moon are an increasingly common occurrence.

Data point: Average amount of their waking time that people spend in a metaverse: 38%

2045

First revival of human from cryopreservation – someone who had been cryopreserved ten years previously.

Subtle messages decoded by AGI from far distant stars in the galaxy confirm that other intelligent civilisations exist, and are on their way to reveal themselves to humanity.

Data point: Number of people killed in violent incidents around the world: 59

Postscript

My thanks go to the competition organisers, the Future of Life Institute, for providing the inspiration for the creation of the above timeline.

Readers are likely to have questions in their minds as they browse the timeline above. More details of the reasoning behind the scenarios involved are contained in three follow-up posts:

19 January 2020

The pace of change, 2020 to 2035

Filed under: Abundance, BHAG, RAFT 2035, vision — Tags: , , , — David Wood @ 10:05 am

The fifteen years from 2020 to 2035 could be the most turbulent of human history. Revolutions are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities.

I wrote these words on the opening page of RAFT 2035, my new book, which was published yesterday and is now available on Amazon sites worldwide (UK, US, DE, FR, ES, IT, NL, JP, BR, CA, MX, AU, IN).

Friends who read drafts of the book ahead of publication asked me:

RAFT envisions a huge amount of change taking place between the present day and 2035. What are the grounds for imagining this kind of change will be possible?

Here’s the answer I included in the final manuscript:

There is nothing inevitable about any of the changes foreseen by RAFT. It is even possible that the pace of change will slow down:

  • Due to a growing disregard for the principles of science and rationality
  • Due to society placing its priorities in other areas
  • Due to insufficient appetite to address hard engineering problems
  • Due to any of a variety of reversals or collapses in the wellbeing of civilisation.

On the other hand, it’s also possible that the pace of technological change as experienced by global society in the last 15 years – a pace that is already breathtaking – could accelerate significantly in the next 15 years:

  • Due to breakthroughs in some fields (e.g. AI or nanotechnology) leading to knock-on breakthroughs in other fields
  • Due to a greater number of people around the world dedicating themselves to working on the relevant technologies, products, and services
  • Due to more people around the world reaching higher levels of education than ever before, being networked together with unprecedented productivity, and therefore being able to build more quickly on each other’s insights and findings
  • Due to new levels of application of design skills, including redesigning the user interfaces to complex products, and redesigning social systems to enable faster progress with beneficial technologies
  • Due to a growing public understanding of the potential for enormous benefits to arise from the NBIC technologies, provided resources are applied more wisely
  • Due to governments deciding to take massive positive action to increase investment in areas that are otherwise experiencing blockages – this action can be considered as akin to a nation moving onto a wartime footing.

Introducing RAFT 2035

Where there is no vision, the people perish.

That insight from the biblical book of Proverbs is as true today as ever.

Without an engaging vision of a better future, we tend to focus on the short-term and on the mundane. Our horizons shrink and our humanity withers.

RAFT 2035 offers an alternative:

  • Thanks to the thoughtful application of breakthroughs in science and technology, the future can be profoundly better than the present
  • 2035 could see an abundance of all-round human flourishing, with no-one left behind.

The word “abundance” here means that there will be enough for everyone to have an excellent quality of life. No one will lack access to healthcare, accommodation, nourishment, essential material goods, information, education, social engagement, free expression, or artistic endeavour.

RAFT 2035 envisions the possibility, by 2035, of an abundance of human flourishing in each of six sectors of human life:

  • Individual health and wellbeing
  • The wellbeing of social relationships
  • The quality of international relationships
  • Sustainable relationships with the environment
  • Humanity’s exploration of the wider cosmos beyond the earth
  • The health of our political systems.

RAFT offers clear goals for what can be accomplished in each of these six sectors by 2035 – 15 goals in total, for society to keep firmly in mind between now and that date.

The 15 goals each involve taking wise advantage of the remarkable capabilities of 21st century science and technology: robotics, biotech, neurotech, nanotech, greentech, artificial intelligence, collaboration technology, and much more.

The goals also highlight how the development and adoption of science and technology can, and must, be guided by the very best of human thinking and values.

Indeed, at the same time as RAFT 2035 upholds this vision, it is also fully aware of deep problems and challenges in each of the six sectors described.

Progress will depend on a growing number of people in all areas of society:

  • Recognising the true scale of the opportunity ahead
  • Setting aside distractions
  • Building effective coalitions
  • Taking appropriate positive actions.

These actions make up RAFT 2035. I hope you like it!

The metaphor and the acronym

The cover of RAFT 2035 depicts a raft sitting on top of waves of turbulence.

As I say in RAFT’s opening chapter, the forthcoming floods of technological and social change set in motion by the NBIC revolutions could turn our world upside down, more quickly and more brutally than we expected. When turbulent waters are bearing down fast, having a sturdy raft at hand can be the difference between life and death.

Turbulent times require a space for shelter and reflection, clear navigational vision despite the mists of uncertainty, and a powerful engine for us to pursue our own direction, rather than just being carried along by forces outside our control. In other words, turbulent times require a powerful “raft” – a roadmap to a future in which the extraordinary powers latent in NBIC technologies are used to raise humanity to new levels of flourishing, rather than driving us over some dreadful precipice.

To spell out the “RAFT” acronym, the turbulent times ahead require:

  • A Roadmap (‘R’) – not just a lofty aspiration, but specific steps and interim targets
  • towards Abundance (‘A’) for all – beyond a world of scarcity and conflict
  • enabling Flourishing (‘F’) as never before – with life containing not just possessions, but enriched experiences, creativity, and meaning
  • via Transcendence (‘T’) – since we won’t be able to make progress by staying as we are.

What’s different about the RAFT vision

Most other political visions assume that only modest changes in the human condition will take place over the next few decades. In contrast, RAFT takes seriously the potential for large changes in the human condition – and sees these changes not only as desirable but essential.

Most other political visions are preoccupied by short term incremental issues. In contrast, RAFT highlights major disruptive opportunities and risks ahead.

Finally, most other political visions seek for society to “go back” to elements of a previous era, which is thought to be simpler, or purer, or in some other way preferable to the apparent messiness of today’s world. In contrast, RAFT offers a bold vision of creating a new, much better society – a society that builds on the existing strengths of human knowledge, skills, and relationships, whilst leaving behind those aspects of the human condition which unnecessarily limit human flourishing.

It’s an ambitious vision. But as I explain in the main chapters of the book, there are many solutions and tools at hand, ready to energise and empower a growing coalition of activists, engineers, social entrepreneurs, researchers, creatives, humanitarians, and more.

These solutions can help us all to transcend our present-day preoccupations, our unnecessary divisions, our individual agendas, and our inherited human limitations.

Going forwards, these solutions mean that, with wise choices, constraints which have long overshadowed human existence can soon be lifted:

  • Instead of physical decay and growing age-related infirmity, an abundance of health and longevity awaits us.
  • Instead of collective foolishness and blinkered failures of reasoning, an abundance of intelligence and wisdom is within our reach.
  • Instead of morbid depression and emotional alienation – instead of envy and egotism – we can achieve an abundance of mental and spiritual wellbeing.
  • Instead of a society laden with deception, abuses of power, and divisive factionalism, we can embrace an abundance of democracy – a flourishing of transparency, access, mutual support, collective insight, and opportunity for all, with no one left behind.

For more information about the book and its availability, see here. I’ll be interested to hear your feedback!

14 June 2019

Fully Automated Luxury Communism: a timely vision

I find myself in a great deal of agreement with Fully Automated Luxury Communism (“FALC”), the provocative but engaging book by Novara Media Co-Founder and Senior Editor Aaron Bastani.

It’s a book that’s going to change the conversation about the future.

It starts well, with six short vignettes, "Six characters in search of a future". Then it moves on, with the quality consistently high, to sections entitled "Chaos under heaven", "New travellers", and "Paradise found". Paradise! Yes, that's the future which is within our grasp. It's a future in which, as Bastani says, people will "lead fuller, expanded lives, not diminished ones".

The comment about “diminished lives” is a criticism of at least some parts of the contemporary green movement:

To the green movement of the twentieth century this is heretical. Yet it is they who, for too long, unwisely echoed the claim that ‘small is beautiful’ and that the only way to save our planet was to retreat from modernity itself. FALC rallies against that command, distinguishing consumption under fossil capitalism – with its commuting, ubiquitous advertising, bullshit jobs and built-in obsolescence – from pursuing the good life under conditions of extreme supply. Under FALC we will see more of the world than ever before, eat varieties of food we have never heard of, and lead lives equivalent – if we so wish – to those of today’s billionaires. Luxury will pervade everything as society based on waged work becomes as much a relic of history as the feudal peasant and medieval knight.

The book is full of compelling turns of phrase that made me think to myself, “I wish I had thought of saying that”. They are phrases that are likely to be heard increasingly often from now on.

The book also contains ideas and examples that I have myself used on many occasions in my own writing and presentations over the years. Indeed, the vision and analysis in FALC have a lot in common with the vision and analysis I have offered, most recently in Sustainable Superabundance, and, in more depth, in my earlier book Transcending Politics.

Four steps in the analysis

In essence, FALC sets out a four-step problem-response-problem-response sequence:

  1. A set of major challenges facing contemporary society – challenges which undermine any notion that social development has somehow already reached a desirable “end of history”
  2. A set of technological innovations, which Bastani calls the “Third Disruption”, with the potential not only to solve the severe challenges society is facing, but also to significantly improve human life
  3. A set of structural problems with the organisation of the economy, which threaten to frustrate and sabotage the positive potential of the Third Disruption
  4. A set of changes in attitude – and political programmes to express these changes – that will allow, after all, the entirety of society to fully benefit from the Third Disruption, and attain the “luxury” paradise the book describes.

In more detail:

First, Bastani highlights five challenges that, in combination, pose (as he puts it) “threats whose scale is civilisational”:

  • Growing resource scarcity – particularly for energy, minerals and fresh water
  • Accelerating climate change and other consequences of global warming
  • Societal aging, as life expectancy increases and birth rates concurrently fall, invalidating the assumptions behind pension schemes and, more generally, the social contract
  • A growing surplus of global poor who form an ever-larger ‘unnecessariat’ (people with no economic value to contribute)
  • A new machine age which will herald ever-greater technological unemployment as progressively more physical and cognitive labour is performed by machines, rather than humans.

Second, Bastani points to a series of technological transformations that comprise an emerging "Third Disruption" (following the earlier disruptions of the Agricultural and Industrial Revolutions). These transformations apply information technology to fields such as renewable energy, food production, resource management (including asteroid mining), healthcare, housing, and education. The result of these transformations could ("if we want it", Bastani remarks) be a society characterised by the terms "post-scarcity" and "post-work".

Third, this brings us to the deeper problem, namely the way society puts too much priority on the profit motive.

Transcending capitalism

The economic framework known as capitalism has generated huge amounts of innovation in products and services. These innovations have taken place because entrepreneurs have been motivated to create and distribute new items for exchange and profit. But in circumstances when profits would be small, there’s less motivation to create the goods and services. To the extent that goods and services are nowadays increasingly dependent on information, this poses a problem, since information involves no intrinsic costs when it is copied from one instance to another.

Increasingly, what's special about a product isn't the materials from which it is composed, but the set of processes (that is, information) used to manipulate those materials to create the product. Increasingly, what's special about a service isn't the tacit skills of the people delivering that service, but the processes (that is, information) by which any reasonably skilled person can be trained to deliver that service. All this leads to pressures for the creation of "artificial scarcity" that prohibits the copying of certain types of information.

The fact that goods and services become increasingly easy to duplicate should be seen as a positive. It should mean lower costs all round. It should mean that more people can access good quality housing, good quality education, good quality food, and good quality clean energy. It's something that society should welcome enthusiastically. However, since profits are harder to achieve in these circumstances, many business leaders (and the hangers-on who are dependent on these business leaders) wish to erect barriers and obstacles anew. Rather than embracing post-scarcity, they wish to extend the prevalence of scarcity.

This is just one example of the “market failures” which can arise from unfettered capitalism. In my own book Sustainable Superabundance, five of the twelve chapters end with a section entitled “Beyond the profit motive”. It’s not that I view the profit motive as inherently bad. Far from it. Instead, it’s that there are many problems in letting the profit motive dominate other motivations. That’s why we need to look beyond the profit motive.

In much the same way, Bastani recognises capitalism as an essential precursor to the fully automated luxury communism he foresees. Here, as in much of his thinking, he draws inspiration from the writing of Karl Marx. Bastani notes that,

In contrast to his portrayal by critics, Marx was often lyrical about capitalism. His belief was that despite its capacity for exploitation, its compulsion to innovate – along with the creation of a world market – forged the conditions for social transformation.

Bastani quotes Marx writing as follows in 1848:

The bourgeoisie … has been the first to show what man’s activity can bring about. It has accomplished wonders far surpassing Egyptian pyramids, Roman aqueducts, and Gothic cathedrals; it has conducted expeditions that put in the shade all former Exoduses of nations and crusades.

By the way, don't be put off by the word "communism" in the book's title. There's no advocacy here of a repeat of what previous self-declared communist regimes have done. Communism was not possible until the present time, since it depends upon technology having reached a sufficiently advanced state. Bastani explains it as follows:

While it is true that a number of political projects have labelled themselves communist over the last century, the aspiration was neither accurate nor – as we will go on to see – technologically possible. ‘Communism’ is used here for the benefit of precision; the intention being to denote a society in which work is eliminated, scarcity replaced by abundance and where labour and leisure blend into one another. Given the possibilities arising from the Third Disruption, with the emergence of extreme supply in information, labour, energy and resources, it should be viewed not only as an idea adequate to our time but impossible before now.

And to emphasise the point:

FALC is not the communism of the early twentieth century, nor will it be delivered by storming the Winter Palace.

The technologies needed to deliver a post-scarcity, post-work society – centred around renewable energy, automation and information – were absent in the Russian Empire, or indeed anywhere else until the late 1960s…

Creating communism before the Third Disruption is like creating a flying machine before the Second. You could conceive of it – and indeed no less a genius than Leonardo Da Vinci did precisely that – but you could not create it. This was not a failure of will or of intellect, but simply an inevitability of history.

Marx expected a transformation from capitalism to communism within his own lifetime. He would likely have been very surprised at the ability of capitalism to reinvent itself in the face of the many challenges and difficulties it has faced in subsequent decades. Marx's failure to predict the subsequent history of capitalism accurately is one factor people cite to justify their disregard for Marxism. The question, however, is whether his analysis was merely premature rather than completely wrong. Bastani argues for the former. The internal tensions of a profit-led society have caused a series of large financial and economic crashes, but have not, so far, led to an effective transition away from profit-seeking to abundance-seeking. However, Bastani argues, the stakes are nowadays so high that the continued pursuit of profits-at-all-costs cannot be sustained.

This brings us to the fourth phase of the argument – the really critical one. If there are problems with capitalism, what is to be done? Rather than storming any modern-day Winter Palace, where should a fervour for change best be applied?

Solutions

Bastani's answer starts by emphasising that the technologies of the Third Disruption, by themselves, provide no guarantee of a move to a society with ample abundance. Referring to Melvin Kranzberg's laws of technology, Bastani observes that

How technology is created and used, and to whose advantage, depends on the political, ethical and social contexts from which it emerges.

In other words, ideas and structures play a key role. To increase the chances of optimal benefits from the technologies of the Third Disruption, ideas prevalent in society will need to change.

The first change in ideas is a different attitude towards one of the dominant ideologies of our time, sometimes called neoliberalism. Bastani refers at various points to “market fundamentalism”. This is the idea that free pursuit of profits will inevitably result in the best outcome for society as a whole – that the free market is the best tool to organise the distribution of resources. In this viewpoint, regulations should be resisted, where they interfere with the ability of businesses to offer new products and services to the market. Workers’ rights should be resisted too, since they will interfere with the ability of businesses to lower wages and reassign tasks overseas. And so on.

Bastani has a list of examples of gross social failures arising from pursuit of neoliberalism. This includes the collapse in 2018 of Carillion, the construction and facilities management company. Bastani notes:

With up to 90 per cent of Carillion’s work subcontracted out, as many as 30,000 businesses faced the consequences of its ideologically driven mismanagement. Hedge funds in the City, meanwhile, made hundreds of millions from speculating on its demise.

Another example is the tragedy of the 2017 fire at the 24-storey Grenfell Tower in West London, in which 72 people perished:

The neoliberal machine has human consequences that go beyond spreadsheets and economic data. Beyond, even, in-work poverty and a life defined by paying ever higher rents to wealthy landlords and fees to company shareholders. As bad as those are they pale beside its clearest historic expression in a generation: the derelict husk of Grenfell Tower…

A fire broke out which would ravage the building in a manner not seen in Britain for decades. The primary explanation for its rapid, shocking spread across the building – finished in 1974 and intentionally designed to minimise the possibility of such an event – was the installation of flammable cladding several years earlier, combined with poor safety standards and no functioning sprinklers – all issues highlighted by the residents' Grenfell Action Group before the fire.

The cladding itself, primarily composed of polyethylene, is as flammable as petroleum. Advances in material science means we should be building homes that are safer, and more efficient, than ever before. Instead a cut-price approach to housing the poor prevails, prioritising external aesthetics for wealthier residents. In the case of Grenfell that meant corners were cut and lives were lost. This is not a minor political point and shows the very real consequences of ‘self-regulation’.

Bastani is surely right that greater effort is needed to ensure everyone understands the various failure modes of free markets. A better appreciation is overdue of the positive role that well-designed regulations can play in ensuring greater overall human flourishing in the face of corporations that would prefer to put their priorities elsewhere. The siren calls of market fundamentalism need to be resisted.

I would add, however, that a different kind of fundamentalism needs to be resisted and overcome too. This is anti-market fundamentalism. As I wrote in the chapter “Markets and fundamentalists” in Transcending Politics,

Anti-market fundamentalists see the market system as having a preeminently bad effect on the human condition. The various flaws with free markets… are so severe, say these critics, that the most important reform to pursue is to dismantle the free market system. That reform should take a higher priority than any development of new technologies – AI, genetic engineering, stem cell therapies, neuro-enhancers, and so on. Indeed, if these new technologies are deployed whilst the current free market system remains in place, it will, say these critics, make it all the more likely that these technologies will be used to oppress rather than liberate.

I believe that both forms of fundamentalism (pro-market and anti-market) need to be resisted. I look forward to wiser management of the market system, rather than dismantling it. In my view, key to this wise management is the reform and protection of a number of other social institutions that sit alongside markets – a free press, free judiciary, independent regulators, and, yes, independent politicians.

I share the view of political scientists Jacob S. Hacker and Paul Pierson, articulated in their fine 2016 book American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity, that the most important social innovation of the 20th century was the development of the mixed economy. In a mixed economy, effective governments work alongside the remarkable capabilities of the market economy, steering it and complementing it. Here’s what Hacker and Pierson have to say about the mixed economy:

The mixed economy spread a previously unimaginable level of broad prosperity. It enabled steep increases in education, health, longevity, and economic security.

These writers explain the mixed economy by an elaboration of Adam Smith’s notion of “the invisible hand”:

The political economist Charles Lindblom once described markets as being like fingers: nimble and dexterous. Governments, with their capacity to exercise authority, are like thumbs: powerful but lacking subtlety and flexibility. The invisible hand is all fingers. The visible hand is all thumbs. Of course, one wouldn’t want to be all thumbs. But one wouldn’t want to be all fingers either. Thumbs provide countervailing power, constraint, and adjustments to get the best out of those nimble fingers.

The characterisation by Hacker and Pierson of the positive role of government is, to my mind, spot on. It's backed up in their book by lots of instructive episodes from American history, going all the way back to the revolutionary founders:

  • Governments provide social coordination of a type that fails to arise by other means of human interaction, such as free markets
  • Markets can accomplish a great deal, but they're far from all-powerful. Governments ensure that suitable investment takes place of the sort that would not happen if it were left to each individual to decide alone. Governments build up key infrastructure where there is no short-term economic case for individual companies to invest to create it
  • Governments defend the weak from the powerful. They defend those who lack the knowledge to realise that vendors may be on the point of selling them a lemon and then beating a hasty retreat. They take actions to ensure that social free-riders don’t prosper, and that monopolists aren’t able to take disproportionate advantage of their market dominance
  • Governments prevent all the value in a market from being extracted by forceful, well-connected minority interests, in ways that would leave the rest of society impoverished. They resist the power of “robber barons” who would impose numerous tolls and charges, stifling freer exchange of ideas, resources, and people. Therefore governments provide the context in which free markets can prosper (but which those free markets, by themselves, could not deliver).

It’s a deeply troubling development that the positive role of enlightened government is something that is poorly understood in much of contemporary public discussion. Instead, as a result of a hostile barrage of ideologically-driven misinformation, more and more people are calling for a reduction in the scope and power of government. That tendency – the tendency towards market fundamentalism – urgently needs to be resisted. But at the same time, we also need to resist the reverse tendency – the tendency towards anti-market fundamentalism – the tendency to belittle the latent capabilities of free markets.

To Bastani’s credit, he avoids advocating any total government control over planning of the economy. Instead, he offers praise for Eastern European Marxist writers such as Michał Kalecki, Włodzimierz Brus, and Kazimierz Łaski, who advocated important roles for market mechanisms in the approach to the communist society in which they all believed. Bastani comments,

[These notions were] expanded further in 1989 with Brus and Łaski claiming that under market socialism, publicly owned firms would have to be autonomous – much as they are in market capitalist systems – and that this would necessitate a socialised capital market… Rather than industrial national monoliths being lauded as the archetype of economic efficiency, the authors argued for a completely different kind of socialism declaring, ‘The role of the owner-state should be separated from the state as an authority in charge of administration … (enterprises) have to become separated not only from the state in its wider role but also from one another.’

Bastani therefore supports a separation of two roles:

  • The political task of establishing the overall direction and framework for the development of the economy
  • The operational task of creating goods and services within that framework – a task that may indeed utilise various market mechanisms.

Key to establishing the overall direction is superseding society's reliance on the GDP measure. Bastani is particularly good in his analysis of the growing shortcomings of GDP (Gross Domestic Product), and on what must be included in its replacement, which he calls an "Abundance Index":

Initially such an index would integrate CO2 emissions, energy efficiency, the falling cost of energy, resources and labour, the extent to which UBS [Universal Basic Services] had been delivered, leisure time (time not in paid employment), health and lifespan, and self-reported happiness. Such a composite measure, no doubt adapted to a variety of regional and cultural differences, would be how we assess the performance of post-capitalist economies in the passage to FALC. This would be a scorecard for social progress assessing how successful the Third Disruption is in serving the common good.
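To make the idea of such a composite measure concrete, here is a minimal sketch (my own illustration, not anything from the book) of how an "Abundance Index" might be computed: each raw indicator is normalised onto a 0–1 scale, where 1 is the desirable extreme, and the normalised scores are then combined as a weighted average. All the indicator names, reference values, and weights below are illustrative assumptions, not Bastani's.

```python
# Illustrative sketch of a composite "Abundance Index".
# Indicator names, reference values, and weights are all hypothetical.

def normalise(value, worst, best):
    """Map a raw indicator onto 0-1, where 1 is the desirable extreme.

    Works whether 'best' is above or below 'worst' (e.g. for CO2
    emissions, lower is better). Values beyond the range are clamped.
    """
    score = (value - worst) / (best - worst)
    return max(0.0, min(1.0, score))

def abundance_index(scores, weights):
    """Weighted average of normalised indicator scores."""
    total_weight = sum(weights[name] for name in scores)
    weighted_sum = sum(weights[name] * score for name, score in scores.items())
    return weighted_sum / total_weight

# Hypothetical indicator values for one region.
scores = {
    "co2_emissions": normalise(5.0, worst=20.0, best=0.0),    # tonnes/capita
    "leisure_time": normalise(30.0, worst=0.0, best=60.0),    # hours/week
    "happiness": normalise(7.0, worst=0.0, best=10.0),        # survey, 0-10
}
weights = {"co2_emissions": 2.0, "leisure_time": 1.0, "happiness": 1.0}

print(round(abundance_index(scores, weights), 3))  # prints 0.675
```

The design choice worth noting is that the weights and the "worst"/"best" reference points carry all the value judgements; as Bastani says, any real index would need to be adapted to regional and cultural differences.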

Other policies Bastani recommends in FALC include:

  • Revised priorities for central banks – so that they promote increases of the Abundance Index, rather than simply focusing on the control of inflation
  • Step by step increases in UBS (Universal Basic Services) – rather than the UBI (Universal Basic Income) that is often advocated these days
  • Re-localisation of economies through what Bastani calls “progressive procurement and municipal protectionism”.

But perhaps the biggest recommendation Bastani makes is for the response to society’s present political issues to be a “populist” one.

Populism and its dangers

I confess that the word “populist” made me anxious. I worry about groundswell movements motivated by emotion rather than clear-sightedness. I worry about subgroups of citizens who identify themselves as “the true people” (or “the real people”) and who take any democratic victory as a mandate for them to exclude any sympathy for minority viewpoints. (“You lost. Get over it!”) I worry about demagogues who rouse runaway emotional responses by scapegoating easy targets (such as immigrants, overseas governments, transnational organisations, “experts”, “the elite”, or culturally different subgroups).

In short, I was more worried by the word “populist” than the word “communist”.

As it happens – thankfully – that’s different from the meaning of “populist” that Bastani has in mind. He writes,

For the kind of change required, and for it to last in a world increasingly at odds with the received wisdom of the past, a populist politics is necessary. One that blends culture and government with ideas of personal and social renewal.

He acknowledges that some thinkers will disagree with this recommendation:

Others, who may agree about the scale and even urgent necessity of change, will contend that such a radical path should only be pursued by a narrow technocratic elite. Such an impulse is understandable if not excusable, for the suspicion that democracy unleashes 'the mob' is as old as the idea itself. What is more, a superficial changing of the guard exclusively at the level of policy-making is easier to envisage than building a mass political movement – and far simpler to execute as a strategy. Yet the truth is any social settlement imposed without mass consent, particularly given the turbulent energies unleashed by the Third Disruption, simply won't endure.

In other words, voters as a whole must be able to understand how the changes ahead, if well managed, will benefit everyone, not just in a narrow economic sense, but in the sense of liberating people from previous constraints.

I have set out similar ideas, under the term “superdemocracy”, described as follows:

A renewal of democracy in which, rather than the loudest and richest voices prevailing, the best insights of the community are elevated and actioned…

The active involvement of the entire population, both in decision-making, and in the full benefits of [technology]…

Significantly improved social inclusion and resilience, whilst upholding diversity and liberty – overcoming human tendencies towards tribalism, divisiveness, deception, and the abuse of power.

That last proviso is critical and deserves repeating: “…overcoming human tendencies towards tribalism, divisiveness, deception, and the abuse of power”. Otherwise, any movements that build popular momentum risk devouring themselves in time, in the way that the French Revolution sent Maximilien Robespierre to the guillotine, and the Bolshevik Revolution led to the deaths of many of the original revolutionaries following absurd show trials.

You’ll find no such proviso in FALC. Bastani writes,

Pride, greed and envy will abide as long as we do.

He goes on to offer pragmatic advice,

The management of discord between humans – the essence of politics – [is] an inevitable feature of any society we share with one another.

Indeed, that is good advice. We all need to become better at managing discord. However, writing as a transhumanist, I believe we can, and must, do better. The faults within human nature are something which the Third Disruption (to use Bastani’s term) will increasingly allow us to address and transcend.

Consider the question: Is it possible to significantly improve politics, over the course of, say, the next dozen years, without first significantly improving human nature?

Philosophies of politics can in principle be split into four groups, depending on the answer they give to that question:

  1. We shouldn’t try to improve human nature; that’s the route to hell
  2. We can have a better politics without any change in human nature
  3. Improving human nature will turn out to be relatively straightforward; let’s get cracking
  4. Improving human nature will be difficult but is highly desirable; we need to carefully consider the potential scenarios, with an open mind, and then make our choices.

For the avoidance of doubt, the fourth of these positions is the one I advocate. In contrast, I believe Bastani would favour the second answer – or maybe the first.

Transcending populism

(The following paragraphs are extracted from the chapter “Humans and superhumans” of my book Transcending Politics.)

We humans are sometimes angelic, yet sometimes diabolic. On occasion, we find ways to work together on a transcendent purpose with wide benefits. But on other occasions, we treat each other abominably. Not only do we go to war with each other, but our wars are often accompanied by hideous so-called “war crimes”. Our religious crusades, whilst announced in high-minded language, have involved the subjugation or extermination of hundreds of thousands of members of opposing faiths. The twentieth century saw genocides on a scale never before experienced. For a different example of viciousness, the comments attached to YouTube videos frequently show intense hatred and vitriol.

As technology puts more power in our hands, will we become more angelic, or more diabolic? Probably both, at the same time.

A nimbleness of mind can coincide with a harshness of spirit. Just because someone has more information at their disposal, that’s no guarantee the information will be used to advance beneficial initiatives. Instead, that information can be mined and contoured to support whatever course of action someone has already selected in their heart.

Great intelligence can be coupled with great knowledge, for good but also for ill. The outcome in some sorry cases is greater vindictiveness, greater manipulation, and greater enmity. Enhanced cleverness can make us experts in techniques to suppress inconvenient ideas, to distort inopportune findings, and to tarnish independent thinkers. We can find more devious ways to mislead and deceive people – and, perversely, to mislead and deceive ourselves. In this way, we could create the mother of all echo chambers. It would take only a few additional steps for obsessive human superintelligence to produce unprecedented human malevolence.

Transhumanists want to ask: can’t we find a way to alter the expression of human nature, so that we become less likely to use our new technological capabilities for malevolence, and more likely to use them for benevolence? Can’t we accentuate the angelic, whilst diminishing the diabolic?

To some critics, that’s an extremely dangerous question. If we mess with human nature, they say, we’ll almost certainly make things worse rather than better.

Far preferable, in this analysis, is to accept our human characteristics as a given, and to evolve our social structures and cultural frameworks with these fixed characteristics in mind. In other words, our focus should be on the likes of legal charters, restorative justice, proactive education, multi-cultural awareness, and effective policing.

My view, however, is that these humanitarian initiatives towards changing culture need to be complemented with transhumanist initiatives to alter the inclinations inside the human soul. We need to address nature at the same time as we address nurture. To do otherwise is to unnecessarily limit our options – and to make it more likely that a bleak future awaits us.

The good news is that, for this transhumanist task, we can take advantage of a powerful suite of emerging new technologies. The bad news is that, like all new technologies, there are risks involved. As these technologies unfold, there will surely be unforeseen consequences, especially when different trends interact in unexpected ways.

Transhumanists have long been well aware of the risks in changing the expression of human nature. Witness the words of caution baked deep into the Transhumanist Declaration. But these risks are no reason for us to abandon the idea. Instead, they are a reason to exercise care and judgement in this project. Accepting the status quo, without seeking to change human nature, is itself a highly risky approach. Indeed, there are no risk-free options in today’s world. If we want to increase our chances of reaching a future of sustainable abundance for all, without humanity being diverted en route to a new dark age, we should leave no avenue unexplored.

Transhumanists are by no means the first set of thinkers to desire positive changes in human nature. Philosophers, religious teachers, and other leaders of society have long called for humans to overcome the pull of “attachment” (desire), self-centredness, indiscipline, “the seven deadly sins” (pride, greed, lust, envy, gluttony, wrath, and sloth), and so on. Where transhumanism goes beyond these previous thinkers is in highlighting new methods that can now be used, or will shortly become available, to assist in the improvement of character.

Collectively these methods can be called “cognotech”. They will boost our all-round intelligence: emotional, rational, creative, social, spiritual, and more. Here are some examples:

  • New pharmacological compounds – sometimes called “smart drugs”
  • Gentle stimulation of the brain by a variety of electromagnetic methods – something that has been trialled by the US military
  • Alteration of human biology more fundamentally, by interventions at the genetic, epigenetic, or microbiome level
  • Vivid experiences within multi-sensory virtual reality worlds that bring home to people the likely consequences of their current personal trajectories (from both first-person and third-person points of view), and allow them to rehearse changes in attitude
  • The use of “intelligent assistance” software that monitors our actions and offers us advice in a timely manner, similar to the way that a good personal friend will occasionally volunteer wise counsel; intelligent assistants can also strengthen our positive characteristics by wise selection of background music, visual imagery, and “thought for the day” aphorisms to hold in mind.

Technological progress can also improve the effectiveness of various traditional methods for character improvement:

  • The reasons why meditation, yoga, and hypnosis can have beneficial results are now more fully understood than before, enabling major improvements in the efficacy of these practices
  • Education of all sorts can be enhanced by technology such as interactive online video courses that adapt their content to the emerging needs of each different user
  • Prompted by alerts generated by online intelligent assistants, real-world friends can connect at critical moments in someone’s life, in order to provide much-needed personal support
  • Information analytics can resolve some of the long-running debates about which diets – and which exercise regimes – are the ones that will best promote all-round health for given individuals.

The technoprogressive feedback cycle

One criticism of the initiative I’ve just outlined is that it puts matters the wrong way round.

I’ve been describing how individuals can, with the aid of technology as well as traditional methods, raise themselves above their latent character flaws, and can therefore make better contributions to the political process (either as voters or as actual politicians). In other words, we’ll get better politics as a result of getting better people.

However, an opposing narrative runs as follows. So long as our society is full of emotional landmines, it’s a lot to expect people to become more emotionally competent. So long as we live in a state of apparent siege, immersed in psychological conflict, it’s a big ask for people to give each other the benefit of the doubt, in order to develop new bonds of trust. Where people are experiencing growing inequality, a deepening sense of alienation, a constant barrage of adverts promoting consumerism, and an increasing foreboding about an array of risks to their wellbeing, it’s not reasonable to urge them to make the personal effort to become more compassionate, thoughtful, tolerant, and open-minded. They’re more likely to become angry, reactive, intolerant, and closed-minded. Who can blame them? Therefore – so runs this line of reasoning – it’s more important to improve the social environment than to urge the victims of that social environment to learn to turn the other cheek. Let’s stop obsessing about personal ethics and individual discipline, and instead put every priority on reducing the inequality, alienation, consumerist propaganda, and risk perception that people are experiencing. Instead of fixating upon possibilities for technology to rewire people’s biology and psychology, let’s hurry up and provide a better social safety net, a fairer set of work opportunities, and a deeper sense that “we’re all in this together”.

I answer this criticism by denying that the causation runs in only one direction. We shouldn’t pick just a single route of influence – either that better individuals will result in a better society, or that a better society will enable the emergence of better individuals. On the contrary, there’s a two-way flow of influence.

Yes, there’s such a thing as psychological brutalisation. In a bad environment, the veneer of civilisation can quickly peel away. Youngsters who would, in more peaceful circumstances, instinctively help elderly strangers to cross the road, can quickly degrade in times of strife into obnoxious, self-obsessed bigots. But that path doesn’t apply to everyone. Others in the same situation take the initiative to maintain a cheery, contemplative, constructive outlook. Environment influences the development of character, but doesn’t determine it.

Accordingly, I foresee a positive feedback cycle:

  • With the aid of technological assistance, more people – whatever their circumstances – will be able to strengthen the latent “angelic” parts of their human nature, and to hold in check the latent “diabolic” aspects
  • As a result, at least some citizens will be able to take wiser policy decisions, enabling an improvement in the social and psychological environment
  • The improved environment will, in turn, make it easier for other positive personal transformations to occur – involving a larger number of people, and having a greater impact.

One additional point deserves to be stressed. The environment that influences our behaviour involves not just economic relationships and the landscape of interpersonal connections, but also the set of ideas that fill our minds. To the extent that these ideas give us hope, we can find extra strength to resist the siren pull of our diabolic nature. These ideas can help us focus our attention on positive, life-enhancing activities, rather than letting our minds shrink and our characters deteriorate.

This indicates another contribution of transhumanism to building a comprehensively better future. By painting a clear, compelling image of sustainable abundance, credibly achievable in just a few decades, transhumanism can spark revolutions inside the human heart.

That potential contribution brings us back to similar ideas in FALC. Bastani wants a populist transformation of the public consciousness, one that includes inspiring new ideas for how everyone can flourish in a post-scarcity, post-work society.

I’m all in favour of inspiring new ideas. The big question, of course, is whether these new ideas skate over important omissions that will undermine the whole project.

Next steps

I applaud FALC for the way it advances serious discussion about a potentially better future – a potentially much better future – that could be attained in just a few decades.

But just as FALC indicates a reason why communism could not be achieved before the present time, I want to indicate a reason why the FALC project could likewise fail.

Communism was impossible, Bastani says, before the technologies of the Third Disruption provided the means for sufficient abundance of energy, food, education, material goods, and so on. In turn, my view is that communism will remain impossible (or at least unlikely) without attention being paid to the proactive transformation of human nature.

We should not underestimate the potential of the technologies of the Third Disruption. They won’t just provide more energy, food, education, and material goods. They won’t just enable people to have healthier bodies throughout longer lifespans. They will also enable all of us to attain better levels of mental and emotional health – psychological and spiritual wellbeing. If we want it.

That’s why the Abundance 2035 goals on which I am presently working contain a wider set of ambitions than feature in FALC. For example, these goals include aspirations that, by 2035,

  • The fraction of people with mental health problems will be 1% or less
  • Voters will no longer routinely assess politicians as self-serving, untrustworthy, or incompetent.

To join a discussion about the Abundance 2035 goals (and about a set of interim targets to be achieved by 2025), check out this London Futurists event taking place at Newspeak House on Monday 1st July.

To hear FALC author Aaron Bastani in discussion of his ideas, check out this Virtual Futures event, also taking place at Newspeak House, on Tuesday 25th June.

Finally, for an all-round assessment of the relevance of transhumanism to building a (much) better future, check out TransVision 2019, happening at Birkbeck College on the weekend of 6-7 July, where 22 different speakers will be sharing their insights.

7 June 2019

Feedback on what goals the UK should have in mind for 2035

Filed under: Abundance, BHAG, politics, TPUK, vision — Tags: , , , , — David Wood @ 1:56 pm

Some political parties are preoccupied with short-term matters.

It’s true that many short-term matters demand attention. But we need to take the time to consider, as well, some important longer-term risks and issues.

If we give these longer-term matters too little attention, we may wake up one morning and bitterly regret our previous state of distraction. By then, we may have missed the chance to avoid an enormous setback. It could also be too late to take advantage of what previously was a very positive opportunity.

For these reasons, the Transhumanist Party UK seeks to draw attention to a number of transformations that could take place in the UK between now and 2035.

Rather than having a manifesto for the next, say, five years, the Party is developing a vision for the year 2035 – a vision of much greater human flourishing.

It’s a vision in which there will be enough for everyone to have an excellent quality of life. No one should lack access to healthcare, shelter, nourishment, information, education, material goods, social engagement, free expression, or artistic endeavour.

The vision also includes a set of strategies by which the current situation (2019) could be transformed, step by step, into the desired future state (2035).

Key to these strategies is for society to take wise advantage of the remarkable capabilities of twenty-first century science and technology: robotics, biotech, neurotech, greentech, collabtech, artificial intelligence, and much more. These technologies can provide all of us with the means to live better than well – to be healthier and fitter than ever before; nourished emotionally and spiritually as well as physically; and living at peace with ourselves, the environment, and our neighbours both near and far.

Alongside science and technology, there’s a vital role that politics needs to play:

  • Action to encourage the kind of positive collaboration which might otherwise be undermined by free-riders
  • Action to adjust the set of subsidies, incentives, constraints, and legal frameworks under which we all operate
  • Action to protect the citizenry as a whole from the abuse of power by any groups with monopoly or near-monopoly status
  • Action to ensure that the full set of “externalities” (both beneficial and detrimental) of market transactions are properly considered, in a timely manner.

To make this vision more concrete, the Party wishes to identify a set of specific goals for the UK for the year 2035. At present, there are 16 goals under consideration. These goals are briefly introduced in a video:

As you can see, the video invites viewers to give their feedback, by means of an online survey. The survey collects opinions about the various goals: are they good as they stand? Too timid? Too ambitious? A bad idea? Uninteresting? Or something else?

The survey also invites ideas about other goals that should perhaps be added into the mix.

Since the survey has been launched, feedback has been accumulating. I’d like to share some of that feedback now, along with some of my own personal responses.

The most unconditionally popular goal so far

Of the 16 goals proposed, the one which has received the highest proportion of “Good as it stands” responses is Goal 4, “Thanks to innovations in recycling, manufacturing, and waste management, the UK will be zero waste, and will have no adverse impact on the environment.”

(To see the rationale for each goal, along with ideas on measurement, the current baseline, and the strategy to achieve the goal, see the document on the Party website.)

That goal has, so far, been evaluated as “Good as it stands” by 84% of respondents.

One respondent gave this comment:

Legislation and Transparency are equally as important here, to gain the public’s trust that there is actual quantified benefits from this, or rather to de-abstractify recycling and make it more tangible and not just ‘another bin’

My response: succeeding with this goal will involve more than the actions of individuals putting materials into different recycling bins.

Research from the Stockholm Resilience Centre has identified nine “planetary boundaries” where human activity is at risk of pushing the environment into potentially very dangerous states of affairs.

For each of these planetary boundaries, the same themes emerge:

  • Methods are known that would replace present unsustainable practices with sustainable ones.
  • By following these methods, life would be plentiful for all, without detracting in any way from the potential for ongoing flourishing in the longer term.
  • However, the transition from unsustainable to sustainable practices requires overcoming very significant inertia in existing systems.
  • In some cases, what’s also required is vigorous research and development, to turn ideas for new solutions into practical realities.
  • Unfortunately, in the absence of short-term business cases, this research and development fails to receive the investment it requires.

In each case, the solution also follows the same principles. Society as a whole needs to agree on prioritising research and development of various solutions. Society as a whole needs to agree on penalties and taxes that should be applied to increasingly discourage unsustainable practices. And society as a whole needs to provide a social safety net to assist those people whose livelihoods are adversely impacted by these changes.

Left to its own devices, the free market is unlikely to reach the same conclusions. Instead, because it fails to assign proper values to various externalities, the market will produce harmful results. Accordingly, these are cases when society as a whole needs to constrain and steer the operation of the free market. In other words, democratic politics needs to exert itself.

2nd equal most popular goals

The 2nd equal most popular goal is Goal 7, “There will be no homelessness and no involuntary hunger”, with 74% of responses judging it “Good as it stands”. Disagreeing, 11% of respondents judged it “Too ambitious”. Here’s an excerpt from the proposed strategy to achieve this goal:

The construction industry should be assessed, not just on its profits, but on its provision of affordable, good quality homes.

Consider the techniques used by the company Broad Sustainable Building, when it erected a 57-storey building in Changsha, capital city of Hunan province in China, in just 19 working days. That’s a rate of three storeys per day. Key to that speed was the use of prefabricated units. Other important innovations in construction techniques include 3D printing, robotic construction, inspection by aerial drones, and new materials with unprecedented strength and resilience.

Similar techniques can in principle be used, not just to generate new buildings where none presently exist, but also to refurbish existing buildings – regenerating them from undesirable hangovers from previous eras into highly desirable contemporary accommodation.

With sufficient political desire, these techniques offer the promise that prices for property over the next 16 years might follow the same remarkable downwards trajectory witnessed in many other product areas – such as TVs, LCD screens, personal computers and smartphones, kitchen appliances, home robotics kits, genetic testing services, and many types of clothing…

Finally, a proportion of cases of homelessness arise, not from shortage of available accommodation, but from individuals suffering psychological issues. This element of homelessness will be addressed by the measures reducing mental health problems to less than 1% of the population.

The other 2nd equal most popular goal is Goal 3, “Thanks to improved green energy management, the UK will be carbon-neutral”, also with 74% of responses judging it “Good as it stands”. In this case, most of the dissenting opinions (16%) held that the goal is “Too timid” – namely, that carbon neutrality should be achieved before 2035.

For the record, 4th equal in this ranking, with 68% unconditional positive assessment, were:

  • Goal 6: “World-class education to postgraduate level will be freely available to everyone via online access”
  • Goal 16: “The UK will be part of an organisation that maintains a continuous human presence on Mars”

Least popular goals

At the other end of this particular spectrum, three goals are currently tied for the least popular support in their stated forms: 32%.

This includes Goal 9, “The UK will be part of a global “open borders” community of at least 25% of the earth’s population”. One respondent gave this comment:

Seems absolutely unworkable, would require other countries to have same policy, would have to all be developed countries. Massively problematic and controversial with no link to ideology of transhumanism

And here’s another comment:

No need to work for a living, no homelessness and open borders. What can go wrong?

And yet another:

This can’t happen until wealth/resource distribution is made equitable – otherwise we’d all be crammed in Bladerunner style cities. Not a desirable outcome.

My reply is that the detailed proposal isn’t for unconditional free travel between any two countries, but for a system that includes many checks and balances. As for the relevance to transhumanism, the actual relevance is to the improvement of human flourishing. Freedom of movement opens up many new opportunities. Indeed, migration has been found to have considerable net positive effects on the UK, including productivity, public finances, cultural richness, and individuals’ well-being. Flows of money and ideas in the reverse direction also benefit the original countries of the immigrants.

Another equal bottom goal, by this ranking, is Goal 10, “Voters will no longer routinely assess politicians as self-serving, untrustworthy, or incompetent”. 26% of respondents rated this as “Too ambitious”, and 11% as “Uninteresting”.

My reply in this case is that politicians in at least some other countries have a higher reputation than in the UK. These countries include Denmark (the top of the list), Switzerland, Netherlands, Luxembourg, Norway, Finland, Sweden, and Iceland.

What’s more, a number of practices – combining technological innovation with social innovation – seem capable of increasing the level of trust and respect for politicians:

  • Increased transparency, to avoid any suspicions of hidden motivations or vested interests
  • Automated real-time fact-checking, so that politicians know any distortions of the truth will be quickly pointed out
  • Encouragement of individual politicians with high ethical standards and integrity
  • Enforcement of penalties in cases when politicians knowingly pass on false information
  • Easier mechanisms for the electorate to be able to quickly “recall” a politician when they have lost the trust of voters
  • Improvements in mental health for everyone, including politicians, thereby diminishing tendencies for dysfunctional behaviour
  • Diminished power for political parties to constrain how individual politicians express themselves, allowing more politicians to speak according to their own conscience.

A role can also be explored for regular psychometric assessment of politicians.

The third goal in this grouping of the least popular is Goal 13, “Cryonic suspension will be available to all, on point of death, on the NHS”. 26% of respondents judged this as “Too ambitious”, and 11% as “A bad idea”. One respondent commented “Why not let people die when they are ready?” and another simply wrote “Mad shit”.

It’s true that there currently are many factors that discourage people from signing up for cryonics preservation. These include costs, problems arranging transport of the body overseas to a location where the storage of bodies is legal, the perceived low likelihood of a subsequent successful reanimation, lack of evidence of reanimation of larger biological organs, dislike of appearing to be a “crank”, apprehension over tension from family members (exacerbated if family members expect to inherit funds that are instead allocated to cryopreservation services), occasional mistrust over the motives of the cryonics organisations (which are sometimes alleged – with no good evidence – to be motivated by commercial considerations), and uncertainty over which provider should be preferred.

However, I foresee a big change in the public mindset when there’s a convincing demonstration of successful reanimation of larger biological organisms or organs. What’s more, as in numerous other fields of life, costs will decline and quality will increase as the total number of experiences of a product or service increases. These are known as scale effects.

Goals receiving broad support

Now let’s consider a different ranking, when the votes for “Good as it stands” and “Too timid” are added together. This indicates strong overall support for the idea of the goal, with the proviso that many respondents would prefer a more aggressive timescale.

Actually this doesn’t change the results much. Compared to the goals already covered, there’s only one new entrant in the top 5, namely at position 3, with a combined positive rating of 84%. That’s for Goal 1, “The average healthspan in the UK will be at least 90 years”. 42% rated this “Good as it stands” and another 42% rated it as “Too timid”.

For the record, top equal by this ranking were Goal 3 (74% + 16%) and Goal 4 (84% + 5%).

The only other goal with a “Too timid” rating of greater than 30% was Goal 15, “Fusion will be generating at least 1% of the energy used in the UK” (32%).
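As a side note, the combined rankings above are easy to reproduce: for each goal, sum the percentages for the two response categories of interest, then sort. Here is a minimal Python sketch using only the figures quoted in this post (goals not discussed here are omitted; the goal labels are my own shorthand). Note that because the quoted percentages are rounded, two goals that are tied in the raw counts – such as Goals 3 and 4 – can appear to differ by a point when the rounded figures are summed.

```python
# Combine survey response percentages to rank goals by overall support.
# Percentages are the rounded figures quoted in this post.
ratings = {
    "Goal 1 (healthspan at least 90)": {"good": 42, "too_timid": 42},
    "Goal 3 (carbon-neutral)":         {"good": 74, "too_timid": 16},
    "Goal 4 (zero waste)":             {"good": 84, "too_timid": 5},
}

# Overall support = "Good as it stands" + "Too timid"
combined = {goal: r["good"] + r["too_timid"] for goal, r in ratings.items()}

# Print goals from most to least supported
for goal, score in sorted(combined.items(), key=lambda kv: -kv[1]):
    print(f"{goal}: {score}%")
```

The same approach gives the “generally unpopular” ranking later in the post, simply by summing the “A bad idea” and “Too ambitious” percentages instead.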

The goals most actively disliked

Here’s yet another way of viewing the data: the goals which had the largest number of “A bad idea” responses.

By this measure, the goal most actively disliked (with 21% judging it “A bad idea”) was Goal 11, “Parliament will involve a close partnership with a ‘House of AI’ (or similar) revising chamber”. One respondent commented they were “wary – AI could be Stalinist in all but name in their goal setting and means”.

My reply: To be successful, the envisioned House of AI will need the following support:

  • All algorithms used in these AI systems need to be in the public domain, and to pass ongoing reviews about their transparency and reliability
  • Opaque algorithms, or other algorithms whose mode of operation remains poorly understood, need to be retired, or evolved in ways addressing their shortcomings
  • The House of AI will not be dependent on any systems owned or operated by commercial entities; instead, it will be “AI of the people, by the people, for the people”.

Public funding will likely need to be allocated to develop these systems, rather than waiting for commercial companies to create them.

The second most actively disliked goal was Goal 5, “Automation will remove the need for anyone to earn money by working” (16%). Here are three comments from respondents:

Unlikely to receive support, most people like the idea of work. Plus there’s nothing the party can do to achieve this automation, depends on tech progress. UBI could be good.

What will be the purpose of humans?

It removes the need to work because their needs are being met by…. what? Universal Basic Income? Automation by itself cuts out the need for employers to pay humans to do the work but it doesn’t by itself ensure that people’s need will be met otherwise.

I’ve written on this topic many times in the past – including in Chapter 4, “Work and purpose”, of my previous book, “Transcending Politics” (audio recording available here). There absolutely are political actions which can be taken, to accelerate the appropriate technological innovations, and to defuse the tensions that will arise if the fruits of technological progress end up dramatically increasing the inequality levels in society.

Note, by the way, that this goal does not focus on bringing in a UBI. There’s a lot more to it than that.

Clearly there’s work to be done to improve the communication of the underlying ideas in this case!

Goals that are generally unpopular

For a final way of ranking the data, let’s add together the votes for “A bad idea” and “Too ambitious”. This indicates ideas which are generally unpopular, in their current form of expression.

Top of this ranking, with 42%, is Goal 8, “The crime rate will have been reduced by at least 90%”. Indeed, the 42% all judged this goal as “Too ambitious”. One comment received was

Doesn’t seem within the power of any political party to achieve this, except a surveillance state

Here’s an excerpt of the strategy proposed to address this issue:

The initiatives to improve mental health, to eliminate homelessness, and to remove the need to work to earn an income, should all contribute to reducing the social and psychological pressures that lead to criminal acts.

However, even if only a small proportion of the population remain inclined to criminal acts, the overall crime rate could still remain too high. That’s because small groups of people will be able to take advantage of technology to carry out lots of crime in parallel – via systems such as “ransomware as a service” or “intelligent malware as a service”. The ability of technology to multiply human power means that just a few people with criminal intent could give rise to large amounts of crime.

That raises the priority for software systems to be highly secure and reliable. It also raises the priority of intelligent surveillance of the actions of people who might carry out crimes. This last measure is potentially controversial, since it allows part of the state to monitor citizens in a way that could be considered deeply intrusive. For this reason, access to this surveillance data will need to be restricted to trustworthy parts of the overall public apparatus – similar to the way that doctors are trusted with sensitive medical information. In turn, this highlights the importance of initiatives that increase the trustworthiness of key elements of our national infrastructure.

On a practical basis, initiatives to understand and reduce particular types of crime should be formed, starting with the types of crime (such as violent crime) that have the biggest negative impact on people’s lives.

Second in this ranking of general unpopularity, at 37%, is Goal 13, on cryonics, already mentioned above.

Third, at 32%, is Goal 11, on the House of AI, also already mentioned.

Suggestions for other goals

Respondents offered a range of suggestions for other goals that should be included. Here is a sample, along with brief replies from me:

Economic growth through these goals needs to be quantified somehow.

I’m unconvinced that economic growth needs to be prioritised. Instead, what’s important is agreement on a more appropriate measure to replace the use of GDP. That could be a good goal to consider.

Support anti-ageing research, gene editing research, mind uploading tech, AI alignment research, legalisation of most psychedelics

In general the goals have avoided targeting technology for technology’s sake. Instead, technology is introduced only because it supports the goals of improved overall human flourishing.

I think there should be a much greater focus in our education system on developing critical thinking skills, and a more interdisciplinary approach to subjects should be considered. Regurgitating information is much less important in a technologically advanced society where all information is a few clicks away and our schooling should reflect that.

Agreed: the statement of the education goal should probably be reworded to take these points into account.

A new public transport network; Given advances in technology regarding AI and electrical vehicles, a goal on par with others you’ve listed here would be to develop a transport system to replace cars with a decentralised public transportation network, whereby ownership of cars is replaced with the use of automated vehicles on a per journey basis, thus promoting better use of resources and driving down pollution, alongside hopefully reducing vehicular incidents.

That’s an interesting suggestion. I wonder how others think about it?

Routine near-earth asteroid mining to combat earthside resource depletion.

Asteroid mining is briefly mentioned in Goal 4, on recycling and zero waste.

Overthrow of capitalism and class relations.

Ah, I would prefer to transcend capitalism than to overthrow it. I see two mirror problems in discussing the merits of free markets: pro-market fundamentalism, and anti-market fundamentalism. I say a lot more on that topic in Chapter 9, “Markets and fundamentalism”, of my book “Transcending Politics”.

The right to complete freedom over our own bodies should be recognised in law. We should be free to modify our bodies and minds through e.g. implants, drugs, software, bioware, as long as there is no significant risk of harm to others.

Yes, I see the value of including such a goal. We’ll need work to explore what’s meant by “risk of harm to others”.

UK will be part of the moon-shot Human WBE [whole brain emulation] project after being successful in supporting the previous Mouse WBE moon-shot project.

Yes, that’s an interesting suggestion too. Personally I see the WBE project as being longer-term, but hey, that may change!

Achieving many of the laudable goals rests on reshaping the current system of capitalism, but that itself is not a goal. It should be.

I’m open to suggestions for wording on this, to make it measurable.

Deaths due to RTA [road traffic accidents] cut to near zero

That’s another interesting suggestion. But it may not be on the same level as some of the existing ones. I’m open to feedback here!

Next steps

The Party is very grateful for the general feedback received so far, and looks forward to receiving more!

Discussion can also take place on the Party’s Discourse, https://discourse.transhumanistparty.org.uk/. Anyone is welcome to create an account on that site and become involved in the conversations there.

Some parts of the Discourse are reserved for paid-up members of the Party. It will be these members who take the final decisions as to which goals to prioritise.
