
6 April 2025

Choose coordination not chaos

Filed under: Abundance, AGI, chaos, collaboration — David Wood @ 10:46 pm

Note: this document is subject to change as more feedback is received. Check back for updates later. This is version 1.1c.

Preamble

A critically important task in the coming months is to inspire a growing wave of people worldwide to campaign for effective practical coordination of the governance of advanced AI. That’s as an alternative to leaving the development and deployment of advanced AI to follow its present chaotic trajectory.

The messages that need to be conveyed, understood, and acted upon are these:

  • The successful governance of advanced AI will result in profound benefit for everyone, whereas a continuation of the present chaotic state of affairs risks global catastrophe
  • The successful governance of advanced AI isn’t some impossible dream, but lies within humanity’s grasp
  • Nevertheless, real effort, real intelligence, and, yes, real coordination will be needed, so humanity can reach a world of unprecedented abundance, rather than sleepwalk into disaster.

What would have great value is a campaign slogan that conveys the above insights, and which is uplifting, unifying, easily understood, and forward-looking. The right slogan would go viral, and would galvanise people in all walks of life to take positive action.

To that end, that slogan should ideally be

  • Memorable and punchy
  • Emotionally resonant
  • Credibly audacious
  • Universally understandable
  • Open-ended enough to invite participation

To be clear, that slogan should not cause panic or despair, but should put people into a serious frame of mind.

A specific proposal

After several days of online brainstorming, in which numerous ideas were presented and discussed, I now present what I consider the best option so far.

Choose coordination not chaos, so AI brings abundance for all

If there is a need for a three-word version of this, options include

  • Coordination not chaos
  • Choose AI abundance

Of course, what’s needed isn’t just a standalone slogan. Therefore, please find here also a sample illustrative image and, most important, a set of talking points to round out the concept.

About the image

Here’s the commentary by ChatGPT, when asked to suggest an image to illustrate this campaign slogan:

Concept: Two contrasting futures from a single branching path.

Scene:

  • A wide landscape split into two diverging paths or realms — one vibrant and coordinated, the other chaotic and fragmented.
  • In the coordinated half:
    • A harmonious world — sustainable cities, diverse communities collaborating with AI, lush green spaces, clean tech, and open exchanges of knowledge and creativity.
    • Subtle signs of AI woven into infrastructure: responsive lighting, robotic assistants, AI-powered transport.
  • In the chaotic half:
    • A fractured world — disconnected enclaves, pollution, conflict, neglected tech, and isolated individuals overwhelmed by noise or misinformation.
    • AI appears uncontrolled — surveillance drones, malfunctioning robots, or broken screens.

Central focus:

  • A group of people at the fork in the path, pointing and stepping toward the coordinated future, with calm, confident AI assistants guiding the way.

(Aside: although the actual image produced arguably needs more work, the concept described by ChatGPT is good. And it’s reassuring that the slogan, by itself, produced a flow of ideas resonant with the intended effect.)

Talking points

The talking points condensed to a single slide:

And now in more detail:

1. Humanity’s superpower: coordination

Humanity’s most important skill is sometimes said to be our intelligence – our ability to understand, and to make plans in order to achieve specific outcomes.

But another skill that’s at least as important is our ability to coordinate, that is, our ability to:

  • Share insights with each other
  • Operate in teams where people have different skills
  • Avoid needless conflict
  • Make and uphold agreements
  • Accept individual constraints on our action, with the expectation of experiencing greater freedom overall.

Coordination may be informal or formal. It can be backed up by shared narratives and philosophies, by legal systems, by the operation of free markets, by councils of elders, and by specific bodies set up to oversee activities at local, regional, or international levels.

Here are some examples of types of agreements on individual constraints for shared mutual benefit:

  • Speed limits for cars, to reduce the likelihood of dangerous accidents
  • Limits on how much alcohol someone can drink before taking charge of a car
  • Requirements to maintain good hygiene during food preparation
  • Requirements to assess the safety of a new pharmaceutical before deploying it widely
  • Prohibitions against advertising that misleads consumers into buying faulty goods
  • Rules preventing over-fishing, or the overuse of shared “commons” resources
  • Rules of various sports and games – and agreed sanctions on any cheaters
  • Prohibitions against politicians misleading parliaments – and agreed sanctions on any cheaters
  • Prohibitions against the abuse of children
  • Rules governing the conduct of soldiers – which apply even in times of war
  • Restrictions on the disposal of waste
  • Rules governing ownership of dangerous breeds of dog
  • Rules governing the spread of dangerous materials, such as biohazards

Note that coordination is often encouraging rather than restrictive. Encouraging mechanisms include:

  • Prizes and other explicit incentives
  • Implicit rewards for people with good reputation
  • Market success for people with good products and services

The fact that specific coordination rules and frameworks have their critics doesn’t mean that the whole concept of coordination should be rejected. It just means that we need to keep revising our coordination processes. That is, we need to become better at coordinating.

2. Choosing coordination, before chaos ensues

When humanity uncovers new opportunities, it can take some time to understand the implications and to create or update the appropriate coordination rules and frameworks for these opportunities:

  • When settlers on the island of Mauritius discovered the dodo – a large, flightless bird – they failed to put in place measures to prevent that bird becoming extinct only a few decades later
  • When physicists discovered radioactivity, it took some time to establish processes to reduce the likelihood that researchers would develop cancer due to overexposure to dangerous substances
  • Various new weapons (such as chemical gases) were at first widely used in battle zones, before implicit and then explicit agreement was reached not to use such weapons
  • Surreptitious new doping methods used by athletes to gain extra physical advantage result, eventually, in updates to rules on monitoring and testing
  • Tobacco was widely used – and even encouraged, sometimes by medical professionals – before society decided to discourage its use (against the efforts of a formidable industry)
  • Similar measures are now being adopted, arguably too slowly, against highly addictive food products that are thought to cause significant health problems
  • New apps and online services which spread hate speech and other destabilising misinformation surely need some rules and restrictions too, though there is considerable debate over what form of governance is needed.

However, if appropriate coordination is too slow to be established, or is too weak, or exists in words only (without the backup of meaningful action against rules violators), the result can be chaos:

  • Rare animals are hunted to extinction
  • Fishing stocks are depleted to the extent that the livelihood of fishermen is destroyed
  • Economic transactions impose negative externalities on third parties
  • Dangerous materials, such as microplastics, spread widely in the environment
  • No-one is sure what rules apply in sports, and which rules will be enforced
  • Normal judiciary processes are subverted in favour of arbitrary “rule of the in-group”
  • Freedoms previously enjoyed by innovative new start-ups are squelched by the so-called “crony capitalism” of monopolies and cartels linked to the ruling political regime
  • Literal arms races take place, with ever-more formidable weapons being rushed into use
  • Similar races take place to bring new products to market without adequate safety testing

Groups of people who are (temporarily) faring well from the absence of restraints on their actions are likely to oppose rules that would alter their behaviour. That’s the experience of nearly every industry whose products or services were discovered to have dangerous side-effects, but where insiders fought hard to suppress the evidence of these dangers.

Accordingly, coordination does not arise by default. It needs explicit choice, backed up by compelling analysis, community engagement, and strong enforcement.

3. Advanced AI: the promise and the peril

AI could liberate humanity from many of our oldest problems.

Despite huge progress of many kinds over the centuries, humans still often suffer grievously on account of various aspects of our nature, our environment, our social norms, and our prevailing philosophies. Specifically, we are captive to

  • Physical decline and aging
  • Individual and collective mental blindspots and cognitive biases (“stupidity”)
  • Dysfunctional emotions that render us egotistical, depressed, obsessive, and alienated
  • Deep psychosocial tendencies toward divisiveness, xenophobia, deception, and the abuse of power

However, if developed and used wisely, advanced AI can enable rejuvenation and enhancement of our bodies, minds, emotions, social relations, and our links to the environment (including the wider cosmos):

  • AI can accelerate progress with nanotech, biotech, and cognotech
  • In turn, these platform technologies can accelerate progress with abundant low-cost clean energy, nutritious food, healthcare, education, security, creativity, spirituality, and the exploration of marvellous inner and outer worlds

In other words, if developed and used wisely, advanced AI can set humanity free to enjoy a much better quality of life.

However, if developed and used unwisely, advanced AI is likely to cause catastrophe:

  • Via misuse by people who are angry, alienated, or frustrated
  • Via careless use by people who are naive, overconfident, or reckless
  • Via AI operating beyond our understanding and control
  • Via autonomous AI adopting alien modes of rationality and alien codes of ethics

The key difference between these two future scenarios is whether the development and use of AI is wisely steered, or instead follows a default path of deprioritising any concerns about safety:

  • The default path involves AI whose operation is opaque, which behaves deceptively, which lacks moral compass, which can be assigned to all kinds of tasks with destructive side-effects, and which often disregards human intentions
  • Instead, if AI is wisely harnessed, it will deliver value as a tool, but without any intrinsic agency, autonomy, volition, or consciousness
  • Such a tool can have high creativity, but won’t use that creativity for purposes opposed to human wellbeing

To be clear, there is no value in winning a reckless race to be the first to create AI with landmark new features of capability and agency. Such a race is a race to oblivion, also known as a suicide race.

4. The particular hazards of advanced AI

The dangers posed by AI don’t arise from AI in isolation. They involve AI in the hands of fallible, naïve, over-optimistic humans, who are sometimes driven by horrible internal demons. It’s AI summoned and used, not by the better angels of human nature, but by the darker corners of our psychology.

Although we humans are often wonderful, we sometimes do dreadful things to each other – especially when we have become angry, alienated, or frustrated. Add in spiteful ideologies of resentment and hostility, and things can become even uglier.

Placing technology in the hands of people in their worst moments can lead to horrific outcomes. The more powerful the technology, the bigger the potential abomination:

  • The carnage of a frenzied knife attack or a mass shooting (where the technology in question ranges from a deadly sharp knife to an automatic rifle)
  • The chaos when motor vehicles are deliberately propelled at speed into crowds of innocent pedestrians
  • The deaths of everyone on board an airplane, when a depressed pilot ploughs the craft into a mountainside or deep into an ocean, in a final gesture of defiance to what they see as an unfair, uncaring world
  • The destruction of iconic buildings of a perceived “great satan”, when religious fanatics have commandeered jet airliners in service of the mental pathogen that has taken over their minds
  • The assassination of political or dynastic rivals, by the mixing of biochemicals that are individually harmless, but which in combination are frightfully lethal
  • The mass poisoning of commuters in a city subway, when deadly chemicals are released at the command of a cult leader who fancies himself as the rightful emperor of Japan, and who has beguiled clearly intelligent followers to trust his every word.

How does advanced AI change this pattern of unpleasant possibilities? How is AI a significantly greater threat than earlier technologies? In six ways:

  1. As AI-fuelled automation displaces more people from their work (often to their surprise and shock), it predisposes more people to become bitter and resentful
  2. AI is utilised by merchants of the outrage industrial complex, to convince large numbers of people that their personal circumstance is more appalling than they had previously understood, that a contemptible group of people over there are responsible for this dismal turn of events, and that the appropriate response is to utterly defeat those deplorables
  3. Once people are set on a path to obtain revenge, personal recognition, or just plain pandemonium, AIs can make it much easier for them to access and deploy weapons of mass intimidation and mass destruction
  4. Due to the opaque, inscrutable nature of many AI systems, the actual result of an intended outrage may be considerably worse even than what the perpetrator had in mind; this is similar to how malware sometimes causes much more turmoil than the originator of that malware intended
  5. An AI with sufficient commitment to the goals it has been given will use all its intelligence to avoid being switched off or redirected; this multiplies the possibility that an intended local outrage might spiral into an actual global catastrophe
  6. An attack powered by fast-evolving AI can strike unexpectedly at core aspects of the infrastructure of human civilization – our shared biology, our financial systems, our information networks, or our hair-trigger weaponry – exploiting any of the numerous fragilities in these systems.

And it’s not just missteps from angry, alienated, or frustrated people that we have to worry about. We also need to beware of potential cascades of trouble triggered by the careless actions of people who are well-intentioned but naive, over-optimistic, or simply reckless in how they use AI.

The more powerful the AI, the greater the dangers.

Finally, the unpredictable nature of emergent intelligence carries with it another fearsome possibility. Namely, a general intelligence with alien thinking modes far beyond our own understanding might decide to adopt an alien set of ethics, in which the wellbeing of eight billion humans merits only a minuscule consideration.

That’s the argument against simply following a default path of “generate more intelligence, and trust that the outcome is likely to be beneficial for humanity”. It’s an argument that should make everyone pause for thought.

5. A matter of real urgency

How urgent is the task of improving global coordination of the governance of advanced AI?

It is sometimes suggested that progress with advanced AI is slowing down, or is hitting some kind of “wall” or other performance limit. There may be new bottlenecks ahead. Or diseconomies of scale may supersede the phenomenon of economies of scale which has characterised AI research over the last few years.

However, despite these possibilities, the case remains urgent:

  • Even if one approach to improving AI runs out of steam, huge numbers of researchers are experimenting with promising new approaches, including approaches that combine current state-of-the-art methods into new architectures
  • Even if AI stops improving, it is already dangerous enough to risk incidents in which large numbers of people are harmed
  • Even if AI stops improving, clever engineers will find ways to take better advantage of it – thereby further increasing the risks arising, if it is badly configured or manifests unexpected behaviour
  • There is no guarantee that AI will actually stop improving; making that assumption is too much of a risk to take on behalf of the entirety of human civilisation
  • Even if it will take a decade or longer for AI to reach a state in which it poses true risks of global catastrophe, it may also take decades for governance systems to become effective and practical; the lessons from ineffective efforts to prevent runaway climate change are by no means encouraging here
  • Even apart from the task of coordinating matters related to advanced AI, human civilisation faces other deep challenges that also require effective coordination on the global scale – coordination that, as mentioned, is currently failing on numerous grounds.

So, there’s an imperative to “choose coordination not chaos” independent of the question of whether advanced AI will lead to abundance or to a new dark age.

6. A promising start and an unfortunate regression

Humanity actually made a decent start in the direction of coordinating the development of advanced AI, at the Global AI Safety Summits in the UK (November 2023) and South Korea (May 2024).

Alas, the next summit in that series, in Paris (February 2025), was overtaken by political correctness, by administrivia, by virtue signalling, and, most of all, by people with a woefully impoverished understanding of the existential opportunities and risks of advanced AI. Evidently, the task of raising true awareness needs to be powerfully re-energised.

There’s still plenty of apparent global cooperation taking place – lots of discussions and conferences and summits, with people applauding the fine-sounding words in each other’s speeches. “Justice and fairness, yeah yeah yeah!” “Transparency and accountability, yeah yeah yeah!” “Apple pie and blockchain, yeah yeah yeah!” “Intergenerational intersectionality, yeah yeah yeah!”

But the problem is the collapse of effective, practical global cooperation, regarding the hard choices about which aspects of advanced AI should be promoted, and which should be restricted.

Numerous would-be coordination bodies are struggling with the same set of issues:

  • It’s much easier to signal virtue than to genuinely act virtuously.
  • Too many of the bureaucrats who run these bodies are out of their depth when it comes to understanding the existential opportunities and risks of advanced AI.
  • Seeing no prospect of meaningful coordination, many of the big tech companies invited to participate do so in a way that obfuscates the real issues while maintaining their public image as “trying their best to do good”.
  • The process is undermined by people who can be called “reckless accelerationists” – people willing to gamble that the chaotic race to create advanced AI as quickly as possible will somehow result in a safe, beneficial outcome. (Some of these accelerationists would even take a brief perverted pleasure if humanity were rendered extinct by a non-sentient successor AI species.) The accelerationists don’t want the public as a whole to be in any position to block their repugnant game of civilisational Russian roulette.

How to address this dilemma is arguably the question that should transcend all others, regarding the future of humanity.

7. Overcoming the obstacles to effective coordination of the governance of advanced AI

To avoid running aground on the same issues as in the past, it’s important to bear in mind the five main reasons for the failure, so far, of efforts to coordinate the governance of advanced AI. They are:

  • Fear that attempts to control the development of AI will lead to an impoverished future, or a future in which the world is controlled by people from a different nation (e.g. China)
  • Lack of appreciation of the grave perils of the current default chaotic course
  • A worry that any global coordination would lurch toward a global dictatorship, with its own undeniable risks of catastrophe
  • The misapprehension that, without the powers of a global dictatorship, any attempts at global coordination are bound to fail, so they are a waste of time
  • The power that Big Tech possesses, allowing it to ignore half-hearted democratic attempts to steer its activities.

In broad terms, these obstacles can be overcome as follows:

  • Emphasising the positive outcomes, including abundance, freedom, and all-round wellbeing – and avoiding the psychologically destabilising outlook of “AI doomers”
  • Increasing the credibility and relatability of scenarios in which ungoverned advanced AI leads to catastrophe – but also the credibility and relatability of scenarios in which humanity’s chaotic tendencies can be overcome
  • Highlighting previous examples when the governance of breakthrough technology was at least partially successful, rather than developers being able to run amok – examples such as genetic recombination therapies, nuclear proliferation, and alternatives to the chemicals that caused the hole in the ozone layer
  • Demonstrating the key roles that decentralised coordination should play, as a complement to the centralised roles that nation states can play
  • Clarifying how global coordination of advanced AI can start with small agreements and then grow in scale, without individual countries losing sovereignty in any meaningful way.

8. Decentralised reputation management – rewards for good behaviour

What is it that leads individuals to curtail their behaviour, in conformance with a set of standards promoted in support of a collaboration?

In part, it is the threat of sanction or control – whereby an individual might be fined or imprisoned for violating the agreed norms.

But in part, it is because of reputational costs when standards are ignored, side-lined, or cheated. The resulting loss of reputation can result in declining commercial engagement or reduced social involvement. Cheaters and freeloaders risk being excluded from future new opportunities available to other community members.

These reinforcement effects are strongest when the standards received community-wide support while being drafted and adopted – rather than being imposed by what could be seen as outside forces or remote elites.

Some reputation systems operate informally, especially in small or local settings. For activities with a wider involvement, online rating systems can come into their own. For example, consider the reputation systems for reviews of products, in which the reputation of individual reviewers changes the impact of various reviews. There are similarities, as well, to how webpages are ranked when presented in response to search queries: pages which have links from others with high reputation tend in consequence to be placed more prominently in the listing.
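
To make this concrete, here is a minimal sketch of how reputation could propagate through a graph of endorsements, in the spirit of the page-ranking analogy just described. It is an illustration only: the damping factor, iteration count, and example entity names are assumptions introduced here, not part of any proposed standard.

```python
# Minimal sketch of reputation propagation over an endorsement graph,
# echoing the page-ranking analogy in the text. The damping factor,
# iteration count, and example entities are illustrative assumptions.

def reputation_scores(endorsements, damping=0.85, iterations=50):
    """endorsements maps each entity to the list of entities it endorses."""
    entities = set(endorsements)
    for targets in endorsements.values():
        entities.update(targets)
    n = len(entities)
    scores = {e: 1.0 / n for e in entities}
    for _ in range(iterations):
        new_scores = {e: (1.0 - damping) / n for e in entities}
        for source in entities:
            targets = endorsements.get(source, [])
            if targets:
                # An endorsement passes on a share of the endorser's own score,
                # so endorsements from highly-rated entities count for more.
                share = damping * scores[source] / len(targets)
                for target in targets:
                    new_scores[target] += share
            else:
                # Entities that endorse no one spread their weight evenly.
                for e in entities:
                    new_scores[e] += damping * scores[source] / n
        scores = new_scores
    return scores

# Hypothetical example: two entities vouch for each other; a third receives no endorsements.
example = {"LabA": ["RegulatorX"], "RegulatorX": ["LabA"], "LabB": []}
print(reputation_scores(example))
```

The key property, as with webpage ranking, is that an endorsement from a highly-rated entity carries more weight than one from a poorly-rated entity.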

Along these lines, reputational ratings can be assigned to individuals, organisations, corporations, and countries, based on their degree of conformance to agreed principles for trustworthy coordinated AI. Entities with poor AI coordination ratings should be shunned. Other entities that fail to take account of AI coordination ratings when picking suppliers, customers, or partners should in turn be shunned too. Conversely, entities with high ratings should be embraced and celebrated.

An honest, objective assessment of conformance to the above principles should become more significant in determining overall reputation than, for example, wealth, number of online followers, or share price.

Emphatically, the reputation score must be based on actions, not words – on concrete, meaningful steps rather than behind-the-scenes fiddling, and on true virtue rather than virtue-signalling. Accordingly, deep support should be provided for any whistleblowers who observe and report on any cheating or other subterfuge.

In summary, this system involves:

  • Agreement on which types of AI development and deployment to encourage, and which to discourage, or even ban
  • Agreement on how to assign reputational scores, based on conformance to these standards
  • Agreement on what sanctions are appropriate for entities with poor reputations – and, indeed, what special rewards should flow to entities with good reputations.

All three elements of this system need to evolve, not under the dictation of central rulers, but as a result of a grand open conversation, in which ideas rise to the surface because they make good sense, rather than because they are shouted in the loudest voice.

That is, decentralised mechanisms have a vital role to play in encouraging and implementing wise coordination of advanced AI. But centralised mechanisms have a vital role too, as discussed next.

9. Starting small and then growing in scale

If someone continues to ignore social pressures, and behaves irresponsibly, how can the rest of society constrain them? Ultimately, force needs to be applied. A car driver who recklessly breaks speed limits will be tracked down, asked to stop, and if need be, will be forced off the road. A vendor who recklessly sells food prepared in unhygienic conditions will be fined, forbidden to set up new businesses, and if need be, will be imprisoned. Scientists who experiment with highly infectious biomaterials in unsafe ways will lose their licence and, if need be, their laboratories will be carefully closed down.

That is, society is willing to grant special powers of enforcement to some agents acting on behalf of the entire community.

However, these special powers carry their own risks. They can be abused, in order to support incumbent political leaders against alternative ideas or opposition figures.

The broader picture is this: societies can fail in two ways – too little centralised power, and too much centralised power.

  • In the former case, societies can end up ripped apart by warring tribes, powerful crime families, raiding gangs from neighbouring territories, corporations that act with impunity, and religious ideologues who stamp their contentious visions of “the pure and holy” on unwilling believers and unbelievers alike
  • In the latter case, a state with unchecked power diminishes the rights of citizens, dispenses with the fair rule of law, imprisons potential political opponents, and subverts economic flows for the enrichment of the leadership cadre.

The healthiest societies, therefore, possess both a strong state and a strong civil society. That’s one meaning of the celebrated principle of the separation of powers. The state is empowered to act, decisively if needed, against any individual cancers that would threaten the health of the community. But the state is informed and constrained by an independent, well-organised judiciary, by the media and academia, by credible opposition parties, and by other institutions of civil society.

It should be the same with the governance of potential rogue or naïve AI developers around the world. Via processes of decentralised deliberations, taking account of input from numerous disciplines, agreement should be reached on which limits are vital to be observed.

Inevitably, different participants in the process will have different priorities for what the agreements should contain. In some cases, the limits imposed might vary between jurisdictions, within customisation frameworks agreed globally. But there should be clear acceptance that some ways of developing or deploying advanced AIs need to be absolutely prevented. To prevent the agreements from unravelling at the earliest bumps in the road, it will be important that they are reached unanimously among the representatives of the jurisdictions where the most powerful collections of AI developers are located.

The process to reach agreement can be likened to the deliberations of a jury in a court case. In most cases, jury members with initially divergent opinions eventually converge on a conclusion. In cases when the process becomes deadlocked, it can be restarted with new representative participants. With the help of expert facilitators – themselves supported by excellent narrow AI tools – creative new solutions can be introduced for consideration, making an ultimate agreement more likely.

To start with, these agreements might be relatively small in scope, such as “don’t place the launch of nuclear weapons under AI control”. Over time, as confidence builds, the agreements will surely grow. That’s because of the shared recognition that so much is at stake.

Of course, for such agreements to be meaningful, there needs to be a reliable enforcement mechanism. That’s where the state needs to act – with the support and approval of civil society.

Within entire countries that sign up to this AI coordination framework, enforcement is relatively straightforward. The same mechanisms that enforce other laws can be brought to bear against any rogue or naïve AI developers.

The challenging part is when countries fail to sign up to this framework, or do so deceitfully, that is, with no intention of keeping their promises. In such a case, it will fall to other countries to ensure conformance, via, in the first place, measures of economic sanction.

To make this work, all that’s necessary is that a sufficient number of powerful countries sign up to this agreement. For example, if the G7 do so, plus China and India, along with countries that are “bubbling under” G7 admission (like Australia, South Korea, and Brazil), that should be sufficient. Happily, there are many AI experts in all these countries who have broad sympathies to the kinds of principles spelt out in this document.

As for potential maverick nations such as Russia and North Korea, they will have to weigh up the arguments. They should understand – like all other countries – that respecting such agreements is in their own self-interest. To help them reach such an understanding, appropriate pressure from China, the USA, and the rest of the world should make a decisive difference.

This won’t be easy. At this pivotal point of history, humanity is being challenged to use our greatest strength in a more profound way than ever before – namely, our ability to collaborate despite numerous differences. On reflection, it shouldn’t be a surprise that the unprecedented challenges of advanced AI technology will require an unprecedented calibre of human collaboration.

If we fail to bring together our best talents in a positive collaboration, we will, sadly, fulfil the pessimistic forecast of the eighteenth-century Anglo-Irish statesman Edmund Burke, paraphrased as follows: “The only thing necessary for the triumph of evil is that good men fail to associate, and do nothing”. (The original quote is this: “No man … can flatter himself that his single, unsupported, desultory, unsystematic endeavours are of power to defeat the subtle designs and united cabals of ambitious citizens. When bad men combine, the good must associate; else they will fall, one by one, an unpitied sacrifice in a contemptible struggle.”) Or, updating the wording slightly, “The only thing necessary for chaos to prevail is that good men fail to coordinate wisely”.

A remark from the other side of the Atlantic from roughly the same time, attributed to Benjamin Franklin, conveys the same thought in different language: “We must… all hang together, or assuredly we shall all hang separately”.

10. Summary: The nucleus of a wider agreement, and call to action

Enthusiasm for agreements to collaborate on the governance of advanced AIs will grow as a set of insights becomes more widely and more deeply understood. These insights can be stated as follows:

  1. It’s in the mutual self-interest of every country to constrain the development and deployment of what could become catastrophically dangerous AI; that is, there’s no point in winning what could be a reckless suicide race to create powerful new types of AI before anyone else
  2. The major economic and humanitarian benefits that people hope will be delivered by the hasty development of advanced AI (benefits including all-round abundance, as well as solutions to various existential risks), can in fact be delivered much more reliably by AI systems that are constrained, and by development systems that are coordinated rather than chaotic
  3. A number of attractive ideas already exist regarding potential policy measures (regulations and incentives) which can be adopted, around the world, to prevent the development and deployment of what could become catastrophic AI – for example, measures to control the spread and use of vast computing resources, or to disallow AIs that use deception to advance their goals
  4. A number of good ideas also exist and are ready to be adopted around the world, regarding options for monitoring and auditing, to ensure the strict application of the agreed policy measures – and to prevent malign action by groups or individuals that have, so far, failed to sign up to these policies, or who wish to cheat them
  5. All of the above can be achieved without any detrimental loss of individual sovereignty: the leaders of countries can remain masters within their own realms, as they desire, provided that the above basic AI coordination framework is adopted and maintained
  6. All of the above can be achieved in a way that supports evolutionary changes in the AI coordination framework as more insight is obtained; in other words, this system can (and must) be agile rather than static
  7. Even though this coordination framework is yet to be fully agreed, there are plenty of ideas for how it can be rapidly developed, so long as that project is given sufficient resources, and the best brains from multiple disciplines are encouraged to give it their full attention
  8. Ring-fencing sufficient resources to further develop this AI coordination framework, and associated reputational ratings systems, should be a central part of every budget
  9. Reputational ratings that can be assigned, based on the above principles, will play a major role in altering behaviours of the many entities involved in the development and deployment of advanced AI.

Or, to summarise this summary: Choose coordination not chaos, so AI brings abundance for all.

Now is the time to develop these ideas further (by all means experiment with ways to simplify their expression), to find ways to spread them more effectively, and to be alert for newer, better insights that arise from the resulting open global conversation.

Other ideas considered

The ideas presented above deserve attention, regardless of which campaign slogans are adopted.

For comparison, here is a list of other possible campaign slogans, along with reservations that have been raised about each of them:

  • “Pause AI” (too negative)
  • “Control AI” (too negative)
  • “Keep the Future Human” (insufficiently aspirational)
  • “Take Back Control from Big Tech” (doesn’t characterise the problem accurately enough)
  • “Safe AI for sustainable superabundance” (overly complex concepts)
  • “Choose tool AI instead of AGI” (lacks a “why”)
  • “Kind AI for a kinder world” (perhaps too vague)
  • “Narrow AI to broaden humanity’s potential” (probably too subtle)
  • “Harness AI to liberate humanity” (terminology overly scholarly or conceptual).

Also for comparison, consider the following set of slogans from other fields:

  • “Yes we can” (Barack Obama, 2008)
  • “Make America great again” (Donald Trump, 2016)
  • “Take back control” (UK Brexit slogan)
  • “Think different” (Apple)
  • “Because you’re worth it” (L’Oréal)
  • “Black lives matter”
  • “Make love, not war”
  • “For the Many, Not the Few” (Jeremy Corbyn, 2017)
  • “Get Brexit done” (Boris Johnson, 2019)
  • “Not Me. Us” (Bernie Sanders, 2020)
  • “We shall fight them on the beaches” (Winston Churchill, 1940)
  • “It’s Morning Again in America” (Ronald Reagan, 1984)
  • “Stay Home. Save Lives” (Covid-19 messaging)
  • “Clunk click every trip” (encouraging the use of seat belts in cars)
  • “We go to the moon, not because it is easy, but because it is hard” (JFK, 1962)
  • “A microcomputer on every desk and in every home running Microsoft software” (Bill Gates, 1975)
  • “To organise the world’s information and make it universally accessible and useful” (Google, 1998)
  • “Accelerating the world’s transition to sustainable energy” (Tesla, 2016)
  • “Workers of the world, unite – you have nothing to lose but your chains” (Karl Marx, 1848)
  • “From each according to his ability, to each according to his needs” (Karl Marx, 1875)

Comments are welcome on any ideas in this article. Later revisions of this article may incorporate improvements arising from these comments.

Postscript

New suggestions under consideration, following the initial publication of this article:

  • “Harness AI now” (Robert Whitfield)

7 June 2019

Feedback on what goals the UK should have in mind for 2035

Filed under: Abundance, BHAG, politics, TPUK, vision — David Wood @ 1:56 pm

Some political parties are preoccupied with short-term matters.

It’s true that many short-term matters demand attention. But we need to take the time to consider, as well, some important longer-term risks and issues.

If we give these longer-term matters too little attention, we may wake up one morning and bitterly regret our previous state of distraction. By then, we may have missed the chance to avoid an enormous setback. It could also be too late to take advantage of what previously was a very positive opportunity.

For these reasons, the Transhumanist Party UK seeks to focus attention on a number of transformations that could take place in the UK, between now and 2035.

Rather than having a manifesto for the next, say, five years, the Party is developing a vision for the year 2035 – a vision of much greater human flourishing.

It’s a vision in which there will be enough for everyone to have an excellent quality of life. No one should lack access to healthcare, shelter, nourishment, information, education, material goods, social engagement, free expression, or artistic endeavour.

The vision also includes a set of strategies by which the current situation (2019) could be transformed, step by step, into the desired future state (2035).

Key to these strategies is for society to take wise advantage of the remarkable capabilities of twenty-first century science and technology: robotics, biotech, neurotech, greentech, collabtech, artificial intelligence, and much more. These technologies can provide all of us with the means to live better than well – to be healthier and fitter than ever before; nourished emotionally and spiritually as well as physically; and living at peace with ourselves, the environment, and our neighbours both near and far.

Alongside science and technology, there’s a vital role that politics needs to play:

  • Action to encourage the kind of positive collaboration which might otherwise be undermined by free-riders
  • Action to adjust the set of subsidies, incentives, constraints, and legal frameworks under which we all operate
  • Action to protect the citizenry as a whole from the abuse of power by any groups with monopoly or near-monopoly status
  • Action to ensure that the full set of “externalities” (both beneficial and detrimental) of market transactions are properly considered, in a timely manner.

To make this vision more concrete, the Party wishes to identify a set of specific goals for the UK for the year 2035. At present, there are 16 goals under consideration. These goals are briefly introduced in a video:

As you can see, the video invites viewers to give their feedback, by means of an online survey. The survey collects opinions about the various goals: are they good as they stand? Too timid? Too ambitious? A bad idea? Uninteresting? Or something else?

The survey also invites ideas about other goals that should perhaps be added into the mix.

Since the survey has been launched, feedback has been accumulating. I’d like to share some of that feedback now, along with some of my own personal responses.

The most unconditionally popular goal so far

Of the 16 goals proposed, the one with the highest number of “Good as it stands” responses is Goal 4, “Thanks to innovations in recycling, manufacturing, and waste management, the UK will be zero waste, and will have no adverse impact on the environment.”

(To see the rationale for each goal, along with ideas on measurement, the current baseline, and the strategy to achieve the goal, see the document on the Party website.)

That goal has, so far, been evaluated as “Good as it stands” by 84% of respondents.

One respondent gave this comment:

Legislation and Transparency are equally as important here, to gain the public’s trust that there is actual quantified benefits from this, or rather to de-abstractify recycling and make it more tangible and not just ‘another bin’

My response: succeeding with this goal will involve more than the actions of individuals putting materials into different recycling bins.

Research from the Stockholm Resilience Centre has identified nine “planetary boundaries” where human activity is at risk of pushing the environment into potentially very dangerous states of affairs.

For each of these planetary boundaries, the same themes emerge:

  • Methods are known that would replace present unsustainable practices with sustainable ones.
  • By following these methods, life would be plentiful for all, without detracting in any way from the potential for ongoing flourishing in the longer term.
  • However, the transition from unsustainable to sustainable practices requires overcoming very significant inertia in existing systems.
  • In some cases, what’s also required is vigorous research and development, to turn ideas for new solutions into practical realities.
  • Unfortunately, in the absence of short-term business cases, this research and development fails to receive the investment it requires.

In each case, the solution also follows the same principles. Society as a whole needs to agree on prioritising research and development of various solutions. Society as a whole needs to agree on penalties and taxes that should be applied to increasingly discourage unsustainable practices. And society as a whole needs to provide a social safety net to assist those peoples whose livelihoods are adversely impacted by these changes.

Left to its own devices, the free market is unlikely to reach the same conclusions. Instead, because it fails to assign proper values to various externalities, the market will produce harmful results. Accordingly, these are cases when society as a whole needs to constrain and steer the operation of the free market. In other words, democratic politics needs to exert itself.

2nd equal most popular goals

The 2nd equal most popular goal is Goal 7, “There will be no homelessness and no involuntary hunger”, with 74% of respondents judging it “Good as it stands”. Disagreeing, 11% of respondents judged it “Too ambitious”. Here’s an excerpt from the proposed strategy to achieve this goal:

The construction industry should be assessed, not just on its profits, but on its provision of affordable, good quality homes.

Consider the techniques used by the company Broad Sustainable Building, when it erected a 57-storey building in Changsha, capital city of Hunan province in China, in just 19 working days. That’s a rate of three storeys per day. Key to that speed was the use of prefabricated units. Other important innovations in construction techniques include 3D printing, robotic construction, inspection by aerial drones, and new materials with unprecedented strength and resilience.

Similar techniques can in principle be used, not just to generate new buildings where none presently exist, but also to refurbish existing buildings – regenerating them from undesirable hangovers from previous eras into highly desirable contemporary accommodation.

With sufficient political desire, these techniques offer the promise that prices for property over the next 16 years might follow the same remarkable downwards trajectory witnessed in many other product areas – such as TVs, LCD screens, personal computers and smartphones, kitchen appliances, home robotics kits, genetic testing services, and many types of clothing…

Finally, a proportion of cases of homelessness arise, not from shortage of available accommodation, but from individuals suffering psychological issues. This element of homelessness will be addressed by the measures reducing mental health problems to less than 1% of the population.

The other 2nd equal most popular goal is Goal 3, “Thanks to improved green energy management, the UK will be carbon-neutral”, also with 74% of respondents judging it “Good as it stands”. In this case, most of the dissenting opinions (16%) held that the goal is “Too timid” – namely, that carbon neutrality should be achieved before 2035.

For the record, 4th equal in this ranking, with 68% unconditional positive assessment, were:

  • Goal 6: “World-class education to postgraduate level will be freely available to everyone via online access”
  • Goal 16: “The UK will be part of an organisation that maintains a continuous human presence on Mars”

Least popular goals

At the other end of this particular spectrum, three goals are currently tied for the least support in the form stated: 32%.

This includes Goal 9, “The UK will be part of a global “open borders” community of at least 25% of the earth’s population”. One respondent gave this comment:

Seems absolutely unworkable, would require other countries to have same policy, would have to all be developed countries. Massively problematic and controversial with no link to ideology of transhumanism

And here’s another comment:

No need to work for a living, no homelessness and open borders. What can go wrong?

And yet another:

This can’t happen until wealth/resource distribution is made equitable – otherwise we’d all be crammed in Bladerunner style cities. Not a desirable outcome.

My reply is that the detailed proposal isn’t for unconditional free travel between any two countries, but for a system that includes many checks and balances. As for the relevance to transhumanism, the actual relevance is to the improvement of human flourishing. Freedom of movement opens up many new opportunities. Indeed, migration has been found to have considerable net positive effects on the UK, including productivity, public finances, cultural richness, and individuals’ well-being. Flows of money and ideas in the reverse direction also benefit the original countries of the immigrants.

Another equal bottom goal, by this ranking, is Goal 10, “Voters will no longer routinely assess politicians as self-serving, untrustworthy, or incompetent”. 26% of respondents rated this as “Too ambitious”, and 11% as “Uninteresting”.

My reply in this case is that politicians in at least some other countries have a higher reputation than in the UK. These countries include Denmark (the top of the list), Switzerland, Netherlands, Luxembourg, Norway, Finland, Sweden, and Iceland.

What’s more, a number of practices – combining technological innovation with social innovation – seem capable of increasing the level of trust and respect for politicians:

  • Increased transparency, to avoid any suspicions of hidden motivations or vested interests
  • Automated real-time fact-checking, so that politicians know any distortions of the truth will be quickly pointed out
  • Encouragement of individual politicians with high ethical standards and integrity
  • Enforcement of penalties in cases when politicians knowingly pass on false information
  • Easier mechanisms for the electorate to be able to quickly “recall” a politician when they have lost the trust of voters
  • Improvements in mental health for everyone, including politicians, thereby diminishing tendencies for dysfunctional behaviour
  • Diminished power for political parties to constrain how individual politicians express themselves, allowing more politicians to speak according to their own conscience.

A role can also be explored for regular psychometric assessment of politicians.

The third goal in this grouping of the least popular is Goal 13, “Cryonic suspension will be available to all, on point of death, on the NHS”. 26% of respondents judged this as “Too ambitious”, and 11% as “A bad idea”. One respondent commented “Why not let people die when they are ready?” and another simply wrote “Mad shit”.

It’s true that there are currently many factors that discourage people from signing up for cryonics preservation. These include:

  • Costs
  • Problems arranging transport of the body overseas to a location where the storage of bodies is legal
  • The perceived low likelihood of a subsequent successful reanimation
  • Lack of evidence of reanimation of larger biological organs
  • Dislike of appearing to be a “crank”
  • Apprehension over tension from family members (exacerbated if family members expect to inherit funds that are instead allocated to cryopreservation services)
  • Occasional mistrust over the motives of the cryonics organisations (which are sometimes alleged – with no good evidence – to be motivated by commercial considerations)
  • Uncertainty over which provider should be preferred.

However, I foresee a big change in the public mindset when there’s a convincing demonstration of successful reanimation of larger biological organisms or organs. What’s more, as in numerous other fields of life, costs will decline and quality will increase as the total number of experiences of a product or service increases. These are known as scale effects.
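
As a purely illustrative sketch of such scale effects, here is one common way of modelling them – Wright’s law, under which unit cost falls by a fixed percentage with each doubling of cumulative volume. The starting cost and the 20% learning rate below are hypothetical numbers, not data from this article.

```python
import math

# Illustrative sketch of scale effects via Wright's law: each doubling of
# cumulative volume cuts unit cost by a fixed learning rate. The starting
# cost and the 20% learning rate are hypothetical, not data from the article.

def unit_cost(cumulative_units, first_unit_cost=100_000.0, learning_rate=0.20):
    exponent = math.log2(1.0 - learning_rate)  # about -0.32 for a 20% rate
    return first_unit_cost * cumulative_units ** exponent

for n in (1, 10, 100, 1_000):
    print(f"after {n:>5} procedures: cost is about {unit_cost(n):,.0f} each")
```

On these (hypothetical) numbers, the thousandth procedure would cost roughly a tenth of the first – the kind of decline the paragraph above anticipates.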

Goals receiving broad support

Now let’s consider a different ranking, when the votes for “Good as it stands” and “Too timid” are added together. This indicates strong overall support for the idea of the goal, with the proviso that many respondents would prefer a more aggressive timescale.

Actually this doesn’t change the results much. Compared to the goals already covered, there’s only one new entrant in the top 5, namely at position 3, with a combined positive rating of 84%. That’s for Goal 1, “The average healthspan in the UK will be at least 90 years”. 42% rated this “Good as it stands” and another 42% rated it as “Too timid”.

For the record, the top two by this ranking were Goal 3 (74% + 16%) and Goal 4 (84% + 5%).

The only other goal with a “Too timid” rating of greater than 30% was Goal 15, “Fusion will be generating at least 1% of the energy used in the UK” (32%).

The goals most actively disliked

Here’s yet another way of viewing the data: the goals which had the largest number of “A bad idea” responses.

By this measure, the goal most actively disliked (with 21% judging it “A bad idea”) was Goal 11, “Parliament will involve a close partnership with a ‘House of AI’ (or similar) revising chamber”. One respondent commented they were “wary – AI could be Stalinist in all but name in their goal setting and means”.

My reply: To be successful, the envisioned House of AI will need the following support:

  • All algorithms used in these AI systems need to be in the public domain, and to pass ongoing reviews about their transparency and reliability
  • Opaque algorithms, or other algorithms whose model of operation remains poorly understood, need to be retired, or evolved in ways addressing their shortcomings
  • The House of AI will not be dependent on any systems owned or operated by commercial entities; instead, it will be “AI of the people, by the people, for the people”.

Public funding will likely need to be allocated to develop these systems, rather than waiting for commercial companies to create them.

The second most actively disliked goal was Goal 5, “Automation will remove the need for anyone to earn money by working” (16%). Here are three comments from respondents:

Unlikely to receive support, most people like the idea of work. Plus there’s nothing the party can do to achieve this automation, depends on tech progress. UBI could be good.

What will be the purpose of humans?

It removes the need to work because their needs are being met by…. what? Universal Basic Income? Automation by itself cuts out the need for employers to pay humans to do the work but it doesn’t by itself ensure that people’s need will be met otherwise.

I’ve written on this topic many times in the past – including in Chapter 4, “Work and purpose”, of my previous book, “Transcending Politics” (audio recording available here). There absolutely are political actions which can be taken, to accelerate the appropriate technological innovations, and to defuse the tensions that will arise if the fruits of technological progress end up dramatically increasing the inequality levels in society.

Note, by the way, that this goal does not focus on bringing in a UBI. There’s a lot more to it than that.

Clearly there’s work to be done to improve the communication of the underlying ideas in this case!

Goals that are generally unpopular

For a final way of ranking the data, let’s add together the votes for “A bad idea” and “Too ambitious”. This indicates ideas which are generally unpopular, in their current form of expression.

Top of this ranking, with 42%, is Goal 8, “The crime rate will have been reduced by at least 90%”. Indeed, the 42% all judged this goal as “Too ambitious”. One comment received was

Doesn’t seem within the power of any political party to achieve this, except a surveillance state

Here’s an excerpt of the strategy proposed to address this issue:

The initiatives to improve mental health, to eliminate homelessness, and to remove the need to work to earn an income, should all contribute to reducing the social and psychological pressures that lead to criminal acts.

However, even if only a small proportion of the population remain inclined to criminal acts, the overall crime rate could still remain too high. That’s because small groups of people will be able to take advantage of technology to carry out lots of crime in parallel – via systems such as “ransomware as a service” or “intelligent malware as a service”. The ability of technology to multiply human power means that just a few people with criminal intent could give rise to large amounts of crime.

That raises the priority for software systems to be highly secure and reliable. It also raises the priority of intelligent surveillance of the actions of people who might carry out crimes. This last measure is potentially controversial, since it allows part of the state to monitor citizens in a way that could be considered deeply intrusive. For this reason, access to this surveillance data will need to be restricted to trustworthy parts of the overall public apparatus – similar to the way that doctors are trusted with sensitive medical information. In turn, this highlights the importance of initiatives that increase the trustworthiness of key elements of our national infrastructure.

On a practical basis, initiatives to understand and reduce particular types of crime should be formed, starting with the types of crime (such as violent crime) that have the biggest negative impact on people’s lives.

Second in this ranking of general unpopularity, at 37%, is Goal 13, on cryonics, already mentioned above.

Third, at 32%, is Goal 11, on the House of AI, also already mentioned.

Suggestions for other goals

Respondents offered a range of suggestions for other goals that should be included. Here is a sample, along with brief replies from me:

Economic growth through these goals needs to be quantified somehow.

I’m unconvinced that economic growth needs to be prioritised. Instead, what’s important is agreement on a more appropriate measure to replace the use of GDP. That could be a good goal to consider.

Support anti-ageing research, gene editing research, mind uploading tech, AI alignment research, legalisation of most psychedelics

In general the goals have avoided targeting technology for technology’s sake. Instead, technology is introduced only because it supports the goals of improved overall human flourishing.

I think there should be a much greater focus in our education system on developing critical thinking skills, and a more interdisciplinary approach to subjects should be considered. Regurgitating information is much less important in a technologically advanced society where all information is a few clicks away and our schooling should reflect that.

Agreed: the statement of the education goal should probably be reworded to take these points into account.

A new public transport network: given advances in technology regarding AI and electric vehicles, a goal on par with others you’ve listed here would be to develop a transport system to replace cars with a decentralised public transportation network, whereby ownership of cars is replaced with the use of automated vehicles on a per-journey basis, thus promoting better use of resources and driving down pollution, alongside hopefully reducing vehicular incidents.

That’s an interesting suggestion. I wonder what others think about it?

Routine near-earth asteroid mining to combat earthside resource depletion.

Asteroid mining is briefly mentioned in Goal 4, on recycling and zero waste.

Overthrow of capitalism and class relations.

Ah, I would prefer to transcend capitalism rather than to overthrow it. I see two mirror-image problems in discussing the merits of free markets: pro-market fundamentalism, and anti-market fundamentalism. I say a lot more on that topic in Chapter 9, “Markets and fundamentalism”, of my book “Transcending Politics”.

The right to complete freedom over our own bodies should be recognised in law. We should be free to modify our bodies and minds through e.g. implants, drugs, software, bioware, as long as there is no significant risk of harm to others.

Yes, I see the value of including such a goal. More work will be needed to explore what’s meant by “risk of harm to others”.

UK will be part of the moon-shot Human WBE [whole brain emulation] project after being successful in supporting the previous Mouse WBE moon-shot project.

Yes, that’s an interesting suggestion too. Personally I see the WBE project as being longer-term, but hey, that may change!

Achieving many of the laudable goals rests on reshaping the current system of capitalism, but that itself is not a goal. It should be.

I’m open to suggestions for wording on this, to make it measurable.

Deaths due to RTA [road traffic accidents] cut to near zero

That’s another interesting suggestion. But it may not be on the same level as some of the existing ones. I’m open to feedback here!

Next steps

The Party is very grateful for the general feedback received so far, and looks forward to receiving more!

Discussion can also take place on the Party’s Discourse, https://discourse.transhumanistparty.org.uk/. Anyone is welcome to create an account on that site and become involved in the conversations there.

Some parts of the Discourse are reserved for paid-up members of the Party. It will be these members who take the final decisions as to which goals to prioritise.

21 May 2015

Anticipating 2040: The triple A, triple h+ vision

Abundance – Access – Action

The following vision arises from discussions with colleagues in the Transhumanist Party.

Abundance

Abundance – sustainable abundance – is just around the corner – provided we humans collectively get our act together.

We have within our grasp a sustainable abundance of renewable energy, material goods, health, longevity, intelligence, creativity, freedom, and positive experience.

This can be attained within one human generation, by wisely accelerating the green technology revolution – including stem cell therapies, 3D printing, prosthetics, robotics, nanotechnology, genetic engineering, synthetic biology, neuro-enhancement, artificial intelligence, and supercomputing.

Access

The rich fruits of technology – abundance – can and should be provided for all, not just for those who manage to rise to the top of the present-day social struggle.

A bold reorganisation of society can and should take place in parallel with the green technology revolution – so that everyone can freely access the education, healthcare, and everything else needed to flourish as a full member of society.

Action

To channel the energies of industry, business, finance, universities, and the media, for a richly positive outcome within the next generation, swift action is needed:

  • Widespread education on the opportunities – and risks – of new technology
  • Regulations and checks to counter short-termist action by incumbent vested interests
  • The celebration and enablement of proactive innovation for the common good
  • The promotion of scientific, rational, evidence-based methods for taking decisions, rather than ideologies
  • Transformation of our democracy so that governance benefits from the wisdom of all of society, and serves the genuine needs of everyone, rather than perpetuating the existing establishment.

Transhumanism 2040

Within one generation – 25 years, that is, by 2040 – human society can and should be radically transformed.

This next step of conscious evolution is called transhumanism. Transhumanists see, and welcome, the opportunity to intelligently redesign humanity, drawing wisely on the best resources of existing humanity.

The Transhumanist Party is the party of abundance, access, and action. It is the party with a programme to transcend (overcome) our ingrained human limitations – limitations of animal biology, primate psychology, antiquated philosophy, and 20th century social structures.

Transhumanism 2020

As education spreads about the potential for a transhumanist future of abundance, access, and action – and as tangible transhumanist projects are seen to be having an increasingly positive political impact – more and more people will start to identify themselves as transhumanists.

This growing movement will have consequences around the world. For example, in the general election in 2020 in the UK, there may well be, in every constituency, either a candidate from the Transhumanist Party, or a candidate from one of the other parties who openly and proudly identifies as a transhumanist.

The political landscape will never be the same again.

Call to action

To offer support to the Transhumanist Party in the UK (regardless of where you are based in the world), you can join the party by clicking the following PayPal button:

Join now

Membership costs £25 per annum. Members will be invited to participate in internal party discussions of our roadmap.

For information about the Transhumanist Party in other parts of the world, see http://transhumanistpartyglobal.org/.

For a worldwide transhumanist network without an overt political angle, consider joining Humanity+.

To discuss the politics of the future, without any exclusive link to the Transhumanist Party, consider participating in one of the Transpolitica projects – for example, the project to publish the book “Politics 2.0”.

Anticipating the Transhumanist Party roadmap to 2040

Footnote: Look out for more news of a conference to be held in London during Autumn (*), entitled “Anticipating 2040: The Transhumanist Party roadmap”, featuring speakers, debates, open plenaries, and closed party sessions.

If anyone would like to speak at this event, please get in touch.

(*) Possible date is 3-4 October 2015, though planning is presently at a preliminary stage.

 
