dw2

15 May 2022

Timeline to 2045: questions answered

This is a follow-up to my previous post, containing more of the material that I submitted around five weeks ago to the FLI World Building competition. In this case, the requirement was to answer 13 questions, with answers limited to 250 words in each case.

Q1: AGI has existed for years, but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?

The Global AGI safety project was one of the most momentous and challenging in human history.

The centrepiece of that project was the set of “Singularity Principles” that had first appeared in print in the book Vital Foresight in 2021, and which were developed in additional publications in subsequent years – a set of recommendations with the declared goal of increasing the likelihood that oncoming disruptive technological changes would have outcomes that are profoundly positive for humanity, rather than deeply detrimental. The principles split into four sections:

  1. A focus, in advance, on the goals and outcomes that were being sought from particular technologies
  2. Analysis of the intrinsic characteristics that are desirable in technological solutions
  3. Analysis of methods to ensure that development takes place responsibly
  4. And a meta-analysis – principles about how this overall set of recommendations could itself evolve further over time, and principles for how to increase the likelihood that these recommendations would be applied in practice rather than simply being some kind of wishful thinking.

What drove increasing support for these principles was a growing awareness, shared around the world, of the risks of cataclysmic outcomes that could arise all too easily from increasingly powerful AI, even when everyone involved had good intentions. This shared sense of danger caused even profound ideological enemies to gather together on a regular basis to review joint progress toward fulfilment of the Singularity Principles, as well as to evolve and refine these Principles.

Q2: The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?

One of the key principles programmed into every advanced AI, from the late 2020s onward, was that no AI should seize or manipulate resources owned by any other AI. Instead, AIs should operate only with resources that have been explicitly provided to them. That prevented any hostile takeover of less capable AIs by more powerful competitors. Accordingly, a community of different AIs coexisted, with differing styles and capabilities.

However, in parallel, the various AIs naturally started to interact with each other, offering services to each other in response to expressions of need. The outcome of this interaction was a blurring of the boundaries between different AIs. Thus, by the 2040s, it was no longer meaningful to distinguish between what had originally been separate pieces of software. Instead of referring to “the Alphabet AGI” or “the Tencent AGI”, and so on, people just talked about “the AGI” or even “AGI”.

The resulting AGI was, however, put to different purposes in different parts of the world, depending on the policies pursued by local political leaders.

Q3: How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?

The 2020s were a decade of turbulence, in which a number of arms races proceeded at pace, and when conflict several times came close to spilling over from being latent and implied (“cold”) to being active (“hot”):

  • The great cyber war of 2024 between Iran and Israel
  • Turmoil inside many countries in 2026, associated with the fall from power of the president of Russia
  • Exchanges of small numbers of missiles between North and South Korea in 2027
  • An intense cyber battle in 2028 over the future of an independent Taiwan.

These conflicts resulted in a renewed “never again” global focus to avoid any future recurrences. A new generation of political leaders resolved that, regardless of their many differences, they would put particular kinds of weapons beyond use.

Key to this “never again” commitment was an agreement on “global AI monitoring” – the use of independent narrow AIs to monitor all developments and deployments of potential weapons of mass destruction. That agreement took inspiration from previous international agreements that instituted regular independent monitoring of chemical and biological weapons.

Initial public distrust of the associated global surveillance systems was overcome, in stages, by demonstrations of the inherently trustworthy nature of the software used in these systems – software that adapted various counterintuitive but profound cryptographic ideas from the blockchain discussions of the early and mid-2020s.

Q4: In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that, if at all?

Between 2024 and 2032, the US switched its politics from a troubled bipolar system, with Republicans and Democrats battling each other with intense hostility, into a multi-party system with a dynamic fluidity of new electoral groupings. The winner of the 2032 election was, for the first time since the 1850s, from neither of the formerly dominant parties. What enabled this transition was the adoption, in stages, of ranked choice voting, in which electors rank the candidates in order of preference. This change enabled electors to express interest in new parties without fearing their votes would be “wasted” or would inadvertently help elect particularly detested candidates.
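The elimination-and-transfer mechanics of ranked choice voting can be sketched in a few lines of Python. The ballots and party names below are purely hypothetical illustrations, not drawn from the scenario:

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the winner under instant-runoff (ranked choice) voting.

    Each ballot is a list of candidates in order of preference.
    Each round, the candidate with the fewest first-choice votes is
    eliminated, and those ballots transfer to their next surviving choice.
    """
    ballots = [list(b) for b in ballots]
    while True:
        # Count the top surviving choice on each non-exhausted ballot
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total:               # outright majority: winner found
            return leader
        loser = min(tally, key=tally.get)   # fewest first-choice votes
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical example: "New Party" trails on first choices but wins
# after the eliminated candidate's ballots transfer to it.
ballots = [
    ["New Party", "Democrat", "Republican"],
    ["New Party", "Republican", "Democrat"],
    ["Democrat", "New Party", "Republican"],
    ["Republican", "Democrat", "New Party"],
    ["Republican", "New Party", "Democrat"],
]
```

This illustrates why the fear of a “wasted” vote disappears: a ballot whose first choice is eliminated still counts, at full weight, toward its next preference.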

The EU led the way in adoption of a “house of AI” as a reviewing body for proposed legislation. Legislation proposed by human politicians was examined by AI, resulting in suggested amendments, along with detailed explanations from the AI of reasons for making these changes. The EU left the ultimate decisions – whether or not to accept the suggestions – in the hands of human politicians. Over time, AI judgements were accepted on more and more occasions, but never uncritically.

China remained apprehensive until the mid-2030s about adopting multi-party politics with full tolerance of dissenting opinions. This apprehension was rooted in historic distrust of the apparent anarchy and dysfunction of politicians who needed to win approval of seemingly fickle electors. However, as AI evidently improved the calibre of online public discussion, with its real-time fact-checking, the Chinese system embraced fuller democratic reforms.

Q5: Is the global distribution of wealth (as measured say by national or international Gini coefficients) more, or less, unequal than 2022’s, and by how much? How did it get that way?

The global distribution of wealth became more unequal during the 2020s before becoming less unequal during the 2030s.
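For reference, the Gini coefficient named in the question can be computed from a list of incomes or wealth holdings. This is a minimal sketch using the standard sorted-form identity, not anything specific to the scenario:

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, 1 = maximal inequality.

    Uses the sorted-form identity equivalent to half the mean absolute
    difference between all pairs, divided by the mean.
    """
    xs = sorted(values)
    n = len(xs)
    mean = sum(xs) / n
    # sum of (2i - n - 1) * x_i over the sorted values, i = 1..n
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * n * mean)
```

With equal values the coefficient is 0; concentrating everything in one holder of four pushes it to 0.75, approaching 1 as the population grows.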

Various factors contributed to inequality increasing:

  • “Winner takes all”: Companies offering second-best products were unable to survive in the marketplace. Swift flows of both information and goods meant that all customers knew about better products and could easily purchase them
  • Financial rewards from the successes of companies increasingly flowed to the owners of the capital deployed, rather than to the people supplying skills and services. That’s because more of the skills and services could be supplied by automation, driving down the salaries that could be claimed by people who were offering the same skills and services
  • The factors that made some products better than others increasingly involved technological platforms, such as the latest AI systems, that were owned by a very small number of companies
  • Companies were able to restructure themselves ingeniously in order to take advantage of tax loopholes and special deals offered by countries desperate for at least some tax revenue.

What caused these trends to reverse was, in short, better politics:

  • Smart collaboration between the national governments of the world, avoiding tax loopholes
  • Recognition by greater numbers of voters of the profound merits of redistributing the fruits of the remarkable abundance of NBIC technologies, as the percentage of people in work declined, and as the problems of parts of society being “left behind” were more fully recognised.

Q6: What is a major problem that AI has solved in your world, and how did it do so?

AI made many key contributions toward the solution of climate change:

  • By enabling more realistic and complete models of all aspects of the climate, including potential tipping points ahead of major climate phase transitions
  • By improving the design of alternative energy sources, including ground-based geothermal, high-altitude winds, ocean-based waves, space-based solar, and several different types of nuclear energy
  • Very significantly, by accelerating designs of commercially meaningful nuclear fusion
  • By identifying the types of “negative emissions technologies” that had the potential to scale up quickly in effectiveness
  • By accelerating the adoption of improved “cultivated meat” as sources of food that had many advantages over methods of animal-based agriculture, namely, addressing issues with land use, water use, antibiotics use, and greenhouse gas emissions, and putting an end to the vile practice of the mass slaughter of sentient creatures
  • By assisting the design of new types of cement, glass, plastics, fertilisers, and other materials whose manufacture had previously caused large emissions of greenhouse gases
  • By recommending the sorts of marketing messages that were most effective in changing the minds of previous opponents of effective action.

To be clear, AI did this as part of “NBIC convergence”, in which there were mutual positive feedback loops between progress in each of nanotech, biotech, infotech, and cognotech.

Q7: What is a new social institution that has played an important role in the development of your world?

The G7 group of the democratic countries with the largest economies transitioned in 2023 into the D16, with a sharper commitment than before to championing the core values of democracy: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

The D16 was envisioned from the beginning as a body that would grow in size, becoming a global complement to the United Nations, able to operate in circumstances that would have resulted in a UN veto from countries that paid only lip service to democracy.

One of the first projects of the D16 was to revise the Universal Declaration of Human Rights from the form initially approved by the United Nations General Assembly in 1948, to take account of the opportunities and threats from new technologies, including what are known as “transhuman rights”.

In parallel, another project reached agreement on how to measure an “Index of Human Flourishing” that could replace the economic measure GDP (Gross Domestic Product) as the de facto principal indicator of the wellbeing of societies.

The group formally became the D40 in 2030 and the D90 in 2034. By that time, the D90 was central to agreements to vigorously impose an updated version of the Singularity Principles. Any group anywhere in the world – inside or outside the D90 – that sought to work around these principles was effectively shut down by strict economic sanctions.

Q8: What is a new non-AI technology that has played an important role in the development of your world?

Numerous fields have been transformed by atomically precise manufacturing, involving synthetic nanoscale assembly factories. These had been envisioned in various ways by Richard Feynman in 1959 and Eric Drexler in 1986, but did not become commercially viable until the early 2030s.

It had long been recognised that an “existence proof” for nanotechnology was furnished by the operation of ribosomes inside biological cells, with their systematic assembly of proteins from genetic instructions. However, creation of comparable synthetic systems needed to wait for assistance in both design and initial assembly from increasingly sophisticated AI. (DeepMind’s AlphaFold software had given an early indication of these possibilities back in 2021.) Once the process had started, significant self-improvement loops soon accelerated, with each new generation of nanotechnology assisting in the creation of a subsequent better generation.

The benefits flowed both ways: nanotech precision allowed breakthroughs in the manufacture of new types of computer hardware, including quantum computers; these in turn supported better types of AI algorithms.

Nanotech had a dramatic positive impact on practices in the production of food, accommodation, clothing, and all sorts of consumer goods. Three areas particularly deserve mention:

  • Precise medical interventions, to repair damage to biological systems
  • Systems to repair damage to the environment as a whole, via a mixture of recycling and regeneration, as well as “negative emissions technologies” operating in the atmosphere
  • Clean energy sources operating at ever larger scale, including atomic-powered batteries

Q9: What changes to the way countries govern the development and/or deployment and/or use of emerging technologies (including AI), if any, played an important role in the development of your world?

Effective governance of emerging technologies involved both voluntary cooperation and enforced cooperation.

Voluntary cooperation – a desire to avoid actions that could lead to terrible outcomes – depended in turn on:

  • An awareness of the risk pathways – similar to the way that Carl Sagan and his colleagues vividly brought to the attention of world leaders in the early 1980s the potential global catastrophe of “nuclear winter”
  • An understanding that the restrictions being accepted would not hinder the development of truly beneficial products
  • An appreciation that everyone would be compelled to observe the same restrictions, and couldn’t gain some short-sighted advantage by breaching the rules.

The enforcement elements depended on:

  • An AI-powered “trustable monitoring system” that was able to detect, through pervasive surveillance, any potential violations of the published restrictions
  • Strong international cooperation, by the D40 and others, to isolate and remove resources from any maverick elements, anywhere in the world, that failed to respect these restrictions.

Public acceptance of trustable monitoring accelerated once it was understood that the systems performing the surveillance could, indeed, be trusted; they would not confer any inappropriate advantage on any grouping able to access the data feeds.

The entire system was underpinned by a vibrant programme of research and education (part of a larger educational initiative known as the “Vital Syllabus”) that:

  • Kept updating the “Singularity Principles” system of restrictions and incentives in the light of improved understanding of the risks and solutions
  • Ensured that the importance of these principles was understood both widely and deeply.

Q10: Pick a sector of your choice (education, transport, energy, communication, finance, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed with AI in your world.

For most of human history, religion had played a pivotal role in shaping people’s outlooks and actions. Religion provided narratives about ultimate purposes. It sanctified social structures. It highlighted behaviour said to be exemplary, as demonstrated in the lives of key religious figures. And it deplored other behaviours said to lead to very bad consequences, if not in the present life, then in an assumed afterlife.

Nevertheless, the philosophical justifications for religions had come under increasing challenge in recent times, with the growth of appreciation of a scientific worldview (including evolution by natural selection), the insights from critical analysis of previously venerated scriptures, and a stark awareness of the tensions between different religions in a multi-polar world.

The decline of influence of religion had both good and bad consequences. Greater freedom of thought and action was accompanied by a shrinking of people’s mental horizons. Without the transcendent appeal of a religious worldview, people’s lives often became dominated instead by egotism or consumerism.

The growth of the transhumanist movement in the 2020s provided one counter to these drawbacks. It was not a religion in the strict sense, but its identification of solutions such as “the abolition of aging”, “paradise engineering”, and “technological resurrection” stirred deep inner personal transformations.

These transformations reached a new level thanks to AGI-facilitated encounters with religious founders, inside immersive virtual reality simulations. New hallucinogenic substances provided extra richness to these experiences. The sector formerly known as “religion” therefore experienced an unexpected renewal. Thank AGI!

Q11: What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2022?

In response to the question, “How much longer do you expect to live?”, the usual answer is “at least another hundred years”.

This answer reflects a deep love of life: people are glad to be alive and have huge numbers of quests, passions, projects, and personal voyages that they are enjoying or to which they’re looking forward. The answer also reflects the extraordinary observation that, these days, very few people die. That’s true in all sectors of society, and in all countries of the world. Low-cost high-quality medical treatments are widely available, to reverse diseases that were formerly fatal, and to repair biological damage that had accumulated earlier in people’s lives. People not only live longer but become more youthful.

The core ideas behind these treatments had been clear since the mid-2020s. Biological metabolism generates as a by-product of its normal operation an assortment of damage at the cellular and intercellular levels of the body. Biology also contains mechanisms for the repair of such damage, but over time, these repair mechanisms themselves lose vitality. As a result, people manifest various so-called “hallmarks of aging”. However, various interventions involving biotech and nanotech can revitalise these repair mechanisms. Moreover, other interventions can replace entire biological systems, such as organs, with bio-synthetic alternatives that actually work better than the originals.

Such treatments were feared and even resisted for a while, by activists such as the “naturality advocates”, but the evident improvements these treatments enabled soon won over the doubters.

Q12: In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

In a second country of your choice, which rights are better and which rights are worse respected in your world than in 2022, and why/how?

Regarding the famous phrase, “Everyone has the right to life, liberty and security of person”, all three of these fundamental rights are upheld much more fully, around the world, in 2045 than in 2022:

  • “Life” no longer tends to stop around the age of seventy or eighty; even people aged well over one hundred look forward to continuing to enjoy the right to life
  • “Liberty” involves more choices about lifestyles, personal philosophy, morphological freedom (augmentation and variation of the physical body) and sociological freedom (new structures for families, social groupings, and self-determined nations); importantly, these are not just “choices in theory” but are “choices in practice”, since means are available to support these modifications
  • “Security” involves greater protection from hazards such as extreme weather, pandemics, criminal enterprises, infrastructure hacking, and military attacks.

These improvements in the observation of rights are enabled by technologies of abundance, operated within a much-improved political framework.

Obtaining these benefits involved people agreeing to give up various possible actions that would have led to fewer freedoms and rights overall:

  • “Rights” to pollute the environment or to inflict other negative externalities
  • “Rights” to restrict the education of their girl children
  • “Rights” to experiment with technology without a full safety analysis being concluded.

For a while, some countries like China provided their citizens with only a sham democracy, fearing an irresponsible exercise of that freedom. But by the mid-2030s, that fear had dissipated, and people in all countries gained fuller participatory rights in governance and lifestyle decisions.

Q13: What’s been a notable trend in the way that people are finding fulfilment?

For most of history, right up to the late 2020s, many people viewed themselves through the prism of their occupation or career. “I’m a usability designer”, they might have said. Or “I’m a data scientist” or “I’m a tour guide”, and so on. Their assessment of their own value was closely linked to the financial rewards they obtained from being an employee.

However, as AI became more capable of undertaking all aspects of what had previously been people’s jobs – including portions involving not only diligence and dexterity but also creativity and compassion – there was a significant decline in the proportion of overall human effort invested in employment. By the late 2030s, most people had stopped looking for paid employment, and were content to receive “universal citizens’ dividend” benefits from the operation of sophisticated automated production facilities.

Instead, more and more people found fulfilment by pursuing any of an increasing number of quests and passions. These included both solitary and collaborative explorations in music, art, mathematics, literature, and sport, as well as voyages in parts of the real world and in myriads of fascinating shared online worlds. In all these projects, people found fulfilment, not by performing better than an AI (which would be impossible), but by improving on their own previous achievements, or in friendly competition with acquaintances.

Careful prompting by the AGI helps to maintain people’s interest levels and a sense of ongoing challenge and achievement. AGI has proven to be a wonderful coach.
