dw2

15 May 2022

Timeline to 2045: questions answered

This is a follow-up to my previous post, containing more of the material that I submitted around five weeks ago to the FLI World Building competition. In this case, the requirement was to answer 13 questions, with answers limited to 250 words in each case.

Q1: AGI has existed for years, but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?

The Global AGI safety project was one of the most momentous and challenging in human history.

The centrepiece of that project was the set of “Singularity Principles” that had first appeared in print in the book Vital Foresight in 2021, and which were developed further in subsequent publications – a set of recommendations with the declared goal of increasing the likelihood that oncoming disruptive technological changes would prove profoundly positive for humanity, rather than deeply detrimental. The principles split into four sections:

  1. A focus, in advance, on the goals and outcomes that were being sought from particular technologies
  2. Analysis of the intrinsic characteristics that are desirable in technological solutions
  3. Analysis of methods to ensure that development takes place responsibly
  4. And a meta-analysis – principles about how this overall set of recommendations could itself evolve further over time, and principles for how to increase the likelihood that these recommendations would be applied in practice rather than simply being some kind of wishful thinking.

What drove increasing support for these principles was a growing awareness, shared around the world, of the risks of cataclysmic outcomes that could arise all too easily from increasingly powerful AI, even when everyone involved had good intentions. This shared sense of danger caused even profound ideological enemies to gather together on a regular basis to review joint progress toward fulfilment of the Singularity Principles, as well as to evolve and refine these Principles.

Q2: The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?

One of the key principles programmed into every advanced AI, from the late 2020s onward, was that no AI should seize or manipulate resources owned by any other AI. Instead, AIs should operate only with resources that have been explicitly provided to them. That prevented any hostile takeover of less capable AIs by more powerful competitors. Accordingly, a community of different AIs coexisted, with differing styles and capabilities.
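To make the resource principle concrete, here is a minimal sketch of how a “use only what you have been explicitly granted” rule might be enforced in software. This is purely illustrative – the scenario doesn’t specify any implementation, and all the names below are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Resource:
    name: str
    owner: str  # the AI (or institution) that owns this resource

@dataclass
class AIAgent:
    name: str
    granted: set = field(default_factory=set)  # explicitly provided resources only

    def grant(self, resource: Resource):
        """An owner explicitly provides a resource to this agent."""
        self.granted.add(resource.name)

    def use(self, resource: Resource):
        """Refuse any resource that was never explicitly provided."""
        if resource.name not in self.granted:
            raise PermissionError(
                f"{self.name} may not use {resource.name} "
                f"(owned by {resource.owner}, never granted)")
        return f"{self.name} is using {resource.name}"

# Hypothetical example: an AI may use compute it was given,
# but cannot seize a rival's database
compute = Resource("compute-cluster-1", owner="Alphabet AGI")
database = Resource("user-database", owner="Tencent AGI")

agent = AIAgent("Alphabet AGI")
agent.grant(compute)
print(agent.use(compute))   # allowed
# agent.use(database)       # would raise PermissionError
```

This is essentially a capability-based access model: rather than checking a global list of forbidden actions, each agent holds only the capabilities it has been handed, and everything else is denied by default.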

However, in parallel, the various AIs naturally started to interact with each other, offering services to each other in response to expressions of need. The outcome of this interaction was a blurring of the boundaries between different AIs. Thus, by the 2040s, it was no longer meaningful to distinguish between what had originally been separate pieces of software. Instead of referring to “the Alphabet AGI” or “the Tencent AGI”, and so on, people just talked about “the AGI” or even “AGI”.

The resulting AGI was, however, put to different purposes in different parts of the world, dependent on the policies pursued by the local political leaders.

Q3: How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?

The 2020s were a decade of turbulence, in which a number of arms races proceeded at pace, and when conflict several times came close to spilling over from being latent and implied (“cold”) to being active (“hot”):

  • The great cyber war of 2024 between Iran and Israel
  • Turmoil inside many countries in 2026, associated with the fall from power of the president of Russia
  • Exchanges of small numbers of missiles between North and South Korea in 2027
  • An intense cyber battle in 2028 over the future of an independent Taiwan.

These conflicts resulted in a renewed “never again” global focus to avoid any future recurrences. A new generation of political leaders resolved that, regardless of their many differences, they would put particular kinds of weapons beyond use.

Key to this “never again” commitment was an agreement on “global AI monitoring” – the use of independent narrow AIs to monitor all developments and deployments of potential weapons of mass destruction. That agreement took inspiration from previous international agreements that instituted regular independent monitoring of chemical and biological weapons.

Initial public distrust of the associated global surveillance systems was overcome, in stages, by demonstrations of the inherently trustworthy nature of the software used in these systems – software that adapted various counterintuitive but profound cryptographic ideas from the blockchain discussions of the early and mid-2020s.
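The scenario leaves those cryptographic ideas unspecified. One real, blockchain-adjacent technique that makes monitoring software demonstrably tamper-evident is a hash-chained audit log, in which every recorded event commits to the hash of the one before it. A toy sketch (illustrative only; the event strings are invented):

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; any alteration, insertion, or deletion breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if entry["prev"] != prev_hash or \
           entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "scan: site A, no WMD signature found")
append_entry(log, "scan: site B, anomaly flagged for review")
assert verify(log)
log[0]["event"] = "scan: site A, nothing to report"  # tamper with the record...
assert not verify(log)                               # ...and the tampering is detected
```

Anyone holding the final hash can check that the published record of what the surveillance system did has not been quietly rewritten – a property that helps earn trust without requiring trust in the operators.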

Q4: In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that, if at all?

Between 2024 and 2032, the US switched its politics from a troubled bipolar system, in which Republicans and Democrats battled each other with intense hostility, into a multi-party system with a dynamic fluidity of new electoral groupings. The winner of the 2032 election was, for the first time since the 1850s, from neither of the formerly dominant parties. What enabled this transition was the adoption, in stages, of ranked choice voting, in which electors ranked candidates in order of preference. This change enabled electors to express interest in new parties without fearing that their votes would be “wasted” or would inadvertently allow the election of particularly detested candidates.
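Ranked choice voting of the kind described here is typically counted by instant runoff. A minimal sketch of that counting rule (illustrative; the ballots and party names are invented, and real statutes add tie-breaking details omitted here):

```python
from collections import Counter

def instant_runoff(ballots):
    """Count ranked-choice ballots by instant runoff.

    Each ballot lists candidates in order of preference. Repeatedly
    eliminate the candidate with the fewest first-choice votes,
    transferring those ballots to their next surviving choice,
    until some candidate holds a majority.
    """
    candidates = {c for ballot in ballots for c in ballot}
    while True:
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in candidates:       # highest surviving preference
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(candidates) == 1:
            return leader
        candidates.discard(min(tally, key=tally.get))  # eliminate weakest

ballots = [["New Party", "Dem"], ["New Party", "Rep"],
           ["Dem", "New Party"], ["Rep", "Dem"], ["Rep", "New Party"]]
print(instant_runoff(ballots))  # "New Party" wins after "Dem" is eliminated
```

The key property is visible in the example: a vote for a small party is never wasted, because it transfers to the voter’s next choice if the small party is eliminated.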

The EU led the way in adoption of a “house of AI” as a reviewing body for proposed legislation. Legislation proposed by human politicians was examined by AI, resulting in suggested amendments, along with detailed explanations from the AI of reasons for making these changes. The EU left the ultimate decisions – whether or not to accept the suggestions – in the hands of human politicians. Over time, AI judgements were accepted on more and more occasions, but never uncritically.

China remained apprehensive until the mid-2030s about adopting multi-party politics with full tolerance of dissenting opinions. This apprehension was rooted in historic distrust of the apparent anarchy and dysfunction of politicians who needed to win approval of seemingly fickle electors. However, as AI evidently improved the calibre of online public discussion, with its real-time fact-checking, the Chinese system embraced fuller democratic reforms.

Q5: Is the global distribution of wealth (as measured say by national or international Gini coefficients) more, or less, unequal than 2022’s, and by how much? How did it get that way?

The global distribution of wealth became more unequal during the 2020s before becoming less unequal during the 2030s.

Various factors contributed to inequality increasing:

  • “Winner takes all”: Companies offering second-best products were unable to survive in the marketplace. Swift flows of both information and goods meant that all customers knew about better products and could easily purchase them
  • Financial rewards from the successes of companies increasingly flowed to the owners of the capital deployed, rather than to the people supplying skills and services. That’s because more of the skills and services could be supplied by automation, driving down the salaries that could be claimed by people who were offering the same skills and services
  • The factors that made some products better than others increasingly involved technological platforms, such as the latest AI systems, that were owned by a very small number of companies
  • Companies were able to restructure themselves ingeniously in order to take advantage of tax loopholes and special deals offered by countries desperate for at least some tax revenue.

What caused these trends to reverse was, in short, better politics:

  • Smart collaboration between the national governments of the world, avoiding tax loopholes
  • Recognition by growing numbers of voters of the profound merits of redistributing the fruits of the remarkable abundance of NBIC technologies, as the percentage of people in work declined, and as the problems of parts of society being “left behind” became more fully recognised.
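Since the question is posed in terms of Gini coefficients, here, for reference, is how that measure is computed – a standard formula, with invented numbers:

```python
def gini(wealth):
    """Gini coefficient: 0 = perfect equality, 1 = one person owns everything.

    Uses the mean-absolute-difference formulation:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))
    """
    n = len(wealth)
    mean = sum(wealth) / n
    diff_sum = sum(abs(xi - xj) for xi in wealth for xj in wealth)
    return diff_sum / (2 * n * n * mean)

equal  = [25, 25, 25, 25]
skewed = [1, 2, 3, 94]       # "winner takes all"
print(gini(equal))   # 0.0
print(gini(skewed))  # 0.70 - far more unequal
```

A rise and subsequent fall in such a coefficient is what the narrative above describes: the “winner takes all” dynamics of the 2020s push it up, and the redistribution politics of the 2030s bring it back down.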

Q6: What is a major problem that AI has solved in your world, and how did it do so?

AI made many key contributions toward the solution of climate change:

  • By enabling more realistic and complete models of all aspects of the climate, including potential tipping points ahead of major climate phase transitions
  • By improving the design of alternative energy sources, including ground-based geothermal, high-altitude winds, ocean-based waves, space-based solar, and several different types of nuclear energy
  • Very significantly, by accelerating designs of commercially meaningful nuclear fusion
  • By identifying the types of “negative emissions technologies” that had the potential to scale up quickly in effectiveness
  • By accelerating the adoption of improved “cultivated meat” as a source of food with many advantages over animal-based agriculture – addressing issues with land use, water use, antibiotics use, and greenhouse gas emissions, and putting an end to the vile practice of the mass slaughter of sentient creatures
  • By assisting the design of new types of cement, glass, plastics, fertilisers, and other materials whose manufacture had previously caused large emissions of greenhouse gases
  • By recommending the sorts of marketing messages that were most effective in changing the minds of previous opponents of effective action.

To be clear, AI did this as part of “NBIC convergence”, in which mutual positive feedback loops connected progress in each of nanotech, biotech, infotech, and cognotech.

Q7: What is a new social institution that has played an important role in the development of your world?

The G7 group of the democratic countries with the largest economies transitioned in 2023 into the D16, with a sharper commitment than before to championing the core values of democracy: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

The D16 was envisioned from the beginning as a group that would grow in size, becoming a global complement to the functioning of the United Nations, able to operate in circumstances that would have resulted in a veto at the UN from countries that paid only lip service to democracy.

One of the first projects of the D16 was to revise the Universal Declaration of Human Rights from the form initially approved by the United Nations General Assembly in 1948, to take account of the opportunities and threats from new technologies, including what are known as “transhuman rights”.

In parallel, another project reached agreement on how to measure an “Index of Human Flourishing” that could replace the economic measure GDP (Gross Domestic Product) as the de facto principal indicator of the wellbeing of societies.
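The scenario doesn’t say how the Index of Human Flourishing was defined. Composite indices of this kind (the UN’s Human Development Index is the familiar real-world example) are typically built by normalising several indicators onto a common scale and taking a weighted combination. A toy sketch, with entirely invented indicators and weights:

```python
def normalise(value, worst, best):
    """Map an indicator onto a 0-1 scale given its plausible range."""
    return (value - worst) / (best - worst)

# Invented indicators for one country (NOT the actual index definition)
indicators = {
    # name:                    (value, worst, best, weight)
    "healthy_lifespan":        (82,    40,    120,  0.4),
    "self_reported_wellbeing": (7.1,   0,     10,   0.3),
    "civic_participation":     (0.55,  0,     1,    0.3),
}

index = sum(normalise(value, worst, best) * weight
            for value, worst, best, weight in indicators.values())
print(f"Index of Human Flourishing (toy): {index:.2f}")  # ~0.59
```

The hard part, of course, is not the arithmetic but the political agreement on which indicators to include and how to weight them – which is why reaching that agreement is presented as a project in its own right.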

The group formally became the D40 in 2030 and the D90 in 2034. By that time, the D90 was central to agreements to vigorously impose an updated version of the Singularity Principles. Any group anywhere in the world – inside or outside the D90 – that sought to work around these principles was effectively shut down by strict economic sanctions.

Q8: What is a new non-AI technology that has played an important role in the development of your world?

Numerous fields have been transformed by atomically precise manufacturing, involving synthetic nanoscale assembly factories. These had been envisioned in various ways by Richard Feynman in 1959 and Eric Drexler in 1986, but did not become commercially viable until the early 2030s.

It had long been recognised that an “existence proof” for nanotechnology was furnished by the operation of ribosomes inside biological cells, with their systematic assembly of proteins from genetic instructions. However, creation of comparable synthetic systems needed to wait for assistance in both design and initial assembly from increasingly sophisticated AI. (DeepMind’s AlphaFold software had given an early indication of these possibilities back in 2021.) Once the process had started, significant self-improvement loops soon accelerated, with each new generation of nanotechnology assisting in the creation of a subsequent better generation.

The benefits flowed both ways: nanotech precision allowed breakthroughs in the manufacture of new types of computer hardware, including quantum computers; these in turn supported better types of AI algorithms.

Nanotech had a dramatic positive impact on practices in the production of food, accommodation, clothing, and all sorts of consumer goods. Three areas particularly deserve mention:

  • Precise medical interventions, to repair damage to biological systems
  • Systems to repair damage to the environment as a whole, via a mixture of recycling and regeneration, as well as “negative emissions technologies” operating in the atmosphere
  • Clean energy sources operating at ever larger scale, including atomic-powered batteries

Q9: What changes to the way countries govern the development and/or deployment and/or use of emerging technologies (including AI), if any, played an important role in the development of your world?

Effective governance of emerging technologies involved both voluntary cooperation and enforced cooperation.

Voluntary cooperation – a desire to avoid actions that could lead to terrible outcomes – depended in turn on:

  • An awareness of the risk pathways – similar to the way that Carl Sagan and his colleagues vividly brought to the attention of world leaders in the early 1980s the potential global catastrophe of “nuclear winter”
  • An understanding that the restrictions being accepted would not hinder the development of truly beneficial products
  • An appreciation that everyone would be compelled to observe the same restrictions, and that no one could gain a short-sighted advantage by breaching the rules.

The enforcement elements depended on:

  • An AI-powered “trustable monitoring system” that was able to detect, through pervasive surveillance, any potential violations of the published restrictions
  • Strong international cooperation, by the D40 and others, to isolate and remove resources from any maverick elements, anywhere in the world, that failed to respect these restrictions.

Public acceptance of trustable monitoring accelerated once it was understood that the systems performing the surveillance could, indeed, be trusted; they would not confer any inappropriate advantage on any grouping able to access the data feeds.

The entire system was underpinned by a vibrant programme of research and education (part of a larger educational initiative known as the “Vital Syllabus”), that:

  • Kept updating the “Singularity Principles” system of restrictions and incentives in the light of improved understanding of the risks and solutions
  • Ensured that the importance of these principles was understood both widely and deeply.

Q10: Pick a sector of your choice (education, transport, energy, communication, finance, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed with AI in your world.

For most of human history, religion had played a pivotal role in shaping people’s outlooks and actions. Religion provided narratives about ultimate purposes. It sanctified social structures. It highlighted behaviour said to be exemplary, as demonstrated in the lives of key religious figures. And it deplored other behaviours said to lead to very bad consequences, if not in the present life, then in an assumed afterlife.

Nevertheless, the philosophical justifications for religions had come under increasing challenge in recent times, with the growth of appreciation of a scientific worldview (including evolution by natural selection), the insights from critical analysis of previously venerated scriptures, and a stark awareness of the tensions between different religions in a multi-polar world.

The decline of influence of religion had both good and bad consequences. Greater freedom of thought and action was accompanied by a shrinking of people’s mental horizons. Without the transcendent appeal of a religious worldview, people’s lives often became dominated instead by egotism or consumerism.

The growth of the transhumanist movement in the 2020s provided one counter to these drawbacks. It was not a religion in the strict sense, but its identification of solutions such as “the abolition of aging”, “paradise engineering”, and “technological resurrection” stirred deep inner personal transformations.

These transformations reached a new level thanks to AGI-facilitated encounters with religious founders, inside immersive virtual reality simulations. New hallucinogenic substances provided extra richness to these experiences. The sector formerly known as “religion” therefore experienced an unexpected renewal. Thank AGI!

Q11: What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2022?

In response to the question, “How much longer do you expect to live?”, the usual answer is “at least another hundred years”.

This answer reflects a deep love of life: people are glad to be alive and have huge numbers of quests, passions, projects, and personal voyages that they are enjoying or to which they’re looking forward. The answer also reflects the extraordinary observation that, these days, very few people die. That’s true in all sectors of society, and in all countries of the world. Low-cost high-quality medical treatments are widely available, to reverse diseases that were formerly fatal, and to repair biological damage that had accumulated earlier in people’s lives. People not only live longer but become more youthful.

The core ideas behind these treatments had been clear since the mid-2020s. Biological metabolism generates as a by-product of its normal operation an assortment of damage at the cellular and intercellular levels of the body. Biology also contains mechanisms for the repair of such damage, but over time, these repair mechanisms themselves lose vitality. As a result, people manifest various so-called “hallmarks of aging”. However, various interventions involving biotech and nanotech can revitalise these repair mechanisms. Moreover, other interventions can replace entire biological systems, such as organs, with bio-synthetic alternatives that actually work better than the originals.
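The damage-and-repair account in this paragraph can be captured in a toy dynamical model: damage accumulates at a steady rate, is cleared in proportion to the body’s repair capacity, and that repair capacity itself declines with age – until a therapy restores it. All parameters below are made up; this is an illustration of the logic, not a biological model:

```python
def simulate(years, therapy_at=None):
    """Toy model of net biological damage under a decaying repair capacity."""
    damage, repair = 0.0, 1.0
    for year in range(years):
        if year == therapy_at:
            repair = 1.0                    # rejuvenation therapy resets repair
        damage += 0.5 - repair * damage     # accumulation minus clearance
        damage = max(damage, 0.0)
        repair *= 0.97                      # the repair machinery ages too
    return damage

print(f"damage at 80, untreated:      {simulate(80):.2f}")
print(f"damage at 80, treated at 50:  {simulate(80, therapy_at=50):.2f}")
```

In this toy, damage settles near 0.5/repair, so it creeps upward as the repair capacity decays – and resetting that capacity in mid-life pulls the damage level back down, which is the core claim of the “revitalise the repair mechanisms” paradigm described above.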

Such treatments were feared and even resisted for a while, by activists such as the “naturality advocates”, but the evident improvements these treatments enabled soon won over the doubters.

Q12: In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

In a second country of your choice, which rights are better and which rights are worse respected in your world than in 2022, and why/how?

Regarding the famous phrase, “Everyone has the right to life, liberty and security of person”, all three of these fundamental rights are upheld much more fully, around the world, in 2045 than in 2022:

  • “Life” no longer tends to stop around the age of seventy or eighty; even people aged well over one hundred look forward to continuing to enjoy the right to life
  • “Liberty” involves more choices about lifestyles, personal philosophy, morphological freedom (augmentation and variation of the physical body) and sociological freedom (new structures for families, social groupings, and self-determined nations); importantly, these are not just “choices in theory” but are “choices in practice”, since means are available to support these modifications
  • “Security” involves greater protection from hazards such as extreme weather, pandemics, criminal enterprises, infrastructure hacking, and military attacks.

These improvements in the observation of rights are enabled by technologies of abundance, operated within a much-improved political framework.

Obtaining these benefits involved people agreeing to give up various possible actions that would have led to fewer freedoms and rights overall:

  • “Rights” to pollute the environment or to inflict other negative externalities
  • “Rights” to restrict the education of their girl children
  • “Rights” to experiment with technology without a full safety analysis being concluded.

For a while, some countries like China provided their citizens with only a sham democracy, fearing an irresponsible exercise of that freedom. But by the mid-2030s, that fear had dissipated, and people in all countries gained fuller participatory rights in governance and lifestyle decisions.

Q13: What’s been a notable trend in the way that people are finding fulfilment?

For most of history, right up to the late 2020s, many people viewed themselves through the prism of their occupation or career. “I’m a usability designer”, they might have said. Or “I’m a data scientist” or “I’m a tour guide”, and so on. Their assessment of their own value was closely linked to the financial rewards they obtained from being an employee.

However, as AI became more capable of undertaking all aspects of what had previously been people’s jobs – including portions involving not only diligence and dexterity but also creativity and compassion – there was a significant decline in the proportion of overall human effort invested in employment. By the late 2030s, most people had stopped looking for paid employment, and were content to receive “universal citizens’ dividend” benefits from the operation of sophisticated automated production facilities.

Instead, more and more people found fulfilment by pursuing any of an increasing number of quests and passions. These included both solitary and collaborative explorations in music, art, mathematics, literature, and sport, as well as voyages in parts of the real world and in myriads of fascinating shared online worlds. In all these projects, people found fulfilment, not by performing better than an AI (which would be impossible), but by improving on their own previous achievements, or in friendly competition with acquaintances.

Careful prompting by the AGI helps to maintain people’s interest levels and a sense of ongoing challenge and achievement. AGI has proven to be a wonderful coach.

A year-by-year timeline to 2045

The ground rules for the worldbuilding competition were attractive:

  • The year is 2045.
  • AGI has existed for at least 5 years.
  • Technology is advancing rapidly and AI is transforming the world sector by sector.
  • The US, EU and China have managed a steady, if uneasy, power equilibrium.
  • India, Africa and South America are quickly on the rise as major players.
  • Despite ongoing challenges, there have been no major wars or other global catastrophes.
  • The world is not dystopian and the future is looking bright.

Entrants were asked to submit four pieces of work. One was a new media piece, for which I submitted a video.

Another required piece was:

A timeline with entries for each year between 2022 and 2045 giving at least two events (e.g. “X invented”) and one data point (e.g. “GDP rises by 25%”) for each year.

The timeline I created dovetailed with the framework from the above video. Since I enjoyed creating it, I’m sharing my submission here, in the hope that it may inspire readers.

(Note: the content was submitted on 11th April 2022.)

2022

US mid-term elections result in log-jammed US governance, widespread frustration, and a groundswell desire for more constructive approaches to politics.

The collapse of a major crypto “stablecoin” results in much wider adverse repercussions than was generally expected, and a new social appreciation of the dangers of flawed financial systems.

Data point: Number of people killed in violent incidents (including homicides and armed conflicts) around the world: 590,000

2023

Fake news spread by social media, driven by a new variant of AI, provokes riots in which more than 10,000 people die, leading to much greater interest in a set of “Singularity Principles” that had previously been proposed to steer the development of potentially world-transforming technologies.

G7 transforms into the D16, consisting of the world’s 16 leading democracies, proclaiming a profound shared commitment to champion norms of: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 6.4%

2024

South Korea starts a trial of a nationwide UBI scheme, in the first of what will become in later years a long line of increasingly robust “universal citizens’ dividends” schemes around the world.

A previously unknown offshoot of ISIS releases a bioengineered virus. Fortunately, vaccines are quickly developed and deployed against it. In parallel, a bitter cyber war takes place between Iran and Israel. These incidents lead to international commitments to prevent future recurrences.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 38%

2025

Extreme weather – floods and storms – kills tens of thousands in both North America and Europe. A major trial of geo-engineering is rushed through, with reflection of solar radiation in the stratosphere – causing global political disagreement and then a renewed determination for tangible shared action on climate change.

The US President appoints a Secretary for the Future as a top-level cabinet position. More US states adopt ranked choice voting, allowing third parties to grow in prominence.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 38%

2026

A song created entirely by an AI tops the hit parade, and initiates a radical new musical genre.

Groundswell opposition to autocratic rule in Russia leads to the fall from power of the president and a new dedication to democracy throughout countries formerly perceived as being within Russia’s sphere of direct influence.

Data point: Net greenhouse gas emissions (including those from land-use changes): 59 billion tons of CO2 equivalent – an unwelcome record.

2027

Metformin approved for use as an anti-aging medicine in a D16 country. Another D16 country recommends nationwide regular usage of a new nootropic drug.

Exchanges of small numbers of missiles between North and South Korea lead to regime change inside North Korea and a rapprochement between the long-bitter enemies.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 9.2%

2028

An innovative nuclear fusion system, with its design assisted by AI, runs for more than one hour and generates significantly more energy than was put into it.

As a result of disagreements about the future of an independent Taiwan, an intense destructive cyber battle takes place. At the end, the nations of the world commit more seriously than before to avoiding any future cyber battles.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 41%

2029

A trial of an anti-aging intervention in middle-aged dogs is confirmed to have increased remaining life expectancy by 25% without causing any adverse side effects. Public interest in similar interventions in humans skyrockets.

The UK rejoins a reconfigured EU, as an indication of support for sovereignty that is pooled rather than narrow.

Data point: Proportion of world population with formal cryonics arrangements: 1 in 100,000

2030

Russia is admitted into the D40 – a newly expanded version of the D16. The D40 officially adopts the “Index of Human Flourishing” as a more important metric than GDP, and agrees a revised version of the Universal Declaration of Human Rights, brought up to date with transhuman issues.

First permanent implant in a human of an artificial heart with a new design that draws all required power from the biology of the body rather than any attached battery, and whose pace of operation is under the control of the brain.

Data point: Net greenhouse gas emissions (including those from land-use changes): 47 billion tons of CO2 equivalent – a significant improvement

2031

An AI discovers and explains a profound new way of looking at mathematics, DeepMath, leading in turn to dramatically successful new theories of fundamental physics.

Widespread use of dynamically re-programmed nanobots to treat medical conditions that would previously have been fatal.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 23%

2032

First person reaches the age of 125. Her birthday celebrations are briefly disrupted by a small group of self-described “naturality advocates” who chant “120 is enough for anyone”, but that group has little public support.

D40 countries put in place a widespread “trustable monitoring system” to cut down on existential risks (such as spread of WMDs) whilst maintaining citizens’ trust.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 35.7% 

2033

For the first time since the 1850s, the US President comes from a party other than Republican and Democratic.

An AI system is able to convincingly pass the Turing test, impressing even the previous staunchest critics with its apparent grasp of general knowledge and common sense. The answers it gives to questions of moral dilemmas also impress previous sceptics.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 58%

2034

The D90 (expanded from the D40) agrees to vigorously impose Singularity Principles rules to avoid inadvertent creation of dangerous AGI.

Atomically precise synthetic nanoscale assembly factories have come of age, in line with the decades-old vision of nanotechnology visionary Eric Drexler, and are proving to have just as consequential an impact on human society as AI.

Data point: Net greenhouse gas *removals*: 10 billion tons of CO2 equivalent – a dramatic improvement

2035

A novel written entirely by an AI reaches the top of the New York Times bestseller list, and is widely celebrated as being the finest piece of literature ever produced.

Successful measures to remove greenhouse gases from the atmosphere, coupled with wide deployment of clean energy sources, lead to a declaration of “victory over runaway climate change”.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 4%

2036

A film created entirely by an AI, without any real human actors, wins Oscar awards.

The last major sceptical holdout, a philosophy professor from an Ivy League university, accepts that AGI now exists. The pope gives his blessing too.

Data point: Proportion of world population with cryonics arrangements: 24%

2037

The last instances of the industrial-scale slaughter of animals for human consumption take place, on account of the worldwide adoption of cultivated (lab-grown) meat.

AGI convincingly explains that it is not sentient, and that it has a very different fundamental structure from that of biological consciousness.

Data point: Proportion of world population who are literate: 99.3%

2038

Rejuvenation therapies are in wide use around the world. “Eighty is the new fifty”. First person reaches the age of 130.

Improvements made by AGI upon itself effectively raise its IQ a hundredfold, taking it far beyond the comprehension of human observers. However, the AGI provides explanatory educational material that allows people to understand vast new sets of ideas.

Data point: Proportion of world population who consider themselves opposed to AGI: 0.1%

2039

An extensive set of “vital training” sessions has been established by the AGI, with all citizens over the age of ten participating for a minimum of seven hours per day on 72 days each year, to ensure that humans develop and maintain key survival skills.

Menopause reversal is commonplace. Women who had long ago given up any ideas of bearing another child happily embrace motherhood again.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 99.2%

2040

The use of “mind phones” is widespread: new brain-computer interfaces that allow communication between people by mental thought alone.

People regularly opt to have several of their original biological organs replaced by synthetic alternatives that are more efficient, more durable, and more reliable.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 96%

2041

Shared immersive virtual reality experiences include hyper-realistic simulations of long-dead individuals – including musicians, politicians, royalty, saints, and founders of religions.

The number of miles of journey undertaken by small “flying cars” exceeds that of ground-based powered transport.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 100.0%

2042

First successful revival of mammal from cryopreservation.

AGI presents a proof of the possibility of time travel, but safe transit of humans through time would require resources equivalent to building a Dyson sphere around the sun.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 0.4%

2043

First person reaches the age of 135, and declares herself to be healthier than at any time in the preceding four decades.

As a result of virtual reality encounters of avatars of founders of religion, a number of new systems of philosophical and mystical thinking grow in popularity.

Data point: Proportion of world’s energy provided by earth-based nuclear fusion: 75%

2044

First human baby born from an ectogenetic pregnancy.

Family holidays on the Moon are an increasingly common occurrence.

Data point: Average amount of their waking time that people spend in a metaverse: 38%

2045

First revival of human from cryopreservation – someone who had been cryopreserved ten years previously.

Subtle messages decoded by AGI from far distant stars in the galaxy confirm that other intelligent civilisations exist, and are on their way to reveal themselves to humanity.

Data point: Number of people killed in violent incidents around the world: 59

Postscript

My thanks go to the competition organisers, the Future of Life Institute, for providing the inspiration for the creation of the above timeline.

Readers are likely to have questions in their minds as they browse the timeline above. More details of the reasoning behind the scenarios involved are contained in three follow-up posts.

1 October 2019

“Lifespan” – a book to accelerate the emerging paradigm change in healthcare

Harvard Medical School professor David Sinclair has written a remarkable book that will do for an emerging new paradigm in healthcare what a similarly remarkable book by Oxford University professor Nick Bostrom has been doing for an emerging new paradigm in artificial intelligence.

In both cases, the books act to significantly increase the tempo of the adoption of the new paradigm.

Bostrom’s book, Superintelligence – subtitled Paths, Dangers, Strategies – caught the attention of Stephen Hawking, Bill Gates, Elon Musk, Barack Obama, and many more, who have collectively amplified its message. That message is the need to dramatically increase the priority of research into the safety of systems that contain AGI (artificial general intelligence). AGI will be a significant step up in capability from today’s “narrow” AI (which includes deep learning as well as “good old fashioned” expert systems), and therefore requires a significant step up in capability of safety engineering. In the wake of a wider appreciation of the scale of the threat (and, yes, the opportunity) ahead, funding has been provided for important initiatives such as the Future of Life Institute, OpenAI, and Partnership on AI. Thank goodness!

Sinclair’s book, Lifespan – subtitled Why We Age, and Why We Don’t Have To – is poised to be read, understood, and amplified by a similar group of key influencers of public thinking. In this case, the message is that a transformation is at hand in how we think about illness and health. Rather than a “disease first” approach, what is now possible – and much more desirable – is an “aging first” approach that views aging as the treatable root cause of numerous diseases. In the wake of a wider appreciation of the scale of the opportunity ahead (and, yes, the threat to society if healthcare continues along its current outdated disease-first trajectory), funding is likely to be provided to accelerate research into the aging-first paradigm. Thank goodness!

Bostrom’s book drew upon the ideas of earlier writers, including Eliezer Yudkowsky and Ray Kurzweil. It also embodied decades of Bostrom’s own thinking and research into the field.

Sinclair’s book likewise builds upon ideas of earlier writers, including Aubrey de Grey and (again) Ray Kurzweil. Again, it also embodies decades of Sinclair’s own thinking and research into the field.

Both books are occasionally heavy going for the general reader – especially for a general reader who is in a hurry. But both take care to explain their thinking step by step. Both contain many human elements in their narrative. Neither book contains the last word on its subject matter – and, indeed, parts will likely prove to be incorrect in the fullness of time. But both represent giant steps forward for the paradigms they support.

The above remarks about the book Lifespan are part of what I’ll be talking about later today, in Brussels, at an open lunch event to mark the start of this year’s Longevity Month.

Longevity Month is an opportunity to celebrate recent progress, and to anticipate faster progress ahead, for the paradigm shift mentioned above:

  • Rather than studying each chronic disease separately, science should prioritise study of aging as the common underlying cause (and aggravator) of numerous chronic diseases
  • Rather than treating aging as an unalterable “fact of nature” (which, by the way, it isn’t), we should regard aging as an engineering problem which is awaiting an engineering solution.

In my remarks at this event, I’ll also be sharing my overall understanding of how paradigm shifts take place (and the opposition they face).

I’ll run through a simple explanation of the ideas behind the “aging-first” paradigm – a paradigm of regular medical interventions to repair or remove the damage caused at cellular and inter-cellular levels as a by-product of normal human metabolism.

Finally, I’ll be summarising the growing momentum of progress in a number of areas, and suggesting how that momentum has the potential to address the key remaining questions in the field.

In addition to me, four other speakers are scheduled to take part in today’s event.

It should be a great occasion!

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” tracks down and shoots the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000 word essay, Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”.

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic licence may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential of stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack their own expertise in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI. AGI is still situated far into the future. In this view, as stated by Demis Hassabis, founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool to humans, under human control, rather than having its own autonomy. This view is represented by this tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter. They’re no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all three of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people read that book. It manages to bring a great many serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. It’s the same with Stephen Hawking and with Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider, as just one example, Stuart Russell, a computer-science professor at the University of California, Berkeley, and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry:

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the comment about timescales. The first is to point out that Demis Hassabis himself sees no reason for any complacency, on account of the potential for AGI to require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

N <= 5: No way
5 < N <= 10: Small possibility
10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked to estimate the dates by which they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The median answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.

If we take these survey results at face value, there’s at least a 10% chance of breakthrough developments within the next ten years. It’s therefore no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.
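
As a purely illustrative aside, those three percentile figures can be read as points on a cumulative probability curve. Here’s a minimal Python sketch – my own construction, not anything from Bostrom’s book – that interpolates linearly between the survey’s percentile points to give a rough probability of human-level AGI arriving by any given year:

```python
# Percentiles from the combined survey results: (year, P(human-level AGI by that year)).
# Linear interpolation between these points is an assumption made purely for illustration.
SURVEY_CDF = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

def probability_agi_by(year):
    """Rough estimate of P(human-level AGI by `year`), interpolating the survey points."""
    if year <= SURVEY_CDF[0][0]:
        return SURVEY_CDF[0][1]   # simplification: clamp at the 10% point
    if year >= SURVEY_CDF[-1][0]:
        return SURVEY_CDF[-1][1]  # simplification: clamp at the 90% point
    for (y0, p0), (y1, p1) in zip(SURVEY_CDF, SURVEY_CDF[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(probability_agi_by(2022))  # 0.10 - the "10% by 2022" headline figure
print(probability_agi_by(2030))  # roughly 0.28 - far from negligible, on these numbers
```

Treating survey medians as a smooth curve is, of course, a big simplification; the point is only that even a cautious reading of these numbers assigns non-trivial probability to near-term breakthroughs.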

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements can (see the toy sketch after this list)
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” that galvanise society into supporting AGI research much more fully (especially once AGI starts producing big benefits in healthcare diagnosis).
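
To make the second of these bullet points concrete, here’s a deliberately tiny Python sketch – a toy of my own, not a benchmark from the AI literature – showing how a simple algorithmic improvement (memoisation) can outstrip anything a single hardware upgrade delivers:

```python
import time
from functools import lru_cache

def fib_naive(n):
    """Exponential-time recursion: the unimproved algorithm."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """The same function with memoisation: linear time."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

for fn in (fib_naive, fib_memo):
    start = time.perf_counter()
    fn(32)
    print(fn.__name__, round(time.perf_counter() - start, 6), "seconds")

# Typical result: fib_naive takes on the order of a second, fib_memo mere
# microseconds - several orders of magnitude, versus the ~2x of one hardware generation.
```

The same pattern – a better algorithm beating faster silicon – recurs throughout the history of computing, which is why software progress is so hard to forecast from hardware trends alone.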

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do (see the toy sketch after this list)
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.
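
As a cartoon illustration of the “what we asked, not what we meant” bullet above, consider this deliberately trivial Python sketch; the actions and payoffs are entirely invented for illustration:

```python
# A toy optimiser told to maximise the *reported* widget count.
# All actions and payoffs below are made up purely for illustration.
ACTIONS = {
    "make_widgets":    {"widgets_made": 10, "widgets_reported": 10},
    "corrupt_counter": {"widgets_made": 0,  "widgets_reported": 999},
    "do_nothing":      {"widgets_made": 0,  "widgets_reported": 0},
}

def best_action(objective):
    """Return the action that scores highest on the stated objective."""
    return max(ACTIONS, key=lambda action: ACTIONS[action][objective])

print(best_action("widgets_reported"))  # -> corrupt_counter: literal compliance
print(best_action("widgets_made"))      # -> make_widgets: what we actually wanted
```

The optimiser does exactly what its objective says – which is precisely the danger when the objective fails to capture what we meant.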

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.

Most of the countries where Columbus pitched the idea of his westward journey turned him down, trusting instead the (correct) larger figure for the circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) have four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!
