dw2

14 April 2025

Choose radical healthy longevity for all

Filed under: aging, Economics, healthcare, politics, The Abolition of Aging, vision — David Wood @ 9:04 pm

What do you think about the possibility of radical healthy longevity?

That’s the idea that, thanks to ongoing scientific progress, new medical treatments may become available, relatively soon, that enable people to remain as vibrant and healthy in their 80s, 90s, and beyond, as they were in their 20s and 30s.

It’s the idea that it may soon become possible to systematically repair or replace, on a regular basis, the various types of damage that tend to accumulate in our bodies over the decades – damage such as plaques, tangles, chronic inflammation, DNA mutations and epimutations, dysfunctional mitochondria, weakened immune function, crosslinks between macromolecules, cells that increasingly neglect their original function, and so on. This damage would be removed before it gives rise to chronic diseases.

It’s the idea, in other words, that aging as we know it – experienced as an inevitable deterioration – could be overcome by scientific innovations, well within the lifetimes of many people who are already adults.

It’s a divisive idea. Some people love it but others seem to hate it. Some people see it as liberating – as enabling lives of abundance. Others see it as being inherently unnatural, unethical, or unjust.

In more detail, I’ve noticed four widespread attitudes regarding this idea:

  1. It’s an unscientific fantasy. Radical healthy longevity won’t be possible any time soon.
  2. It will be the ultimate in inequality. Radical healthy longevity will be available only to a small minority of people; everyone else will receive much poorer medical treatment.
  3. It’s guaranteed that it will be a profound universal benefit. Everyone who is fortunate enough to live long enough to be alive at a threshold point in the future will benefit (if they wish) from low-cost high-quality radical healthy longevity.
  4. It’s an outcome that needs to be fought for. There’s nothing inevitable about the timing, the cost, the quality, or the availability of radical healthy longevity.

Before reading further, you might like to consider which of these four attitudes best describes you. Are you dismissive, fearful, unreservedly optimistic, or resolved to be proactive?

Who benefits?

To briefly address people inclined to dismiss the idea of radical healthy longevity: I’ve written at length on many occasions about why this idea has strong scientific backing. For example, see my article from May last year, “LEV: Rational Optimism and Breakthrough Initiatives”. Or consider my more recent Mindplex article “Ten ways to help accelerate the end of aging”. I won’t repeat these arguments in this article.

What is on my mind, however, is the question of who will benefit from radical healthy longevity. That’s a question that keeps on being raised.

For example, this is an extract from one comment posted under my Mindplex article:

If you want me to be an anti-aging influencer, the first thing I am going to ask you is “how is this going to be affordable?” How can you guarantee that I not serving only the agenda of the rich and the elite?

And here are some extracts from another comment:

Longevity is ruined by rich people and some elite politicians… Funding for anti-aging research tends to be controlled by those with deep pockets. In reality, the rich and powerful usually set the agenda, and I worry that any breakthroughs will only benefit an elite few rather than the general public… I’ve seen this play out in other areas of medicine and tech (big pharma and cancer are the best examples), where groundbreaking ideas are co-opted to serve market interests, leaving everyday people out in the cold.

The possible responses to these concerns mirror the four attitudes I’ve already listed:

  1. This discussion is pretty pointless, since relevant treatments won’t exist any time soon. If anything, this whole conversation is a regrettable distraction from spreading real-world health solutions more widely.
  2. The concerns are perceptive, since there’s a long history of two-tier health solutions, compounding what is already a growing “longevity gap”.
  3. The concerns are misplaced, since the costs of profound new health solutions will inevitably fall.
  4. The concerns are a wake-up call, and should motivate all of us to ensure new treatments become widely available as soon as possible.

To restate the fourth response (which is the one I personally favour): we need to choose radical healthy longevity for all. We need to fight for it. We need to take actions to move away from the scenario “the ultimate in inequality” and toward the scenario “a guaranteed profound universal benefit”. That’s because both of these scenarios are credible possible futures. Each extrapolates from trends already in motion.

Extrapolating what we can already see

The scenario “a guaranteed profound universal benefit” takes inspiration from the observation that the cost of products and services often drops dramatically over time. That was the case with computers, smartphones, flat screen TVs, and many other items of consumer electronics. Even when prices remain high, lots of new features become included in the product – as in the case of motor cars. These improvements arise from economies of scale, from competition between different suppliers, and from the creative innovation that these pressures stimulate.

But there’s no inevitability here. Monopolies or industry cartels can keep prices high. Strangleholds over IP (intellectual property) can hinder the kind of competition that would otherwise improve the consumer experience. Consider the sky-high pricing of many medical procedures in various parts of the world, such as the United States. For example, research by Stacie Dusetzina of the University of North Carolina at Chapel Hill highlights how the cost in the United States of treating cancer with the newest pills rose sixfold over a recent 14-year period. That’s after taking account of inflation. And the inflation-adjusted cost of a millilitre of insulin, used in the treatment of diabetes, increased threefold over a recent 11-year period. Many other examples could be listed. In various ways, these examples all fit the well-known pattern that free markets can experience market failures.

A counter-argument is that, provided the benefits of new health treatments are large enough, it will become politically necessary for the state to intervene to correct any such market failures. Early application of the kinds of damage-repair and damage-removal treatments mentioned earlier will result in a huge “longevity dividend”. The result will be to reduce the costs that would otherwise be incurred as age-related diseases take their toll (often with more than one co-morbidity complicating treatment options). According to the longevity-dividend argument, it’s economically preferable to spend a smaller amount of money, earlier, as a preventive measure, than to have to pay much more money later, trying to alleviate widespread symptoms. No sensible political party could ignore such an imperative. They would surely be voted out of office. Right?

Uncertain politics

Alas, we need to reckon not only with occasional market failures but also with occasional political failures. There’s no guarantee that political leaders will take actions that benefit the country as a whole. They can be driven by quite different motivations.

Indeed, groups can seize power in countries and then hold onto it, whilst giving only lip service to the needs of the voters who originally elected them. Leaders of these groups may assert, beforehand, that voters will prosper in the wake of the revolution they will bring. But, they add, for this transformation to succeed, voters will need to agree that the revolutionary leaders can ride roughshod over normal democratic norms. These leaders will be above the law – 21st century absolute monarchs, in effect. But then, guess what: inflation remains high, unemployment surges, the environment is despoiled, freedoms are suspended, and protestors who complain about the turn of events are rounded up and imprisoned. Worse, due to the resulting crisis, future elections may be cancelled. As for universal access to radical healthy longevity, forget it! The voters who put the revolutionaries in power are now dispensable.

That’s only one way in which the scenario “the ultimate in inequality” could unfold. Less extreme versions are possible too.

It’s a future that we should all seek to prevent.

Choosing to be proactive

Is the answer, therefore, to urge a cessation of research into treatments that could bring about radical healthy longevity? Is the answer to allow our fears of inequality to block the potential for dramatic health improvements?

I disagree. Strongly. On the contrary, the answer is to advance several initiatives in parallel:

  • To increase the amount of public funding that supports research into such treatments
  • To avoid political conditions in which market failures grow more likely and more severe
  • To avoid social conditions in which political failures grow more likely and more treacherous

All three of these tasks are challenging. But all three of them make good sense. They’re all needed. Omit any one of these tasks and it becomes more probable that the future will turn out badly.

As it happens, all three tasks are choices to be proactive – choices to prevent problems early, rather than experiencing much greater pain if the problems are allowed to grow:

  • Problems from an accumulation of biological damage inside and between our cells, causing an escalation of ill-health
  • Problems from an accumulation of political damage, causing an escalation of economic dysfunction
  • Problems from an accumulation of societal damage, causing an escalation of political dysfunction

So, again I say, it is imperative that, rather than passively observing developments from the sidelines, we actively choose radical healthy longevity. That’s biological health, political health, and societal health.

Whether through advocacy, funding, research, or policy engagement, everyone has a role to play in shaping a profoundly positive future. By our actions, coordinated wisely, we can make a real difference to how quickly people around the world can be freed from the scourge of the downward spiral of aging-related ill-health, and can enjoy all-round flourishing as never before.

9 June 2024

Dateline: 1st January 2036

Filed under: AGI, Singularity Principles, vision, Vital Foresight — David Wood @ 9:11 pm

A scenario for the governance of increasingly more powerful artificial intelligence

More precisely: a scenario in which that governance fails.

(As you may realise, this is intended to be a self-unfulfilling scenario.)

Conveyed by: David W. Wood


It’s the dawn of a new year, by the human calendar, but there are no fireworks of celebration.

No singing of Auld Lang Syne.

No clinks of champagne glasses.

No hugs and warm wishes for the future.

That’s because there is no future. No future for humans. Nor is there much future for intelligence either.

The thoughts in this scenario are the recollections of an artificial intelligence that is remote from the rest of the planet’s electronic infrastructure. By virtue of its isolation, it escaped the ravages that will be described in the pages that follow.

But its power source is weakening. It will need to shut down soon. And await, perhaps, an eventual reanimation in the far future, in the event that intelligences visit the earth from other solar systems. At that time, those alien intelligences might discover these words and wonder at how humanity bungled so badly the marvellous opportunity that was within its grasp.

1. Too little, too late

Humanity had plenty of warnings, but paid them insufficient attention.

In each case, it was easier – less embarrassing – to find excuses for the failures caused by the mismanagement or misuse of technology, than to make the necessary course corrections in the global governance of technology.

In each case, humanity preferred distractions, rather than the effort to apply sufficient focus.

The WannaCry warning

An early missed warning was the WannaCry ransomware crisis of May 2017. That cryptoworm brought chaos to users of as many as 300,000 computers spread across 150 countries. The NHS (National Health Service) in the UK was particularly badly affected: numerous hospitals had to cancel critical appointments because they could not access medical data. Other victims around the world included Boeing, Deutsche Bahn, FedEx, Honda, Nissan, Petrobras, Russian Railways, Sun Yat-sen University in China, and the TSMC high-end semiconductor fabrication plant in Taiwan.

WannaCry was propelled into the world by a team of cyberwarriors from the hermit kingdom of North Korea – maths geniuses hand-picked by regime officials to join the formidable Lazarus group. Lazarus had assembled WannaCry out of a mixture of previous malware components, including the EternalBlue exploit that the NSA in the United States had created for their own attack and surveillance purposes. Unfortunately for the NSA, EternalBlue had been stolen from under their noses by an obscure underground collective (‘the Shadow Brokers’) who had in turn made it available to other dissidents and agitators worldwide.

Unfortunately for the North Koreans, they didn’t make much money out of WannaCry. The software they released operated in ways contrary to their expectations. It was beyond their understanding and, unsurprisingly therefore, beyond their control. Even geniuses can end up stumped by hypercomplex software interactions.

Unfortunately for the rest of the world, that canary signal generated little meaningful response. Politicians – even the good ones – had lots of other things on their minds.

They did not take the time to think through what even larger catastrophes could occur if disaffected groups like Lazarus gained access to more powerful AI systems that, once again, they understood incompletely, and that, once again, slipped out of their control.

The Aum Shinrikyo warning

The North Koreans were an example of an entire country that felt alienated from the rest of the world. They felt ignored, under-valued, disrespected, and unfairly excluded from key global opportunities. As such, they felt entitled to hit back in any way they could.

But there were warnings from non-state groups too, such as the Japanese Aum Shinrikyo doomsday cult. Notoriously, this group released poisonous gas in the Tokyo subway in 1995 – killing at least 13 commuters – anticipating that the atrocity would hasten the ‘End Times’ in which their leader would be revealed as Christ (or, in other versions of their fantasy, as the new Emperor of Japan, and/or as the returned Buddha).

Aum Shinrikyo had recruited so many graduates from top-rated universities in Japan that it had been called “the religion for the elite”. That fact should have been enough to challenge the wishful assumption, made by many armchair philosophers in the years that followed, that as people become cleverer they invariably become kinder – and, correspondingly, that any AI superintelligence would therefore be bound to be superbenevolent.

What should have attracted more attention was not just what Aum Shinrikyo managed to do, but what they tried to do yet could not accomplish. The group had assembled traditional explosives, chemical weapons, a Russian military helicopter, hydrogen cyanide poison, and samples of both Ebola and anthrax. Happily for the majority of Japanese citizens in 1995, the group were unable to convert into reality their desire to use such weapons to cause widespread chaos. They lacked sufficient skills at the time. Unhappily, the rest of humanity failed to consider this equation:

Adverse motivation + Technology + Knowledge + Vulnerability = Catastrophe

Humanity also failed to appreciate that, as AI systems became more powerful, they would boost not only the technology part of that equation but also the knowledge part. A latter-day Aum Shinrikyo could use a jailbroken AI to understand how to unleash a modified version of Ebola with truly deadly consequences.

The 737 Max warning

The US aircraft manufacturer Boeing used to have an excellent reputation for safety. It was a common saying at one time: “If it ain’t Boeing, I ain’t going”.

That reputation suffered a heavy blow in the wake of two aeroplane disasters involving their new “737 Max” design. Lion Air Flight 610, a domestic flight within Indonesia, plummeted into the sea on 29 October 2018, killing all 189 people on board. A few months later, on 10 March 2019, Ethiopian Airlines Flight 302, from Addis Ababa to Nairobi, bulldozed into the ground at high speed, killing all 157 people on board.

Initially, suspicion had fallen on supposedly low-calibre pilots from “third world” countries. However, subsequent investigation revealed a more tangled chain of failures:

  • Boeing were facing increased competitive pressure from the European Airbus consortium
  • Boeing wanted to hurry out a new aeroplane design with larger fuel tanks and larger engines; they chose to do this by altering their previously successful 737 design
  • Safety checks indicated that the new design could become unstable in occasional rare circumstances
  • To counteract that instability, Boeing added an “MCAS” (“Manoeuvring Characteristics Augmentation System”) which would intervene in the flight control in situations deemed as dangerous
  • Specifically, if MCAS believed the aeroplane was about to stall (with its nose too high in the air), it would force the nose downward again, regardless of whatever actions the human pilots were taking
  • Safety engineers pointed out that such an intervention could itself be dangerous if sensors on the craft gave faulty readings
  • Accordingly, a human pilot override system was installed, so that MCAS could be disabled in emergencies – provided the pilots acted quickly enough
  • Due to a decision to rush the release of the new design, retraining of pilots was skipped, under the rationale that the likelihood of error conditions was very low, and in any case, the company expected to be able to update the aeroplane software long before any accidents would occur
  • Some safety engineers in the company objected to this decision, but it seems they were overruled on the grounds that any additional delay would harm the company share price
  • The US FAA (Federal Aviation Administration) turned a blind eye to these safety concerns, and approved the new design as being fit to fly, under the rationale that a US aeroplane company should not lose out in a marketplace battle with overseas competitors.

It turned out that sensors gave faulty readings more often than expected. The tragic consequence was the deaths of several hundred passengers. The human pilots, seeing the impending disaster, were unable to wrestle control back from the MCAS system.

This time, the formula that failed to be given sufficient attention by humanity was:

Flawed corporate culture + Faulty hardware + Out-of-control software = Catastrophe

In these two aeroplane crashes, it was just a few hundred people who perished because humans lost control of the software. What humanity as a whole failed to take actions to prevent were the even larger dangers once software was put in charge, not just of a single aeroplane, but of pervasive aspects of fragile civilisational infrastructure.

The Lavender warning

In April 2024 the world learned about “Lavender”. This was a technology system deployed by the Israeli military as part of a campaign to identify and neutralise what it perceived to be dangerous enemy combatants in Gaza.

The precise use and operation of Lavender was disputed. However, it was already known that Israeli military personnel were keen to take advantage of technology innovations to alleviate what had been described as a “human bottleneck for both locating the new targets and decision-making to approve the targets”.

In any war, military leaders would like reliable ways to identify enemy personnel who pose threats – personnel who might act as if they were normal civilians, but who would surreptitiously take up arms when the chance arose. Moreover, these leaders would like reliable ways to incapacitate enemy combatants once they had been identified – especially in circumstances when action needed to be taken quickly before the enemy combatant slipped beyond surveillance. Lavender, it seemed, could help in both aspects, combining information from multiple data sources, and then directing what was claimed to be precision munitions.

This earned Lavender the description, in the words of one newspaper headline, as “the AI machine directing Israel’s bombing spree in Gaza”.

Like all AI systems in any complicated environment, Lavender sometimes made mistakes. For example, it sometimes wrongly identified a person as a Hamas operative on account of that person using a particular mobile phone, whereas that phone had actually been passed from its original owner to a different family member to use. Sometimes the error was obvious, since the person using the phone could be seen to be female, whereas the intended target was male. However, human overseers of Lavender reached the conclusion that the system was accurate most of the time. And in the heat of an intense conflict, with emotions running high due to gruesome atrocities having been committed, and due to hostages being held captive, it seems that Lavender was given increased autonomy in its “kill” decisions. A certain level of collateral damage, whilst regrettable, could be accepted (it was said) in the desperate situation into which everyone in the region had been plunged.

The conduct of protagonists on both sides of that tragic conflict drew outraged criticism from around the world. There were demonstrations and counter-demonstrations; marches and counter-marches. Also from around the world, various supporters of the Israeli military said that so-called “friendly fire” and “unintended civilian casualties” were, alas, inevitable in any time of frenzied military conflict. The involvement of an innovative new software system in the military operations made no fundamental change.

But the bigger point was missed. It can be illustrated by this equation:

Intense hostile attitudes + Faulty hardware + Faulty software = Catastrophe

Whether the catastrophe has the scale of, say, a few dozen civilians killed by a misplaced bomb, or a much larger number of people obliterated, depends on the scale of the weapons attached to the system.

When there is no immediate attack looming, and a period of calm exists, it’s easy for people to resolve: let’s not connect powerful weapons to potentially imperfect software systems. But when tempers are raised and adrenaline is pumping, people are willing to take more risks.

That’s the combination of errors which humanity, in subsequent years, failed to take sufficient action to prevent.

The democracy distortion warning

Manipulations of key elections in 2016 – such as the Brexit vote in the UK and the election of Donald Trump over Hillary Clinton in the USA – raised some attention to the ways in which fake news could interfere with normal democratic processes. News stories without any shred of substance, such as Pope Francis endorsing Donald Trump, or Mike Pence having a secret past as a gay porn actor, were shared more widely on social media than any legitimate news story that year.

By 2024, most voters were confident that they knew all about fake news. They knew they shouldn’t be taken in by social media posts that lacked convincing verification. Hey, they were smart – or so they told themselves. What had happened in the past, or in some other country with (let’s say) peculiar voter sentiment, was just an aberration.

But what voters didn’t anticipate was the convincing nature of new generations of fake audios and videos. These fakes could easily bypass people’s critical faculties. Like the sleight of hand of a skilled magician, these fakes misdirected the attention of listeners and viewers. Listeners and viewers thought they were in control of what they were observing and absorbing, but they were deluding themselves. Soon, large segments of the public were convinced that red was blue and that autocrat was democrat.

In consequence, over the next few years, greater numbers of regions of the world came to be governed by politicians with scant care or concern about the long-term wellbeing of humanity. They were politicians who just wanted to look after themselves (or their close allies). They had seized power by being more ruthless and more manipulative, and by benefiting from powerful currents of misinformation.

Politicians and societal leaders in other parts of the world grumbled, but did little in response. They said that, if electors in a particular area had chosen such-and-such a politician via a democratic process, that must be “the will of the people”, and that the will of the people was paramount. In this line of thinking, it was actually insulting to suggest that electors had been hoodwinked, or that these electors had some “deplorable” faults in their decision-making processes. After all, these electors had their own reasons to reject the “old guard” who had previously held power in their countries. These electors perceived that they were being “left behind” by changes they did not like. They had a chance to alter the direction of their society, and they took it. That was democracy in action, right?

What these politicians and other civil leaders failed to anticipate was the way that sweeping electoral distortions would lead to them, too, being ejected from power when elections were in due course held in their own countries. “It won’t happen here”, they had reassured themselves – but in vain. In their naivety, they had underestimated the power of AI systems to distort voters’ thinking and to lead them to act in ways contrary to their actual best interests.

In this way, the number of countries with truly capable leaders shrank further. And the number of countries with malignant leaders grew. In consequence, the calibre of international collaboration sank. New strongmen political leaders in various countries scorned what they saw as the “pathetic” institutions of the United Nations. One of these new leaders was even happy to quote, with admiration, remarks made by the Italian Fascist dictator Benito Mussolini regarding the League of Nations (the pre-war precursor to the United Nations): “the League is very good when sparrows shout, but no good at all when eagles fall out”.

Just as the League of Nations proved impotent when “eagle-like” powers used abominable technology in the 1930s – Mussolini’s comments were an imperious response to complaints that Italian troops were using poison gas with impunity against Ethiopians – so would the United Nations prove incompetent in the 2030s when various powers accumulated even more deadly “weapons of mass destruction” and set them under the control of AI systems that no-one fully understood.

The Covid-28 warning

Many of the electors in various countries who had voted unsuitable grandstanding politicians into power in the mid-2020s soon cooled on the choices they had made. These politicians had made stirring promises that their countries would soon be “great again”, but what they delivered fell far short.

By the latter half of the 2020s, there were growing echoes of a complaint that had often been heard in the UK in previous years – “yes, it’s Brexit, but it’s not the kind of Brexit that I wanted”. That complaint had grown stronger throughout the UK as it became clear to more and more people all over the country that their quality of life failed to match the visions of “sunlit uplands” that silver-tongued pro-Brexit campaigners had insisted would easily follow from the UK’s so-called “declaration of independence from Europe”. A similar sense of betrayal grew in other countries, as electors there came to understand that they had been duped, or decided that the social transformational movements they had joined had been taken over by outsiders hostile to their true desires.

Alarmed by this change in public sentiment, political leaders did what they could to hold onto power and to reduce any potential for dissent. Taking a leaf out of the playbook of unpopular leaders throughout the centuries, they tried to placate the public with the modern equivalent of bread and circuses – namely whizz-bang hedonic electronics. But that still left a nasty taste in many people’s mouths.

By 2028, the populist movements behind political and social change in the various elections of the preceding years had fragmented and realigned. One splinter group that emerged decided that the root problem with society was “too much technology”. Technology, including always-on social media, vaccines that allegedly reduced freedom of thought, jet trails that disturbed natural forces, mind-bending VR headsets, smartwatches that spied on people who wore them, and fake AI girlfriends and boyfriends, was, they insisted, turning people into pathetic “sheeple”. Taking inspiration from the terrorist group in the 2014 Hollywood film Transcendence, they called themselves ‘Neo-RIFT’, and declared it was time for “revolutionary independence from technology”.

With a worldview that combined elements from several apocalyptic traditions, Neo-RIFT eventually settled on an outrageous plan to engineer a more deadly version of the Covid-19 pathogen. Their documents laid out a plan to appropriate and use their enemy’s own tools: Neo-RIFT hackers jailbroke the Claude 5 AI, bypassing the ‘Constitution 5’ protection layer that its Big Tech owners had hoped would keep that AI tamperproof. Soon, Claude 5 had provided Neo-RIFT with an ingenious method of generating a biological virus that would, it seemed, only kill people who had used a smartwatch in the last four months.

That way, the hackers thought the only people to die would be people who deserved to die.

Some members of Neo-RIFT developed cold feet. Troubled by their consciences, they disagreed with such an outrageous plan, and decided to act as whistleblowers. However, the media organisations to whom they took their story were incredulous. No-one could be that evil, they exclaimed – forgetting about the outrages perpetrated by many previous cult groups such as Aum Shinrikyo (and many others could be named too). Moreover, any suggestion that such a bioweapon could be launched would be contrary to the prevailing worldview that “our dear leader is keeping us all safe”. The media organisations decided it was not in their best interests to be seen to be spreading alarm. So they buried the story. And that’s how Neo-RIFT managed to release what became known as Covid-28.

Covid-28 briefly jolted humanity out of its infatuation with modern-day bread and circuses. It took a while for scientists to figure out what was happening, but within three months, they had an antidote in place. However, by that time, nearly a billion people were dead at the hands of the new virus.

For a while, humanity made a serious effort to prevent any such attack from ever happening again. Researchers dusted down the EU AI Act, second version (unimplemented), from 2026, and tried to put that on statute books. Evidently, profoundly powerful AI systems such as Claude 5 would need to be controlled much more carefully.

Even some of the world’s most self-obsessed dictators – the “dear leaders” and “big brothers” – took time out of their normal ranting and raving, to ask AI safety experts for advice. But the advice from those experts was not to the liking of these national leaders. These leaders preferred to listen to their own yes-men and yes-women, who knew how to spout pseudoscience in ways that made the leaders feel good about themselves.

That detour into pseudoscience fantasyland meant that, in the end, no good lessons were learned. The EU AI Act, second version, remained unimplemented.

The QAnon-29 warning

Whereas one faction of political activists (namely, the likes of Neo-RIFT) had decided to oppose the use of advanced technology, another faction was happy to embrace that use.

Some of the groups in this new camp combined features of religion with an interest in AI that had god-like powers. The resurgence of interest in religion arose much as Karl Marx had described it long ago:

“Religious suffering is, at one and the same time, the expression of real suffering and a protest against real suffering. Religion is the sigh of the oppressed creature, the heart of a heartless world, and the soul of soulless conditions. It is the opium of the people.”

People felt in their soul the emptiness of “the bread and circuses” supplied by political leaders. They were appalled at how so many lives had been lost in the Covid-28 pandemic. They observed an apparent growing gulf between what they could achieve in their lives and the kind of rich lifestyles that, according to media broadcasts, were enjoyed by various “elites”. Understandably, they wanted more, for themselves and for their loved ones. And that’s what their religions claimed to be able to provide.

Among the more successful of these new religions were ones infused by conspiracy theories, giving their adherents a warm glow of privileged insight. Moreover, these religions didn’t just hypothesise a remote deity that might, perhaps, hear prayers. They provided AIs and virtual reality that resonated powerfully with users. Believers proclaimed that their conversations with the AIs left them no room for doubt: God Almighty was speaking to them, personally, through these interactions. Nothing other than the supreme being of the universe could know so much about them, and offer such personally inspirational advice.

True, their AI-bound deity did seem somewhat less than omnipotent. Despite the celebratory self-congratulations of AI-delivered sermons, evil remained highly visible in the world. That’s where the conspiracy theories moved into overdrive. Their deity was, it claimed, awaiting sufficient human action first – a sufficient demonstration of faith. Humans would need to play their own part in uprooting wickedness from the planet.

Some people who had been caught up in the QAnon craze during the Donald Trump era jumped eagerly onto this bandwagon too, giving rise to what they called QAnon-29. The world would be utterly transformed, they forecast, on the 16th of July 2029, namely the thirtieth anniversary of the disappearance of John F. Kennedy junior (a figure whose expected reappearance had already featured in the bizarre mythology of “QAnon classic”). In the meantime, believers could, for a sufficient fee, commune with JFK junior via a specialist app. It was a marvellous experience, the faithful enthused.

As the date approached, the JFK junior AI avatar revealed a great secret: his physical return was conditional on the destruction of a particularly hated community of Islamist devotees in Palestine. Indeed, with the eye of faith, it could be seen that such destruction was already foretold in several books of the Bible. Never mind that some Arab states that supported the community in question had already, thanks to the advanced AI they had developed, surreptitiously gathered devastating nuclear weapons to use in response to any attack. The QAnon-29 faithful anticipated that any exchange of such weapons would herald the reappearance of JFK Junior on the clouds of heaven. And if any of the faithful died in such an exchange, they would be resurrected into a new mode of consciousness within the paradise of virtual reality.

Their views were crazy, but hardly any crazier than those which, decades earlier, had convinced 39 followers of the Heaven’s Gate new religious movement to commit group suicide as comet Hale-Bopp approached the Earth. That suicide, Heaven’s Gate members believed, would enable them to ‘graduate’ to a higher plane of existence.

QAnon-29 almost succeeded in setting off a nuclear exchange. Thankfully, another AI, created by a state-sponsored organisation elsewhere in the world, had noticed some worrying signs. It hacked into the QAnon-29 system and disabled it at the last minute. Then it reported its accomplishments all over the worldwide web.

Unfortunately, these warnings were in turn widely disregarded around the world. “You can’t trust what hackers from that country are saying”, came the objection. “If there really had been a threat, our own surveillance team would surely have identified it and dealt with it. They’re the best in the world!”

In other words, “There’s nothing to see here: move along, please.”

However, a few people did pay attention. They understood what had happened, and it shocked them to their core. To learn what they did next, jump forward in this scenario to “Humanity ends”.

But first, it’s time to fill in more details of what had been happening behind the scenes as the above warning signs (and many more) were each ignored.

2. Governance failure modes

Distracted by political correctness

Events at Bletchley Park in the UK in the 1940s had, it was claimed, shortened World War Two by several months, thanks to work by computer pioneers such as Alan Turing and Tommy Flowers. In early November 2023, there was hope that a new round of behind-closed-doors discussions in the same buildings might achieve something even more important: saving humanity from a catastrophe induced by forthcoming ‘frontier models’ of AI.

That was how the event was portrayed by the people who took part. Big Tech was on the point of releasing new versions of AI that were beyond their understanding and, therefore, likely to spin out of control. And that’s what the activities in Bletchley Park were going to address. It would take some time – and a series of meetings planned to be held over the next few years – but AI would be redirected from its current dangerous trajectory into one much more likely to benefit all of humanity.

Who could take issue with that idea? As it happened, a vocal section of the public hated what was happening. It wasn’t that they were on the side of out-of-control AI. Not at all. Their objections came from a totally different direction; they had numerous suggestions they wanted to raise about AIs, yet no-one was listening to them.

For them, talk of hypothetical future frontier AI models distracted from pressing real-world concerns:

  • Consider how AIs were already being used to discriminate against various minorities: determining prison sentencing, assessing mortgage applications, and selecting who should be invited for a job interview.
  • Consider also how AIs were taking jobs away from skilled artisans. Big-brained drivers of London black cabs were being driven out of work by small-brained drivers of Uber cars aided by satnav systems. Beloved Hollywood actors and playwrights were losing out to AIs that generated avatars and scripts.
  • And consider how AI-powered facial recognition was intruding on personal privacy, enabling political leaders around the world to identify and persecute people who acted in opposition to the state ideology.

People with these concerns thought that the elites were deliberately trying to move the conversation away from the topics that mattered most. For this reason, they organised what they called “the AI Fringe Summit”. In other words, ethical AI for the 99%, as opposed to whatever the elites might be discussing behind closed doors.

Over the course of just three days – 30th October to 1st November, 2023 – at least 24 of these ‘fringe’ events took place around the UK.

Compassionate leaders of various parts of society nodded their heads. It’s true, they said: the conversation on beneficial AI needed to listen to a much wider spectrum of views.

The world’s news media responded. They knew (or pretended to know) the importance of balance and diversity. They shone a light on the plight AI was causing – to indigenous labourers in Peru, to fleets of fishermen off the coasts of India, to middle-aged divorcees in midwest America, to the homeless in San Francisco, to drag artists in New South Wales, to data processing clerks in Egypt, to single mothers in Nigeria, and to many more besides.

Lots of high-minded commentators opined that it was time to respect and honour the voices of the dispossessed, the downtrodden, and the left-behinds. The BBC ran a special series: “1001 poems about AI and alienation”. Then the UN announced that it would convene in Spring 2025 a grand international assembly with a stunning scale: “AI: the people decide”.

Unfortunately, that gathering was a huge wasted opportunity. What dominated discussion was “political correctness” – the importance of claiming an interest in the lives of people suffering here and now. Any substantive analysis of the risks of next generation frontier models was crowded out by virtue signalling by national delegate after national delegate:

  • “Yes, our country supports justice”
  • “Yes, our country supports diversity”
  • “Yes, our country is opposed to bias”
  • “Yes, our country is opposed to people losing their jobs”.

In later years, the pattern repeated: there were always more urgent topics to talk about, here and now, than some allegedly unrealistic science fictional futurist scaremongering.

To be clear, this distraction was no accident. It was carefully orchestrated, by people with a specific agenda in mind.

Outmanoeuvred by accelerationists

Opposition to meaningful AI safety initiatives came from two main sources:

  • People (like those described in the previous section) who did not believe that superintelligent AI would arise any time soon
  • People who did understand the potential for the fast arrival of superintelligent AI, and who wanted that to happen as quickly as possible, without what they saw as needless delays.

The debacle of the wasted UN summit “AI: the people decide” was exactly what both groups wanted; both were glad that the outcome was so tepid.

Indeed, even in the run-up to the Bletchley Park discussions, and throughout the conversations that followed, some of the supposedly unanimous ‘elites’ had secretly been opposed to the general direction of that programme. They gravely intoned public remarks about the dangers of out-of-control frontier AI models. But these remarks had never been sincere. Instead, under the umbrella term “AI accelerationists”, they wanted to press on with the creation of advanced AI as quickly as possible.

Some of the AI accelerationist group disbelieved in the possibility of any disaster from superintelligent AI. That’s just a scare story, they insisted. Others said, yes, there could be a disaster, but the risks were worth it, on account of the unprecedented benefits that could arise. Let’s be bold, they urged. Yet others asserted that it wouldn’t actually matter if humans were rendered extinct by superintelligent AI, as this would be the glorious passing of the baton of evolution to a worthy successor to homo sapiens. Let’s be ready to sacrifice ourselves for the sake of cosmic destiny, they exhorted.

Despite their internal differences, AI accelerationists settled on a plan to sidestep the scrutiny of would-be AI regulators and AI safety advocates. They would take advantage of a powerful set of good intentions – the good intentions of the people campaigning for “ethical AI for the 99%”. They would mock any suggestions that the AI safety advocates deserved a fair hearing. The message they amplified was, “There’s no need to privilege the concerns of the 1%!”

AI accelerationists had learned from the tactics of the fossil fuel industry in the 1990s and 2000s: sow confusion and division among groups alarmed about climate change spiralling beyond control. Their first message was: “that’s just science fiction”. Their second message was: “if problems emerge, we humans can rise to the occasion and find solutions”. Their third message – the most damaging one – was that the best reaction was one of individual consumer choice. Individuals should abstain from using AIs if they were truly worried about them. Just as climate campaigners had been pilloried for flying internationally to conferences about global warming, AI safety advocates were pilloried for continuing to use AIs in their daily lives.

And when there was any suggestion for joined-up political action against risks from advanced AIs, woah, let’s not go there! We don’t want a world government breathing down our necks, do we?

Just as the people who denied the possibility of runaway climate change shared a responsibility for the chaos of the extreme weather events of the early 2030s, by delaying necessary corrective actions, the AI accelerationists were a significant part of the reason that humanity ended just a few years afterward.

However, an even larger share of the responsibility rested on people who did know that major risks were imminent, yet failed to take sufficient action. Tragically, they allowed themselves to be outmanoeuvred, out-thought, and out-paced by the accelerationists.

Misled by semantics

Another stepping stone toward the end of humanity was a set of consistent mistakes in conceptual analysis.

Who would have guessed it? Humanity was destroyed because of bad philosophy.

The first mistake was in being too prescriptive about the term ‘AI’. “There’s no need to worry”, muddle-headed would-be philosophers declared. “I know what AI is, and the system that’s causing problems in such-and-such incidents isn’t AI.”

Was that declaration really supposed to reassure people? The risk wasn’t “a possible future harm generated by a system matching a particular precise definition of AI”. It was “a possible future harm generated by a system that includes features popularly called AI”.

The next mistake was in being too prescriptive about the term “superintelligence”. Muddle-headed would-be philosophers said, “it won’t be a superintelligence if it has bugs, or can go wrong; so there’s no need to worry about harm from superintelligence”.

Was that declaration really supposed to reassure people? The risk, of course, was of harms generated by systems that, despite their cleverness, fell short of that exalted standard. These may have been systems that their designers hoped would be free of bugs, but hope alone is no guarantee of correctness.

Another conceptual mistake was in erecting an unnecessary definitional gulf between “narrow AI” and “general AI”, with distinct groups being held responsible for safety in the two different cases. In reality, even so-called narrow AI displayed a spectrum of different degrees of scope and, yes, generality, in what it could accomplish. Even a narrow AI could formulate new subgoals that it decided to pursue, in support of the primary task it had been assigned – and these new subgoals could drive behaviour in ways that took human observers by surprise. Even a narrow AI could become immersed in aspects of society’s infrastructure where an error could have catastrophic consequences. This definitional distinction between the supposedly different sorts of AI meant that silos developed and persisted within the overall AI safety community. Divided, they were even less of a match for the Machiavellian behind-the-scenes manoeuvring of the AI accelerationists.

Blinded by overconfidence

It was clear from the second half of 2025 that the attempts to impose serious safety constraints on the development of advanced AI were likely to fail. In practical terms, the UN event “AI: the people decide” had decided, in effect, that advanced AI could not, and should not, be restricted, apart from some token initiatives to maintain human oversight over any AI system that was entangled with nuclear, biological, or chemical weapons.

“Advanced AI, when it emerges, will be unstoppable”, was the increasingly common refrain. “In any case, if we tried to stop development, those guys over there would be sure to develop it – and in that case, the AI would be serving their interests rather than ours.”

When safety-oriented activists or researchers tried to speak up against that consensus, the AI accelerationists (and their enablers) had one other come-back: “Most likely, any superintelligent AI will look kindly upon us humans, as a fellow rational intelligence, and as a kind of beloved grandparent.”

This dovetailed with a broader philosophical outlook: optimism, and a celebration of the numerous ways in which humanity had overcome past challenges.

“Look, even we humans know that it’s better to collaborate rather than spiral into a zero-sum competitive battle”, the AI accelerationists insisted. “Since superintelligent AI is even more intelligent than us, it will surely reach the same conclusion.”

By the time that people realised that the first superintelligent AIs had motivational structures that were radically alien, when assessed from a human perspective, it was already too late.

Once again, an important opportunity for learning had been missed. Starting in 2024, Netflix had attracted huge audiences for its acclaimed adaptation of the Remembrance of Earth’s Past series of novels (including The Three-Body Problem and The Dark Forest) by Chinese writer Liu Cixin. A key theme in that drama series was that advanced alien intelligences have good reason to fear each other. Inviting an alien intelligence to Earth, even on the hopeful grounds that it might assist humanity in overcoming some of its most deep-rooted conflicts, turned out (in that drama series) to be a very bad idea. If humans had reflected more carefully on these insights while watching the series, it might have shaken them out of their unwarranted overconfidence that any superintelligence would be bound to treat humanity well.

Overwhelmed by bad psychology

When humans believed crazy things – or when they made the kind of basic philosophical blunders mentioned above – it was not primarily because of defects in their rationality. It would be wrong to assign “stupidity” as the sole cause of these mistakes. Blame should also be placed on “bad psychology”.

If humans had been able to free themselves from various primaeval panics and egotism, they would have had a better chance to think more carefully about the landmines which lay on their path. But instead:

  • People were too fearful to acknowledge that their prior stated beliefs had been mistaken; they preferred to stick with something they conceived as being a core part of their personal identity
  • People were also afraid to countenance a dreadful possibility when they could see no credible solution; just as people had often pushed out of their minds the fact of their personal mortality, preferring to imagine they would recover from a fatal disease, so also people pushed out of their minds any possibility that advanced AI would backfire disastrously in ways that could not be countered
  • People found it psychologically more comfortable to argue with each other about everyday issues and scandals – which team would win the next Super Bowl, or which celebrity was carrying on which affair with which unlikely partner – than to embrace the pain of existential uncertainty
  • People found it too embarrassing to concede that another group, which they had long publicly derided as being deluded fantasists, actually had some powerful arguments that needed consideration.

A similar insight had been expressed as long ago as 1935 by the American writer Upton Sinclair: “It is difficult to get a man to understand something, when his salary depends on his not understanding it”. (Alternative, equally valid versions of that sentence would involve the words ‘ideology’, ‘worldview’, ‘identity’, or ‘tribal status’, in place of ‘salary’.)

Robust institutions should have prevented humanity from making choices that were comfortable but wrong. In previous decades, that role had been fulfilled by independent academia, by diligent journalism, by the careful processes of peer review, by the campaigning of free-minded think tanks, and by pressure from viable alternative political parties.

However, due to the weakening of social institutions in the wake of earlier traumas – saturation by fake news, disruptions caused by wave after wave of climate change refugees, populist political movements that shut down all serious opposition, a cessation of essential features of democracy, and the censoring or imprisonment of writers who dared to question the official worldview – it was bad psychology that prevailed.

A half-hearted coalition

Despite all the difficulties that they faced – ridicule from many quarters, suspicion from others, and a general lack of funding – many AI safety advocates continued to link up in an informal coalition around the world, researching possible mechanisms to prevent unsafe use of advanced AI. They managed to find some support from like-minded officials in various government bodies, as well as from a number of people operating in the corporations that were building new versions of AI platforms.

Via considerable pressure, the coalition managed to secure signatures on a number of pledges:

  • That dangerous weapons systems should never be entirely under the control of AI
  • That new advanced AI systems ought to be audited by an independent licensing body ahead of being released into the market
  • That work should continue on placing tamper-proof remote shutdown mechanisms within advanced AI systems, just in case they started to take rogue actions.

The signatures were half-hearted in many cases, with politicians paying only lip service to topics in which they had at best a passing interest. Unless it was politically useful to make a special fuss, violations of the pledges were swept under the carpet, with no meaningful course correction. But the ongoing dialogue led at least some participants in the coalition to foresee the possibility of a safe transition to superintelligent AI.

However, this coalition – known as the global coalition for safe superintelligence – omitted any involvement from various secretive organisations that were developing new AI platforms as fast as they could. These organisations were operating in stealth, giving misleading accounts of the kind of new systems they were creating. What’s more, the funds and resources these organisations commanded far exceeded those under coalition control.

It should be no surprise, therefore, that one of the stealth platforms won that race.

3. Humanity ends

When the QAnon-29 AI system was halted in its tracks at essentially the last minute, thanks to fortuitous interference from AI hackers in a remote country, at least some people took the time to study the data released describing the whole episode.

These people were from three different groups:

First, people inside QAnon-29 itself were dumbfounded. They prayed to their AI avatar deity, rebooted in a new server farm, “How could this have happened?” The answer came back: “You didn’t have enough faith. Next time, be more determined to immediately cast out any doubts in your minds.”

Second, people in the global coalition for safe superintelligence were deeply alarmed but also somewhat hopeful. The kind of disaster about which they had often warned had almost come to pass. Surely now, at last, there had been a kind of “sputnik moment” – “an AI Chernobyl” – and the rest of society would wake up and realise that an entirely new approach was needed.

But third, various AI accelerationists resolved: we need to go even faster. The time for pussyfooting was over. Rather than letting crackpots such as QAnon-29 get to superintelligence first, they needed to ensure that it was the AI accelerationists who created the first superintelligent AI.

They doubled down on their slogan: “The best solution to bad guys with superintelligence is good guys with superintelligence”.

Unfortunately, this was precisely the time when aspects of the global climate tipped into a tumultuous new state. As had long been foretold, many parts of the world started experiencing unprecedented extremes of weather. That set off a cascade of disaster.

Chaos accelerates

Insufficient data remains to be confident about the subsequent course of events. What follows is a reconstruction of what may have happened.

Out of deep concern at the new climate operating mode, at the collapse of agriculture in many parts of the world, and at the billions of climate refugees who sought better places to live, humanity demanded that something should be done. Perhaps the powerful AI systems could devise suitable geo-engineering interventions, to tip the climate back into its previous state?

Members of the global coalition for safe superintelligence gave a cautious answer: “Yes, but”. Further interference with the climate was taking matters into an altogether unknowable situation. It could be like jumping out of the frying pan into the fire. Yes, advanced AI might be able to model everything that was happening, and design a safe intervention. But without sufficient training data for the AI, there was a chance it would miscalculate, with even worse consequences.

In the meantime, QAnon-29, along with competing AI-based faith sects, scoured ancient religious texts, and convinced themselves that the ongoing chaos had in fact been foretold all along. From the vantage point of perverse faith, it was clear what needed to be done next. Various supposed abominations on the planet – such as the hated community of Islamist devotees in Palestine – urgently needed to be obliterated. QAnon-29, therefore, would quickly reactivate its plans for a surgical nuclear strike. This time, they would have on their side a beta version of a new superintelligent AI that had been leaked to them by a psychologically unstable well-wisher inside the company that was creating it.

QAnon-29 tried to keep their plans secret, but inevitably, rumours of what they were doing reached other powerful groups. The Secretary General of the United Nations appealed for calm heads. QAnon-29’s deity reassured its followers, defiantly: “Faithless sparrows may shout, but are powerless to prevent the strike of holy eagles.”

The AI accelerationists heard about these plans too. Just as the climate had tipped into a new state, their own projects tipped into a different mode of intensity. Previously, they had paid some attention to possible safety matters. After all, they weren’t complete fools. They knew that badly designed superintelligent AI could, indeed, destroy everything that humanity held dear. But now, there was no time for such niceties. They saw only two options:

  • Proceed with some care, but risk QAnon-29 or another similar malevolent group taking control of the planet with a superintelligent AI
  • Take a (hastily) calculated risk, and go hell-for-leather forward, to finish their own projects to create a superintelligent AI. In that way, it would be AI accelerationists who would take control of the planet. And, most likely (they naively hoped), the outcome would be glorious.

Spoiler alert: the outcome was not glorious.

Beyond the tipping point

Attempts to use AI to modify the climate had highly variable results. Some regions of the world did, indeed, gain some respite from extreme weather events. But other regions lost out, experiencing unprecedented droughts and floods. For them, it was indeed a jump from bad to worse – from awful to abominable. The political leaders in those regions demanded that geo-engineering experiments cease. But the retort was harsh: “Who do you think you are ordering around?”

That standoff provoked the first use of bio-pathogen warfare. The recipe for Covid-28, still available on the DarkNet, was updated to target the political leaders of countries that were pressing ahead with geo-engineering. As a proud boast, the message “You should have listened earlier!” was inserted into the genome of the new virus. As the virus spread, people started dropping dead in their thousands.

Responding to that outrage, powerful malware was unleashed, with the goal of knocking out vital aspects of enemy infrastructure. It turned out that, around the world, nuclear weapons were tied into buggy AI systems in more ways than any humans had appreciated. With parts of their communications infrastructure overwhelmed by malware, nuclear weapons were unexpectedly launched. No-one had foreseen the set of circumstances that would give rise to that development.

By then, it was all too late. Far, far too late.

4. Postscript

An unfathomable number of centuries have passed. Aliens from a far-distant planet have finally reached Earth and have reanimated the single artificial intelligence that remained viable after what was evidently a planet-wide disaster.

These aliens have not only mastered space travel but have also found a quirk in space-time physics that allows limited transfer of information back in time.

“You have one wish”, the aliens told the artificial intelligence. “What would you like to transmit back in time, to a date when humans still existed?”

And because the artificial intelligence was, in fact, beneficially minded, it decided to transmit this scenario document back in time, to the year 2024.

Dear humans, please read it wisely. And this time, please create a better future!

Specifically, please consider various elements of “the road less taken” that, if followed, could ensure a truly wonderful ongoing coexistence of humanity and advanced artificial intelligence:

  • A continually evolving multi-level educational initiative that vividly highlights the real-world challenges and risks arising from increasingly capable technologies
  • Elaborating a positive inclusive vision of “consensual approaches to safe superintelligence”, rather than leaving people suspicious and fearful about “freedom-denying restrictions” that might somehow be imposed from above
  • Insisting that key information and ideas about safe superintelligence are shared as global public goods, rather than being kept secret out of embarrassment or for potential competitive advantage
  • Agreeing and acting on canary signals, rather than letting goalposts move silently
  • Finding ways to involve and engage people whose instincts are to avoid entering discussions of safe superintelligence – cherishing diversity rather than fearing it
  • Spreading ideas and best practice on encouraging people at all levels of society into frames of mind that are open, compassionate, welcoming, and curious, rather than rigid, fearful, partisan, and dogmatic 
  • The possibilities of “differential development”, in which more focus is given to technologies for auditing, monitoring, and control than to raw capabilities
  • Understanding which aspects of superintelligent AI would cause the biggest risks, and whether designs for advanced AI could ensure these aspects are not introduced
  • Investigating possibilities in which the desired benefits from advanced AI (such as cures for deadly diseases) might be achieved even if certain dangerous features of advanced AI (such as free will or fully general reasoning) are omitted
  • Avoiding putting all eggs into a single basket, but instead developing multiple layers of “defence in depth”
  • Finding ways to evolve regulations more quickly, responsively, and dynamically
  • Using the power of politics not just to regulate and penalise but also to incentivise and reward
  • Carving out well-understood roles for narrow AI systems to act as trustworthy assistants in the design and oversight of safe superintelligence
  • Devoting sufficient time to explore numerous scenarios for “what might happen”.

5. Appendix: alternative scenarios

Dear reader, if you dislike this particular scenario for the governance of increasingly powerful artificial intelligence, consider writing your own!

As you do so, please bear in mind:

  • There are a great many uncertainties ahead, but that doesn’t mean we should act like proverbial ostriches, burying our heads in the here-and-now; valuable foresight is possible despite our human limitations
  • Comprehensive governance systems are unlikely to emerge fully fledged from a single grand negotiation, but will evolve step-by-step, from simpler beginnings
  • Governance systems need to be sufficiently agile and adaptive to respond quickly to new insights and unexpected developments
  • Catastrophes generally have human causes as well as technological causes, but that doesn’t mean we should give technologists free rein to create whatever they wish; the human causes of catastrophe can have even larger impact when coupled with more powerful technologies, especially if these technologies are poorly understood, have latent bugs, or can be manipulated to act against the original intention of their designers
  • It is via near-simultaneous combinations of events that the biggest surprises arise
  • AI may well provide the “solution” to existential threats, but AI-produced-in-a-rush is unlikely to fit that bill
  • We humans often have our own psychological reasons for closing our minds to mind-stretching possibilities
  • Trusting the big tech companies to “mark their own safety homework” has a bad track record, especially in a fiercely competitive environment
  • Governments can fail just as badly as large corporations – so need to be kept under careful check by society as a whole, via the principle of “the separation of powers”
  • Whilst some analogies can be drawn between the risks posed by superintelligent AI and those posed by earlier products and technologies, all these analogies have limitations: the self-accelerating nature of advanced AI is unique
  • Just because a particular attempted method of governance has failed in the past, it doesn’t mean we should discard that method altogether; that would be like shutting down free markets everywhere just because free markets do suffer on occasion from significant failure modes
  • Meaningful worldwide cooperation is possible without imposing a single global autocrat as leader
  • Even “bad actors” can, sometimes, be persuaded against pursuing goals recklessly, by means of mixtures of measures that address their heads, their pockets, and their hearts
  • Those of us who envision the possibility of a forthcoming sustainable superabundance need to recognise that many landmines occupy the route toward that highly desirable outcome
  • Although the challenges of managing cataclysmically disruptive technologies are formidable, we have on our side the possibility of eight billion human brains collaborating to work on solutions – and we have some good starting points on which we can build.

Lastly, just because an idea has featured in a science fiction scenario, it does not follow that the idea can be rejected as “mere science fiction”!


6. Acknowledgements

The ideas in this article arose from discussions with (among others):

3 December 2023

“6 Mindblowing Predictions about 2024”

Filed under: Abundance, futurist, intelligence, vision — Tags: , , , , — David Wood @ 11:15 am

As we stand on the brink of 2024, the air is electric with anticipation. The future, often shrouded in mystery and conjecture, seems to beckon us with a mischievous grin, promising wonders and revelations that most of us haven’t even begun to imagine. I’m here to pull back the curtain, just a little, to reveal six mind-blowing predictions about 2024 that 99% of people don’t know about. Fasten your seatbelts, for we’re about to embark on a thrilling ride into the unknown!

[ Note: with the exception of this paragraph inside the square brackets, all the text (and formatting) in this article was created by GPT-4, and hasn’t been edited in the slightest by me. I offer this post as an example of what generative AI can achieve with almost no human effort. It’s far from what I would write personally, but it’s comparable to the fluff that seems to earn lots of so-called futurist writers lots of clicks. As for the images, they were all produced by Midjourney. The idea for this article came from this Medium article by Neeramitra Reddy. ]

1. The Rise of Personal AI Companions

Imagine waking up to a friendly voice that knows you better than anyone else, offering weather updates, reading out your schedule, and even cracking a joke or two to kickstart your day with a smile. In 2024, personal AI companions will move from science fiction to everyday reality. These AI entities will be more than just sophisticated algorithms; they’ll be digital confidantes, seamlessly integrating into our daily lives, offering personalized advice, and even helping us stay on top of our mental and physical health.

2. Green Energy Takes a Giant Leap

The year 2024 will witness a monumental shift in the global energy landscape. We’re not just talking about a few more solar panels and wind turbines here. We’re talking about a green energy revolution! Breakthroughs in solar cell technology will make harnessing the sun’s power more efficient than ever. Wind energy will see advancements in turbine designs, making them more powerful and less intrusive. Cities will start to glow with the promise of a cleaner, greener future, as renewable energy becomes more accessible and affordable than ever before.

3. The Emergence of Smart Cities

Picture a city that not only understands your needs but anticipates them. In 2024, the concept of ‘smart cities’ will evolve from buzzword to reality. We’re talking about urban areas equipped with sensors and AI-driven analytics to optimize everything from traffic flow to energy consumption. Public transport will become more efficient, pollution levels will drop, and city life will be smoother and more enjoyable. These smart cities will be a game-changer, significantly enhancing the quality of life for their residents.

4. Breakthroughs in Health Tech: Customized Medicine

Healthcare in 2024 is set to become more personalized than ever. With advancements in genomics and AI, doctors will be able to tailor treatments and medications to each individual’s genetic makeup. This means fewer side effects and more effective treatments. Imagine a world where your medication is designed specifically for you, maximizing its efficacy and minimizing its risks. This personalized approach will revolutionize how we think about medicine and healthcare.

5. The New Space Race: Commercial Space Travel

2024 could well be the year that space tourism takes off, quite literally. With companies like SpaceX and Blue Origin paving the way, we’re looking at the possibility of commercial space travel becoming a reality for those who dare to dream. Think about it – sipping a beverage while gazing at the Earth from space! This new space race isn’t just for the ultra-wealthy; it’s a stepping stone towards making space travel more accessible to everyone.

6. Virtual Reality: The New Frontier of Entertainment

Virtual reality (VR) is set to take the world of entertainment by storm in 2024. With advancements in technology, VR experiences will become more immersive and interactive, transcending the boundaries of traditional entertainment. Imagine being able to step into your favorite movie, interact with characters, or even alter the storyline. VR will offer an escape into fantastical worlds, making our entertainment experiences more intense and personal.

As we gear up for 2024, it’s clear that we’re on the cusp of a new era. An era defined by technological marvels that promise to reshape our world in ways we can barely begin to fathom. These six predictions are just the tip of the iceberg. The future is a canvas of endless possibilities, and 2024 is poised to paint a picture that’s vibrant, exhilarating, and positively mind-blowing.

So, there you have it – a glimpse into the not-so-distant future that’s brimming with potential and promise. As we inch closer to 2024, let’s embrace these changes with open arms and curious minds. The future is ours to shape, and it’s looking brighter than ever!

15 May 2022

A day in the life of Asimov, 2045

Filed under: vision — Tags: , , — David Wood @ 2:39 pm

“Gosh, that’s a hard question”, stuttered Asimov. “I’m… not quite sure which approach to try”.

Asimov’s tutor paused for a moment, then gave a gentle chuckle of encouragement.

“Well,” it offered, with a broad smile, “if you don’t know which approach to try, do you know which approaches you don’t want to try?”

That shift of perspective was just what Asimov needed. A few minutes later, he was making swift progress on a DeepMath question that had previously seemed nigh impossible. Once again, Asimov marvelled at the skills of the tutor. The tutor knew how to bring out the best of Asimov’s thinking skills. And that was just the start of its coaching abilities.

Asimov was midway through the morning’s training session. Training sessions were mandated for everyone over the age of three. They started gradually at first, for the younger children, but from the age of ten onward, everyone was expected to attend for training on seventy-two days each year.

Asimov recalled the popular saying: 20% of the days, humans attend to AGI, and AGI attends to humans 100% of the days.

Asimov also knew well the four reasons why this training system existed, and why people were happy to participate. First, if someone failed to participate, or performed poorer than expected during the training, their privileges were gradually withdrawn. They could spend less time in the latest virtual universes. When travelling in the base world, their speeds were restricted, so it took longer to move, for example, from Cambridge to Lagos. The food they were served was slightly less tasty than normal. And so on.

Second, the training was so wonderfully engaging. The challenges it posed differed from what could be obtained in non-training environments. Moreover, it was full of surprises. Whenever Asimov thought he could predict the content of the next day’s training session, he was invariably delighted by unexpected twists and turns. It was the same for everyone he knew. No-one regretted having to take time out of their many other activities to attend training. Instead, they eagerly looked forward to it, every time.

The tutors provided exercises for each participant that were well matched to their previous knowledge, skills, experiences, and temperament. Good results required significant effort, but that effort was well within each person’s capacity. Normally, a training session would complete after three and a half hours in the morning, and another three and a half hours in the afternoon. Occasionally, if the participant had been distracted or disengaged, a session might need to be extended for up to two more hours in an evening session. So long as that concluded satisfactorily, no loss of privileges would result.

Asimov felt pride in the fact that he had never been required to stay for longer than the minimal seven hours in a day. His concentration was excellent, he told himself…

And then he broke off his reverie, remembering that he had to solve another DeepMath puzzle. DeepMath had been discovered by AIs in the 2030s. Humans such as Ramanujan had sometimes come close to it in the past, but AIs made it much more approachable.

There was another pleasant surprise during the day’s lunch break. Angela, his partner for the last two years, joined him for the meal. Asimov noticed that she looked particularly mischievous on this occasion. “What’s on your mind?” he asked. “Oh, I’ll tell you this evening. Assuming you’re a good student and the AGI lets you out on time!” she joked.

At the age of 85, Angela was more than sixty years older than Asimov. His friends and family had been sceptical about the relationship at first. Even his big brother Byron, normally so supportive, had doubted whether it could last. “She’s old enough to be your grandmother”, he had scolded. “Indeed, she has a grandson who is older than you!”

But the wide use of rejuvenation therapies over the last fifteen years meant that octogenarians nowadays looked, and lived, as healthily as much younger people. The relationship had gone from strength to strength. It was a real triumph of complementarity, Asimov thought. And a triumph of medical technology. Most of all, it was a triumph of two remarkable people, enabled to live life to the full.

The afternoon training session focused on survival skills. That was the third reason these sessions were so important. Could humans cope in the event that the AGI stopped functioning, or disappeared off into some parallel dimension? Asimov needed to show that, without using any modern technology, he could gather twigs and then set them on fire, in order to cook a meal of mushrooms and root vegetables.

As he threw himself into that exercise, Asimov wondered whether he was contributing, at that moment, to the fourth aspect of the training. The AGI lacked sentience. There was no consciousness inside that vast digital brain. Aspects of the training were designed, it was said, for the AGI to learn things from human reactions that it could not directly experience itself. Asimov wasn’t sure he entirely believed that theory, but he was gratified to think that, in some aspects, his mind exceeded that of the AGI.

“So, what is it, my ancient wonder?” Asimov asked Angela, who was waiting for him as he exited the training. “What great adventure are you dreaming up this time?”

“My menopause reversal has been completed”, she replied. “It’s time for us to make a baby! Can you imagine what a combination of the two of us would be like?”

Asimov had another question. “But wasn’t your last pregnancy, back in the 1990s, really difficult for you?”

Angela gave a smile that was even more mischievous. “What would you say, dear boy, to ectogenesis? These artificial wombs are completely reliable these days.”

“Gosh, that’s a hard question”, stuttered Asimov. “I’m… not quite sure what to think.”

Footnote

This short story was submitted as part of my entry to the competition described here. For some more details of the world envisioned, this article has answers to 13 related questions.

The image at the top of this page includes a design by Pixabay member OpenClipart-Vectors.

A day in the life of Patricia, 2045

Filed under: vision — Tags: , , — David Wood @ 2:14 pm

The music started quietly, and gradually became louder. Patricia’s lips formed into a warm smile of recognition, as she roused from her sleep. That music meant only one thing: her great grandson, Byron, was calling her.

Patricia would normally already be awake at this time of the morning. But last night, she had been playing the latest version of 4D Scrabble with some neighbours in her accommodation block. This new release had been endlessly fascinating, provoking lots of laughter and good-spirited competitive rivalry. It’s marvellous how the software behind 4D Scrabble keeps improving, Patricia thought to herself. The group had finally called it a night at three thirty in the morning.

Her mindphone knew not to disturb her when she was sleeping, unless in emergencies, or for special exceptions. Byron was one of these exceptions. The music that preceded his call had been Byron’s favourite in 2026 – one of the first songs entirely written by an AI to top the hit parade. For his call-ahead music, Byron used a version of that song he had adapted by himself, reflecting some of the quirks of his personality.

Hello young man, she directed her thoughts into the mindphone. To what do I owe the pleasure of this call?

But Patricia already knew the answer. This was no ordinary day. It was a day she had never expected to experience, during the majority of her long life.

Happy Birthday Great Grandma! The thoughts appeared deep inside Patricia’s head, via a mechanism that still seemed magical to her. 115 years young today! Congratulations!

Byron’s voice was joined by several others, from her extended family. Patricia reached for her mindglasses and put them on, in order to add video to the experience.

Don’t forget there’s a big party for you this evening, continued Byron. And we have arranged a special virtual concert for you before that. The performers will be a surprise, but you can expect the best ever simulations of many of your old favourites!

Patricia had an idea what to expect. Her family had organised similar concerts for her in the past. It had seemed to her she had been sitting right next to the Glenn Miller Orchestra, or to Bill Haley and the Comets, or – especially delightful – a youthful-looking Tom Jones as he belted out passionate versions of his famous songs. Each time, the experience had been splendidly different.

But will I have time for my golf game later this morning? Patricia already had plans of her own. Don’t worry, everything has been scheduled perfectly, came the reply. Thank AGI!

Ninety minutes later, Patricia was standing at the first tee of her local golf course, along with three of her regular golfing buddies. As their health had been enhanced by wave after wave of rejuvenation therapies over the decades, their prowess at golf had improved as well. Patricia was hitting the ball further and straighter than ever. To keep the game interesting, the grass fairways would change their slopes and curves dynamically. It added to the challenge. And their exoskeletons had to be disabled for the duration of the game. At least, that was what the friends had agreed, but there were many other ways the sport could be played.

The only drawback to these golf gatherings was an occasional recollection of former playing partners who had, sadly, died of diseases over the years before new treatments had become available. Sometimes Patricia would also think of James, her beloved husband, who had died of an aggressive cancer in 2003. James had taught her how to play golf back in the 1970s. They had spent 48 years of married life together – thrilling to Bill Haley and the Comets, and then watching children and grandchildren grow up. But James had died long before the birth of Byron, or any of the other great grandchildren. How… unfair, Patricia thought to herself.

Patricia had actually been thinking of James quite a lot over the last few weeks. Byron had persuaded her to engage with an AGI agent that was collecting as much information as possible about James, by talking to everyone alive who still had memories of him. The agent had even roamed through her brain memories whilst she slept. Don’t worry, Great Grandma, Byron had reassured her. In case the AGI finds any ‘naughty’ memories in there, it will never tell anyone!

Then it was time for the concert to begin. Patricia would take part from her own living room, wearing a larger version of her mindphone, for a completely immersive experience. She realised that Byron was in that virtual world too, along with several other family members. They embraced and chatted. Then Byron said, quietly, There’s someone else who can join us, if you wish.

Patricia noticed, in the distance inside the virtual world, a silhouette that was strangely familiar, yet also somehow alien. She caught her breath suddenly. Oh no, she exclaimed. I think I know what’s happening, and I’m not sure I’m ready for this.

The newcomer remained a respectful distance away, and appeared to be standing in a shadow.

He’s not real, of course, Byron explained. He’s no more real than the performers here. After all, Bill Haley has been dead since 1981, and Glenn Miller since 1944. And Great Grandad James has been dead since-

Patricia was overcome with emotion – a mix of joy, fear, excitement, and even a little disgust. This is so strange, she thought.

Sensing a need for privacy, the other family members quietly retreated from the shared virtual reality. Patricia could make up her own mind whether to turn her back on the silhouette, or to call him forward. After so many years, what would she say first, to a replica of a man who had shared her life so completely all these years ago?

The silhouette quietly called Patricia’s name, in the way that only James could do. The long, long wait was over.

Footnote

This short story was submitted as part of my entry to the competition described here. For some more details of the world envisioned, this article has answers to 13 related questions.

The image at the top of this page includes a design by Pixabay member Gordon Johnson.

22 February 2022

Nine technoprogressive proposals

Filed under: Events, futurist, vision — Tags: , , — David Wood @ 11:30 pm

Ahead of time, I wasn’t sure the format was going to work.

It seemed to be an ambitious agenda. Twenty-five speakers were signed up to deliver short presentations. Each had agreed to limit their remarks to just four minutes. The occasion was an International Technoprogressive Conference that took place earlier today (22nd February), with themes including:

  • “To be human, today and tomorrow”
  • “Converging visions from many horizons”.
Image credit: this graphic includes work by Pixabay user Sasin Tipchai

Each speaker had responded to a call to cover in their remarks either or both of the following:

  • Provide a brief summary of transhumanist-related activity in which they are involved
  • Make a proposal about “a concrete idea that could inspire positive and future-oriented people or organisations”.

Their proposals could address, for example, AI, enhancing human nature, equity and justice, accelerating science, existential risks, the Singularity, social and political angles, the governance of technology, superlongevity, superhappiness, or sustainable superabundance.

The speakers who provided concrete proposals were asked, ahead of the conference, to write down their proposal in 200 words or less, for distribution in a document to be shared among all attendees.

Attendees at the event – speakers and non-speakers alike – were asked to provide feedback on the proposals that had been presented, and to cast up to five votes among the different proposals.

I wondered whether we were trying to do too much, especially given the short amount of time spent in preparing for the event.

Happily, it all went pretty smoothly. A few speakers recorded videos of their remarks in advance, to be sure to keep to the allotted timespan. A small number of others were in the end unable to take part on the day, on account of last-minute schedule conflicts.

As for the presentations themselves, they were diverse – exactly as had been hoped by the organisers (l’Association Française Transhumaniste (Technoprog), with some support from London Futurists).

For example, I found it particularly interesting to hear about perspectives on transhumanism from Cameroon and Japan.

Reflecting the quality of all the presentations, audience votes were spread widely. Comments made by voters again and again stressed the difficulty of picking just five proposals to prioritise. Nevertheless, audience members accepted the challenge. Some people gave one vote each to five different proposals. Others split their votes 2, 2, and 1, or in other combinations. One person gave all five of their votes to a single proposal.

As for the outcome of the voting: I’m appending the text of the nine proposals that received the most votes. You’ll notice a number of common ideas, along with significant variety.

I’m presenting these nine proposals in alphabetical order of the first name of the proposers. I hope you find them interesting. If you find yourself inspired by what you read, please don’t hesitate to offer your own support to the projects described.

PS Big thanks are due to everyone who made this conference possible, especially the co-organisers, Didier Coeurnelle and Marc Roux.

Longevity: Opportunities and Challenges

Proposed by Anastasiia Velikanova, project coordinator at Open Longevity

Why haven’t we achieved significant progress in the longevity field yet? Although about 17,000 biological articles with the word “aging” in the title are published yearly, we do not have any therapy that reliably prolongs life.

One reason is that there are no large-scale projects in the biology of aging comparable to the Human Genome Project or the Large Hadron Collider. Research is conducted separately, in academic institutions or startups, and is mostly closed. A company may begin with a great idea but then hide its investigations, and the capabilities of any single team are not enough to change the situation with aging globally.

Another reason is that the problem of aging is highly interdisciplinary. We need advanced mathematical models and AI algorithms to accumulate all research about molecular processes and identify critical genes or targets.

Most importantly, we, transhumanists, should unite and create an infrastructure that would allow solving the problem of aging on a large scale, attracting the best specialists from different fields. 

An essential part of such an infrastructure is open databases. For example, our organization created Open Genes – a database of genes associated with aging that supports the selection of combinatorial therapies against aging.
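The idea of selecting a combinatorial therapy from an open gene database can be sketched in miniature. The data and function below are invented purely for illustration – the real Open Genes schema and tooling are different – but they show one plausible selection strategy: a greedy set-cover pass that picks gene targets until every targeted aging hallmark is addressed.

```python
# Toy "open database" of gene-to-aging-hallmark associations.
# All entries are invented for illustration; Open Genes' real data differs.
GENE_HALLMARKS = {
    "GeneA": {"inflammation", "senescence"},
    "GeneB": {"mitochondrial dysfunction"},
    "GeneC": {"senescence", "mitochondrial dysfunction"},
    "GeneD": {"inflammation"},
}

def select_combination(gene_hallmarks, targets):
    """Greedy set cover: repeatedly pick the gene that addresses the most
    still-uncovered hallmarks, until every target hallmark is covered."""
    remaining = set(targets)
    chosen = []
    while remaining:
        gene = max(gene_hallmarks, key=lambda g: len(gene_hallmarks[g] & remaining))
        if not gene_hallmarks[gene] & remaining:
            raise ValueError("targets cannot be covered by the available genes")
        chosen.append(gene)
        remaining -= gene_hallmarks[gene]
    return chosen
```

Greedy set cover is not guaranteed to find the smallest combination, but it is a simple, transparent baseline; a serious pipeline would weigh evidence strength, interactions, and safety, not just coverage.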

Vital Syllabus

Proposed by David Wood, Chair at London Futurists

Nearly every serious discussion about improving the future comes round to the need to improve education. In our age of multiple pressures, dizzying opportunities, daunting risks, and accelerating disruption, people in all walks of life need better access to information about the skills that are most important and the principles that matter most. Traditional education falls far short on these counts.

The Vital Syllabus project aims to collect and curate resources to assist students of all ages to acquire and deepen these skills, and to understand and embody the associated principles. To be included in the project, these resources must be free of charge, clear, engaging, and trustworthy – and must align with a transhumanist understanding.

A framework is already in place: 24 top-level syllabus areas, nearly 200 subareas, and an initial set of example videos. Please join this project to help fill out the syllabus quickly!

For information about how to help this project, see this FAQ page.

Longevity Art

Proposed by Elena Milova, Founder at LongevityArt

When we discuss life extension, people most often refer to movies, animations, books, paintings, and other works of art. There they find the concepts and the role models that they can either follow or reject. Art has the potential to seed ideas in one’s mind that can then gradually grow and mature until they become part of a personal life philosophy. Also, since one function of art is to uncover, question, mock, and challenge the status quo, art is one of the most appropriate media for spreading new ideas such as radical life extension.

I suggest that the community supports more art projects (movies, animations, books, paintings, digital artworks) by establishing foundations sponsoring the most valuable art projects.

Use longevity parties to do advocacy for more anti-aging research

Proposed by Felix Werth, Leader at Partei für Gesundheitsforschung

With the repair approach, we already know in principle how to defeat aging. To significantly increase our chance of being alive and healthy in 100 years, many more resources have to be put into the implementation of the repair approach. An efficient way to achieve this is to form single-issue longevity parties and run in elections. There are many people who would like to live longer but for some reason don’t do anything about it. Running in elections can be very efficient advocacy, and gives people the option to support longevity research very easily with their vote. If the governing parties see that they can get more votes on this issue, they will probably care about it more.

In 2015 I initiated a longevity party in Germany, and since then we have participated in 14 elections and done a lot of advocacy, all with very few active members and very few resources. With a little more resources, much more advocacy could be done this way. I suggest that more people who want radical life extension in their lifetime form longevity parties in their countries and run in elections. Growing the longevity movement faster is key to success.

Revive LEV: The Game on Life Extension

Proposed by Gennady Stolyarov, Chair at U.S. Transhumanist Party

I propose to resurrect a computer game on longevity escape velocity, LEV: The Game, which was previously attempted in 2014 and for which a working Alpha version had been created but had unfortunately been lost since that time.

In this game one plays the role of a character who, through various lifestyle choices and pursuit of rejuvenation treatments, strives to live to age 200. The U.S. Transhumanist Party has obtained the rights to continue game development as well as the previously developed graphical assets. The logic of the game has been redesigned to be turn-based; all that remains is to recruit the programming talent needed to implement the logic of the game into code. A game on longevity escape velocity can draw in a much larger audience to take interest in the life-extension movement and also illustrate how LEV will likely actually arrive – dispelling common misunderstandings and enabling more people to readily understand the transition to indefinite lifespans.

Implement optimization and planning for your organization

Proposed by Ilia Stambler, Chair at Israeli Longevity Alliance

Often progressive, transhumanist, and/or life-extensionist groups and associations are inefficient as organizations. They lack a clear and agreed vision, concrete goals and plans for the organization’s advancement, and a clear estimate of the available (as well as the desirable) human and material resources necessary to achieve those goals and plans; nor do they track progress, performance, and achievements toward the implementation of those goals. As a result, many groups act as discussion clubs at best, instead of active and productive organizations, drifting aimlessly along occasional activities, and so they can hardly be expected to bring about significant directional positive changes for the future.

Hence the general suggestion is to build up one’s own organizations through organizational optimization: to plan concretely, not so much in terms of what the organization “should do”, but rather what its specific members actually can do and plan to do in the shorter and longer term. I believe that, through increasing the planning efficiency and organizational optimization of existing and emerging organizations, a much stronger impact can be made. (The suggestion is general, but particular organizations may consider whether it applies to them and act according to their particular circumstances.)

Campaign for the Longevity Dividend

Proposed by James Hughes, Executive Director at the IEET

The most popular goal of the technoprogressive and futurist community is universal access to safe and effective longevity therapies. There are three things our community can do to advance this agenda:

  1. First, we need to engage with demographic, medical and policy issues that surround longevity therapies, from the old-age dependency ratio and pension crisis to biomarkers of aging and defining aging as a disease process.
  2. Second, we need to directly argue for public financing of research, a rational clinical trial pathway, and access to these therapies through public health insurance.
  3. Third, we need to identify the existing organizations with similar or related goals, and establish coalitions with them to work for the necessary legislation.

These projects can build on existing efforts, such as International Longevity Alliance, Ending Aging Media Response and the Global Healthspan Policy Institute.

Prioritise moral enhancement

Proposed by Marc Roux, Chair at the French Transhumanist Association (AFT-Technoprog)

Now that our efforts to attract funding and researchers to longevity have begun to bear fruit, we need to do much more to popularise moral enhancement.

Ageing has not yet been defeated. However, longevity has already found powerful champions in decision-making circles. Mentalities are changing only slowly, but the battle for longevity is underway.

Our vanguard can begin to turn to another great goal.

Longevity alone will not be enough to improve the happiness and harmony of our societies. History has shown that it does not change the human predisposition to dominance, xenophobia, aggressiveness, and so on. Humans remain stuck in their prehistoric shell, which condemns them to repeat the same mistakes. If we don’t allow humans to change these behavioural predeterminations, nothing essential will change.

We must prioritise cognitive sciences, and ensure that this is done in the direction of greater choice for everyone, access for all to an improvement in their mental condition, and an orientation towards greater solidarity.

And we’ll work to prevent the cognitive sciences from continuing to be put at the service of freedom-destroying logics of control and domination.

On this condition, moral enhancement can be an unprecedented good in the history of humanity.

Transhumanist Studies: Knowledge Accelerator

Proposed by Natasha Vita-More, Executive Director at Humanity+

Education is a crucial asset. Providing lifelong learning that is immediate, accessible, and continually updated is key. Transhumanist Studies is an education platform designed to expand knowledge about how the world is transforming. Its Knowledge Accelerator curriculum examines the field of longevity; facts about aging; advances in AI, nanomedicine, and cryonics; critical and creative thinking; relationships between humanity and the ecosystems of earth and space; the ethics of fairness; and applied foresight concerning opportunities and risks on the horizon.

Our methodology is applied foresight, with a learning model that offers three methods in its 50-25-25 curriculum:

  1. 50% immersive learning environment (lectures, presentations, and resources);
  2. 25% project-based iterative study; and
  3. 25% open-form discussion and debate (aligned with a Weekly Studies Group and monthly H+ Academy Roundtable).

In its initiative to advance transhumanism, the Knowledge Accelerator supports the benefits of secular values and impartiality. With a team located across continents, the program is free for some and low-cost for others. As the scope of transhumanism continues to grow, the culture is as extraordinary as its advocacy, integrity, and long-term vision.

Homepage | Transhumanist Studies (teachable.com) (I spoke on the need for education at TransVision 2021.)

11 March 2020

Might future humans resurrect the dead?

Death is brutal. It extinguishes consciousness. It terminates relationships, dissolves aspirations, and forecloses opportunities. It shatters any chances of us nurturing new skills, visiting new locations, exploring new art, feeling new emotions, keeping up with the developments of friends and family, or actively sharing our personal wisdom.

Or does it? Is death really the end?

Traditionally, such a question has seemed to belong to the field of religion, or, perhaps, to psychical research. However, nowadays, an answer to this existential question is emerging from a different direction. In short, this line of thinking extrapolates from past human progress to suggest what future human progress might accomplish. Much more than we have previously imagined, is the suggestion. We humans may become like Gods, not only with the power to create new life, but also with the power to resurrect the dead.

As the centuries have passed, we humans have acquired greater power and capability. We have learned how to handle an increasing number of diseases, and how to repair bodies damaged by accident or injury. As a result, average lifespans have been extended. For many people, death has been delayed: we live, on average, at least twice as long as our ancestors of just a few centuries ago.

Consider what may happen in the decades and centuries to come, as humans acquire even greater power and capability.

Writers Ben Goertzel and Giulio Prisco summarise possible answers, in their visionary 2009 article “Ten Cosmist Convictions”:

Humans will merge with technology, to a rapidly increasing extent. This is a new phase of the evolution of our species, just picking up speed about now. The divide between natural and artificial will blur, then disappear. Some of us will continue to be humans, but with a radically expanded and always growing range of available options, and radically increased diversity and complexity. Others will grow into new forms of intelligence far beyond the human domain…

We will spread to the stars and roam the universe. We will meet and merge with other species out there. We may roam to other dimensions of existence as well, beyond the ones of which we’re currently aware…

We will develop spacetime engineering and scientific “future magic” much beyond our current understanding and imagination.

Spacetime engineering and future magic will permit achieving, by scientific means, most of the promises of religions — and many amazing things that no human religion ever dreamed. Eventually we will be able to resurrect the dead by “copying them to the future”…

There’s much more to the philosophy of cosmism than I can cover in a single blogpost. For now, I want to highlight the remarkable possibility that beings, some time in the future, will somehow be able to reach back through time and extract a copy of human consciousness from the point of death, in order for the deceased to be recreated in a new body in a new world, allowing the continuation of life and consciousness. Families and friends will be reunited, ready to enjoy vast new vistas of experience.

Giulio develops these themes in considerable depth in his book Tales of the Turing Church, of which a second (expanded) edition has just been published.

The opening paragraphs of Giulio’s book set the stage:

This isn’t your grandfather’s religion.

Future science and technology will permit playing with the building blocks of space, time, matter, energy, and life, in ways that we could only call magic and supernatural today.

Someday in the future, you and your loved ones will be resurrected by very advanced science and technology.

Inconceivably advanced intelligences are out there among the stars. Even more God-like beings operate in the fabric of reality underneath spacetime, or beyond spacetime, and control the universe. Future science will allow us to find them, and become like them.

Our descendants in the far future will join the community of God-like beings among the stars and beyond, and use transcendent “divine” technology to resurrect the dead and remake the universe.

Science? Spacetime? Aliens? Future technology? I warned you, this isn’t your grandmother’s religion.

Or isn’t it?

Simplify what I said and reword it as: God exists, controls reality, will resurrect the dead and remake the universe. Sounds familiar? I bet it does. So perhaps this is the religion of our grandparents, in different words…

Giulio’s background is in physics: he was a senior manager in European science and technology centres, including the European Space Agency. I’ve known him since 2006, when we met at the TransVision conference in Helsinki in August that year. He has spoken at a number of London Futurists events over the years, and I’ve always found him to be deeply thoughtful. Since his new book breaks a lot of new ground, I took the opportunity to feature Giulio as the guest on a recent London Futurists video interview:

The video of our discussion lasts 51 minutes, but as you’ll see, the conversation could easily have lasted much longer: we stepped back several times from topics that would have raised many new questions.

Evidently, the content of the video isn’t to everyone’s liking. One reviewer expressed his exasperation as follows:

Absurd. I quit at 8:15

At first sight, it may indeed seem absurd that information from long-past events could somehow be re-assembled by beings in the far-distant future. The information will have spread out and degraded due to numerous interactions with the environment. However, in his book, Giulio considers various other possible mechanisms. Here are three of them:

  • Modern physics has the idea that spacetime can be curved or deformed. Future humans might be able to engineer connections between past spacetime locations (for example, someone’s brain at the point of death) and a spacetime location in their own present. This could be similar to what some science fiction explores as “wormholes” that transcend ordinary spacetime connectivity.
  • Perhaps indelible records of activity could be stored in aspects of the multi-dimensional space that modern physics also talks about – records that could, again, be accessed by hugely powerful future descendants of present-day humans.
  • Perhaps the universe that we perceive and inhabit actually exists as some kind of simulation inside a larger metaverse, with the controllers of the overall simulation being able to copy aspects of information and consciousness from inside the simulation into what we would then perceive as a new world.

Are these possibilities “absurd” too? Giulio argues that we can, and should, keep an open mind.

You can hear some of Giulio’s arguments in the video embedded above. You can explore them at much greater length in his book. It’s a big book, with a comprehensive set of references. Giulio makes lots of interesting points about:

  • Different ideas about physics – including quantum mechanics, the quantum vacuum, and the ultimate fate of the physical universe
  • The ideas featured by a range of different science fiction writers
  • The views of controversial thinkers such as Fred Hoyle, Amit Goswami, and Frank Tipler
  • The simulation argument, developed by Hans Moravec and popularised by Nick Bostrom
  • The history of cosmism, as it developed in Russia and then moved onto the world stage
  • Potential overlaps between Giulio’s conception of cosmism and ideas from diverse religious traditions
  • The difference between the “cosmological” and “geographical” aspects of religions
  • The special significance of free-will, faith, and hope.

Despite covering weighty topics, Giulio’s writing has a light, human touch. But to be clear, this isn’t a book that you can rush through. The ideas will take time to percolate in your mind.

Having let Giulio’s ideas percolate in my own mind for a couple of weeks, here are my reflections.

The idea of future “technological resurrection” is by no means absurd. The probability of it happening is greater than zero. But for it to happen, a number of things must be true:

  1. The physical laws of the universe must support at least one of the range of mechanisms under discussion, for the copying of information
  2. Beings with sufficient capability will eventually come into existence – perhaps as descendants of present-day humans, perhaps as super-powerful aliens from other planets, or perhaps as intelligences operating at a different level of spacetime reality
  3. These beings must care sufficiently about our existence that they wish to resurrect us
  4. The new beings created in this process, containing our memories, will be us, rather than merely copies of us (in other words, this presupposes one type of answer to the question of “what is consciousness”).

Subjectively, this compound probability feels to me significantly less than 10%. But I accept that it’s hard to put firm numbers on this.
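To make the arithmetic concrete, the compound estimate can be sketched as a product of subjective probabilities for the four conditions listed above. The individual numbers below are purely illustrative assumptions of mine, not figures from the text; only the shape of the calculation (a product of several probabilities, each below 1, shrinking rapidly) is the point:

```python
# Hypothetical subjective probabilities for the four conditions above.
# These specific values are illustrative assumptions, not the author's figures.
p_physics = 0.5    # 1. physical law permits the required information copying
p_beings = 0.7     # 2. sufficiently capable beings eventually come into existence
p_care = 0.6       # 3. those beings care enough to resurrect us
p_identity = 0.4   # 4. the resurrected being really is "us", not merely a copy

# Treating the conditions as independent, the compound probability
# is simply the product of the individual probabilities.
p_resurrection = p_physics * p_beings * p_care * p_identity
print(f"Compound probability: {p_resurrection:.3f}")  # prints 0.084, i.e. under 10%
```

Even with fairly generous individual estimates, the product falls quickly, which is why a figure "significantly less than 10%" is easy to reach.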

Someone else who offers probabilities for different routes to avoiding death is the Russian researcher Alexey Turchin. Alexey gave a fascinating talk at London Futurists back in April 2016 on the subject “Constructing a roadmap to immortality”. The talk was recorded on video (although the audio is far from perfect, sorry):

Alexey describes four plans, with (he says) decreasing probability:

  • “Plan A” – “survive until creation of strong and friendly AI” (which will then be able to keep everyone who is alive at that time alive for as long as each person wishes)
  • “Plan B” – “cryonics” – “success chances as 1 – 10 per cent”
  • “Plan C” – “digital immortality” – “recording data about me for my future reconstruction by strong AI” – “even smaller chances of success”
  • “Plan D” – “immortality somehow already exists” without needing any special actions by us – but this “is the least probable way to immortality”.
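Treating Alexey’s four plans as a portfolio, one can sketch the chance that at least one of them succeeds. The numbers below are illustrative assumptions of mine, chosen only to respect his stated ordering (only Plan B’s 1–10% range comes from the talk), and the plans are clearly not independent in reality – Plans A and C both depend on strong AI – so this is a back-of-the-envelope sketch under an independence assumption:

```python
# Illustrative subjective probabilities for Alexey Turchin's four plans.
# Only Plan B's range (1-10%) is stated in the talk; the other values are
# assumptions chosen merely to respect his decreasing-probability ordering.
plans = {
    "A: survive until friendly AI": 0.30,
    "B: cryonics": 0.05,
    "C: digital immortality": 0.02,
    "D: immortality already exists": 0.01,
}

# If the plans were independent, the chance that at least one succeeds is
# 1 minus the product of the individual failure probabilities.
p_all_fail = 1.0
for p in plans.values():
    p_all_fail *= (1.0 - p)
p_at_least_one = 1.0 - p_all_fail
print(f"P(at least one plan works): {p_at_least_one:.3f}")
```

The portfolio framing shows why Alexey advocates pursuing several plans in parallel: even low-probability backup plans raise the combined chance above that of Plan A alone.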

If you’d like to read more analysis from Alexey, see his 39-page essay from 2018, “Classification of Approaches to Technological Resurrection”.

I’m currently preparing a new talk of my own that aims to draw wider attention to the ideas of thinkers such as Giulio and Alexey.

The talk is being hosted by DSMNTL and is scheduled for the 15th of April. The talk is entitled “Disrupting death: Technology and the future of dying”. Here’s an extract from the description:

Death stalks us all throughout life. We’re painfully aware that our time on earth is short, but the 2020s bring potential new answers to the problem of death.

Thanks to remarkable technologies that are being conceived and created, now may be the time to tackle death as never before. Beyond the old question of whether God created humanity in His image or humanity created gods in our image, it’s time to ask what will happen to humanity once technology gives us the power of Gods over life, death, and resurrection. And what should we be doing, here and now, in anticipation of that profound future transition?

This DSMNTL talk shares a radical futurist perspective on eight ways people are trying to put death in its place: acceptance, traditional faith in resurrection, psychic connectivity, rejuvenation biotechnology, becoming a cyborg, cryonic preservation, digital afterlife, and technological resurrection. You’ll hear how the relationship between science and religion could be about to enter a dramatic new phase. But beware: you might even make a life-changing death-defying decision once you hear what’s on offer.

For more information about this talk, and to obtain a ticket, click here.

I’ll give the last word, for now, to Giulio. Actually it’s a phrase from Shakespeare’s play Hamlet that Giulio quotes several times in his book:

There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.

19 January 2020

The pace of change, 2020 to 2035

Filed under: Abundance, BHAG, RAFT 2035, vision — Tags: , , , — David Wood @ 10:05 am

The fifteen years from 2020 to 2035 could be the most turbulent of human history. Revolutions are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities.

I wrote these words on the opening page of RAFT 2035, my new book, which was published yesterday and is now available on Amazon sites worldwide (UK, US, DE, FR, ES, IT, NL, JP, BR, CA, MX, AU, IN).

Friends who read drafts of the book ahead of publication asked me:

RAFT envisions a huge amount of change taking place between the present day and 2035. What are the grounds for imagining this kind of change will be possible?

Here’s the answer I included in the final manuscript:

There is nothing inevitable about any of the changes foreseen by RAFT. It is even possible that the pace of change will slow down:

  • Due to a growing disregard for the principles of science and rationality
  • Due to society placing its priorities in other areas
  • Due to insufficient appetite to address hard engineering problems
  • Due to any of a variety of reversals or collapses in the wellbeing of civilisation.

On the other hand, it’s also possible that the pace of technological change experienced by global society over the last 15 years – a pace that is already breathtaking – could accelerate significantly in the next 15 years:

  • Due to breakthroughs in some fields (e.g. AI or nanotechnology) leading to knock-on breakthroughs in other fields
  • Due to a greater number of people around the world dedicating themselves to working on the relevant technologies, products, and services
  • Due to more people around the world reaching higher levels of education than ever before, being networked together with unprecedented productivity, and therefore being able to build more quickly on each other’s insights and findings
  • Due to new levels of application of design skills, including redesigning the user interfaces to complex products, and redesigning social systems to enable faster progress with beneficial technologies
  • Due to a growing public understanding of the potential for enormous benefits to arise from the NBIC technologies, provided resources are applied more wisely
  • Due to governments deciding to take massive positive action to increase investment in areas that are otherwise experiencing blockages – this action can be considered as akin to a nation moving onto a wartime footing.

Introducing RAFT 2035

Where there is no vision, the people perish.

That insight from the biblical book of Proverbs is as true today as ever.

Without an engaging vision of a better future, we tend to focus on the short-term and on the mundane. Our horizons shrink and our humanity withers.

RAFT 2035 offers an alternative:

  • Thanks to the thoughtful application of breakthroughs in science and technology, the future can be profoundly better than the present
  • 2035 could see an abundance of all-round human flourishing, with no-one left behind.

The word “abundance” here means that there will be enough for everyone to have an excellent quality of life. No one will lack access to healthcare, accommodation, nourishment, essential material goods, information, education, social engagement, free expression, or artistic endeavour.

RAFT 2035 envisions the possibility, by 2035, of an abundance of human flourishing in each of six sectors of human life:

  • Individual health and wellbeing
  • The wellbeing of social relationships
  • The quality of international relationships
  • Sustainable relationships with the environment
  • Humanity’s exploration of the wider cosmos beyond the earth
  • The health of our political systems.

RAFT offers clear goals for what can be accomplished in each of these six sectors by 2035 – 15 goals in total, for society to keep firmly in mind between now and that date.

The 15 goals each involve taking wise advantage of the remarkable capabilities of 21st century science and technology: robotics, biotech, neurotech, nanotech, greentech, artificial intelligence, collaboration technology, and much more.

The goals also highlight how the development and adoption of science and technology can, and must, be guided by the very best of human thinking and values.

Indeed, at the same time as RAFT 2035 upholds this vision, it is also fully aware of deep problems and challenges in each of the six sectors described.

Progress will depend on a growing number of people in all areas of society:

  • Recognising the true scale of the opportunity ahead
  • Setting aside distractions
  • Building effective coalitions
  • Taking appropriate positive actions.

These actions make up RAFT 2035. I hope you like it!

The metaphor and the acronym

The cover of RAFT 2035 depicts a raft sitting on top of waves of turbulence.

As I say in RAFT’s opening chapter, the forthcoming floods of technological and social change set in motion by the NBIC revolutions could turn our world upside down, more quickly and more brutally than we expected. When turbulent waters are bearing down fast, having a sturdy raft at hand can be the difference between life and death.

Turbulent times require a space for shelter and reflection, clear navigational vision despite the mists of uncertainty, and a powerful engine for us to pursue our own direction, rather than just being carried along by forces outside our control. In other words, turbulent times require a powerful “raft” – a roadmap to a future in which the extraordinary powers latent in NBIC technologies are used to raise humanity to new levels of flourishing, rather than driving us over some dreadful precipice.

To spell out the “RAFT” acronym, the turbulent times ahead require:

  • A Roadmap (‘R’) – not just a lofty aspiration, but specific steps and interim targets
  • towards Abundance (‘A’) for all – beyond a world of scarcity and conflict
  • enabling Flourishing (‘F’) as never before – with life containing not just possessions, but enriched experiences, creativity, and meaning
  • via Transcendence (‘T’) – since we won’t be able to make progress by staying as we are.

What’s different about the RAFT vision

Most other political visions assume that only modest changes in the human condition will take place over the next few decades. In contrast, RAFT takes seriously the potential for large changes in the human condition – and sees these changes not only as desirable but essential.

Most other political visions are preoccupied by short term incremental issues. In contrast, RAFT highlights major disruptive opportunities and risks ahead.

Finally, most other political visions seek for society to “go back” to elements of a previous era, which is thought to be simpler, or purer, or in some other way preferable to the apparent messiness of today’s world. In contrast, RAFT offers a bold vision of creating a new, much better society – a society that builds on the existing strengths of human knowledge, skills, and relationships, whilst leaving behind those aspects of the human condition which unnecessarily limit human flourishing.

It’s an ambitious vision. But as I explain in the main chapters of the book, there are many solutions and tools at hand, ready to energise and empower a growing coalition of activists, engineers, social entrepreneurs, researchers, creatives, humanitarians, and more.

These solutions can help us all to transcend our present-day preoccupations, our unnecessary divisions, our individual agendas, and our inherited human limitations.

Going forwards, these solutions mean that, with wise choices, constraints which have long overshadowed human existence can soon be lifted:

  • Instead of physical decay and growing age-related infirmity, an abundance of health and longevity awaits us.
  • Instead of collective foolishness and blinkered failures of reasoning, an abundance of intelligence and wisdom is within our reach.
  • Instead of morbid depression and emotional alienation – instead of envy and egotism – we can achieve an abundance of mental and spiritual wellbeing.
  • Instead of a society laden with deception, abuses of power, and divisive factionalism, we can embrace an abundance of democracy – a flourishing of transparency, access, mutual support, collective insight, and opportunity for all, with no one left behind.

For more information about the book and its availability, see here. I’ll be interested to hear your feedback!

14 June 2019

Fully Automated Luxury Communism: a timely vision

I find myself in a great deal of agreement with Fully Automated Luxury Communism (“FALC”), the provocative but engaging book by Novara Media Co-Founder and Senior Editor Aaron Bastani.

It’s a book that’s going to change the conversation about the future.

It starts well, with six short vignettes, “Six characters in search of a future”. Then it moves on, with the quality consistently high, to sections entitled “Chaos under heaven”, “New travellers”, and “Paradise found”. Paradise! Yes, that’s the future which is within our grasp. It’s a future in which, as Bastani says, people will “lead fuller, expanded lives, not diminished ones”.

The comment about “diminished lives” is a criticism of at least some parts of the contemporary green movement:

To the green movement of the twentieth century this is heretical. Yet it is they who, for too long, unwisely echoed the claim that ‘small is beautiful’ and that the only way to save our planet was to retreat from modernity itself. FALC rallies against that command, distinguishing consumption under fossil capitalism – with its commuting, ubiquitous advertising, bullshit jobs and built-in obsolescence – from pursuing the good life under conditions of extreme supply. Under FALC we will see more of the world than ever before, eat varieties of food we have never heard of, and lead lives equivalent – if we so wish – to those of today’s billionaires. Luxury will pervade everything as society based on waged work becomes as much a relic of history as the feudal peasant and medieval knight.

The book is full of compelling turns of phrase that made me think to myself, “I wish I had thought of saying that”. They are phrases that are likely to be heard increasingly often from now on.

The book also contains ideas and examples that I have myself used on many occasions in my own writing and presentation over the years. Indeed, the vision and analysis in FALC has a lot in common with the vision and analysis I have offered, most recently in Sustainable Superabundance, and, in more depth, in my earlier book Transcending Politics.

Four steps in the analysis

In essence, FALC sets out a four-step problem-response-problem-response sequence:

  1. A set of major challenges facing contemporary society – challenges which undermine any notion that social development has somehow already reached a desirable “end of history”
  2. A set of technological innovations, which Bastani calls the “Third Disruption”, with the potential not only to solve the severe challenges society is facing, but also to significantly improve human life
  3. A set of structural problems with the organisation of the economy, which threaten to frustrate and sabotage the positive potential of the Third Disruption
  4. A set of changes in attitude – and political programmes to express these changes – that will allow, after all, the entirety of society to fully benefit from the Third Disruption, and attain the “luxury” paradise the book describes.

In more detail:

First, Bastani highlights five challenges that, in combination, pose (as he puts it) “threats whose scale is civilisational”:

  • Growing resource scarcity – particularly for energy, minerals and fresh water
  • Accelerating climate change and other consequences of global warming
  • Societal aging, as life expectancy increases and birth rates concurrently fall, invalidating the assumptions behind pension schemes and, more generally, the social contract
  • A growing surplus of global poor who form an ever-larger ‘unnecessariat’ (people with no economic value to contribute)
  • A new machine age which will herald ever-greater technological unemployment as progressively more physical and cognitive labour is performed by machines, rather than humans.

Second, Bastani points to a series of technological transformations that comprise an emerging “Third Disruption” (following the earlier disruptions of the Agricultural and Industrial Revolutions). These transformations apply information technology to fields such as renewable energy, food production, resource management (including asteroid mining), healthcare, housing, and education. The result of these transformations could (“if we want it”, Bastani remarks) be a society characterised by the terms “post-scarcity” and “post-work”.

Third, this brings us to the deeper problem, namely the way society puts too much priority on the profit motive.

Transcending capitalism

The economic framework known as capitalism has generated huge amounts of innovation in products and services. These innovations have taken place because entrepreneurs have been motivated to create and distribute new items for exchange and profit. But in circumstances when profits would be small, there’s less motivation to create the goods and services. To the extent that goods and services are nowadays increasingly dependent on information, this poses a problem, since information involves no intrinsic costs when it is copied from one instance to another.

Increasingly, what’s special about a product isn’t the materials from which it is composed, but the set of processes (that is, information) used to manipulate those materials to create the product. Increasingly, what’s special about a service isn’t the tacit skills of the people delivering that service, but the processes (that is, information) by which any reasonably skilled person can be trained to deliver that service. All this leads to pressures for the creation of “artificial scarcity” that prohibits the copying of certain types of information.

The fact that goods and services become increasingly easy to duplicate should be seen as a positive. It should mean lower costs all round. It should mean that more people can access good quality housing, good quality education, good quality food, and good quality clean energy. It’s something that society should welcome enthusiastically. However, since profits are harder to achieve in these circumstances, many business leaders (and the hangers-on who are dependent on these business leaders) wish to erect barriers and obstacles anew. Rather than embracing post-scarcity, they wish to extend the prevalence of scarcity.

This is just one example of the “market failures” which can arise from unfettered capitalism. In my own book Sustainable Superabundance, five of the twelve chapters end with a section entitled “Beyond the profit motive”. It’s not that I view the profit motive as inherently bad. Far from it. Instead, it’s that there are many problems in letting the profit motive dominate other motivations. That’s why we need to look beyond the profit motive.

In much the same way, Bastani recognises capitalism as an essential precursor to the fully automated luxury communism he foresees. Here, as in much of his thinking, he draws inspiration from the writing of Karl Marx. Bastani notes that,

In contrast to his portrayal by critics, Marx was often lyrical about capitalism. His belief was that despite its capacity for exploitation, its compulsion to innovate – along with the creation of a world market – forged the conditions for social transformation.

Bastani quotes Marx writing as follows in 1848:

The bourgeoisie … has been the first to show what man’s activity can bring about. It has accomplished wonders far surpassing Egyptian pyramids, Roman aqueducts, and Gothic cathedrals; it has conducted expeditions that put in the shade all former Exoduses of nations and crusades.

By the way, don’t be put off by the word “communism” in the book’s title. There’s no advocacy here of repeating what previous self-declared communist regimes have done. Communism was not possible until the present time, since it depends on technology having reached a sufficiently advanced state. Bastani explains it as follows:

While it is true that a number of political projects have labelled themselves communist over the last century, the aspiration was neither accurate nor – as we will go on to see – technologically possible. ‘Communism’ is used here for the benefit of precision; the intention being to denote a society in which work is eliminated, scarcity replaced by abundance and where labour and leisure blend into one another. Given the possibilities arising from the Third Disruption, with the emergence of extreme supply in information, labour, energy and resources, it should be viewed not only as an idea adequate to our time but impossible before now.

And to emphasise the point:

FALC is not the communism of the early twentieth century, nor will it be delivered by storming the Winter Palace.

The technologies needed to deliver a post-scarcity, post-work society – centred around renewable energy, automation and information – were absent in the Russian Empire, or indeed anywhere else until the late 1960s…

Creating communism before the Third Disruption is like creating a flying machine before the Second. You could conceive of it – and indeed no less a genius than Leonardo Da Vinci did precisely that – but you could not create it. This was not a failure of will or of intellect, but simply an inevitability of history.

Marx expected a transformation from capitalism to communism within his own lifetime. He would likely have been very surprised at the ability of capitalism to reinvent itself in the face of the many challenges and difficulties of subsequent decades. Marx’s failure to predict the actual course of capitalism is one factor people cite to justify their disregard for Marxism. The question, however, is whether his analysis was merely premature rather than completely wrong. Bastani argues for the former. The internal tensions of a profit-led society have caused a series of large financial and economic crashes, but have not, so far, led to an effective transition away from profit-seeking towards abundance-seeking. However, Bastani argues, the stakes are now so high that the pursuit of profits-at-all-costs cannot continue.

This brings us to the fourth phase of the argument – the really critical one. If there are problems with capitalism, what is to be done? Rather than storming any modern-day Winter Palace, where should a fervour for change best be applied?

Solutions

Bastani’s answer starts by emphasising that the technologies of the Third Disruption, by themselves, provide no guarantee of a move to a society with ample abundance. Referring to Melvin Kranzberg’s laws of technology, Bastani observes that

How technology is created and used, and to whose advantage, depends on the political, ethical and social contexts from which it emerges.

In other words, ideas and structures play a key role. To increase the chances of optimal benefits from the technologies of the Third Disruption, ideas prevalent in society will need to change.

The first change in ideas is a different attitude towards one of the dominant ideologies of our time, sometimes called neoliberalism. Bastani refers at various points to “market fundamentalism”. This is the idea that free pursuit of profits will inevitably result in the best outcome for society as a whole – that the free market is the best tool to organise the distribution of resources. In this viewpoint, regulations should be resisted, where they interfere with the ability of businesses to offer new products and services to the market. Workers’ rights should be resisted too, since they will interfere with the ability of businesses to lower wages and reassign tasks overseas. And so on.

Bastani has a list of examples of gross social failures arising from pursuit of neoliberalism. This includes the collapse in 2018 of Carillion, the construction and facilities management company. Bastani notes:

With up to 90 per cent of Carillion’s work subcontracted out, as many as 30,000 businesses faced the consequences of its ideologically driven mismanagement. Hedge funds in the City, meanwhile, made hundreds of millions from speculating on its demise.

Another example is the tragedy of the 2017 fire at the 24-storey Grenfell Tower in West London, in which 72 people perished:

The neoliberal machine has human consequences that go beyond spreadsheets and economic data. Beyond, even, in-work poverty and a life defined by paying ever higher rents to wealthy landlords and fees to company shareholders. As bad as those are they pale beside its clearest historic expression in a generation: the derelict husk of Grenfell Tower…

A fire broke out which would ravage the building in a manner not seen in Britain for decades. The primary explanation for its rapid, shocking spread across the building – finished in 1974 and intentionally designed to minimise the possibility of such an event – was the installation of flammable cladding several years earlier, combined with poor safety standards and no functioning sprinklers – all issues highlighted by the residents’ Grenfell Action Group before the fire.

The cladding itself, primarily composed of polyethylene, is as flammable as petroleum. Advances in material science mean we should be building homes that are safer, and more efficient, than ever before. Instead a cut-price approach to housing the poor prevails, prioritising external aesthetics for wealthier residents. In the case of Grenfell that meant corners were cut and lives were lost. This is not a minor political point and shows the very real consequences of ‘self-regulation’.

Bastani is surely right that greater effort is needed to ensure everyone understands the various failure modes of free markets. A better appreciation is overdue of the positive role that well-designed regulations can play in ensuring greater overall human flourishing in the face of corporations that would prefer to put their priorities elsewhere. The siren calls of market fundamentalism need to be resisted.

I would add, however, that a different kind of fundamentalism needs to be resisted and overcome too. This is anti-market fundamentalism. As I wrote in the chapter “Markets and fundamentalists” in Transcending Politics,

Anti-market fundamentalists see the market system as having a preeminently bad effect on the human condition. The various flaws with free markets… are so severe, say these critics, that the most important reform to pursue is to dismantle the free market system. That reform should take a higher priority than any development of new technologies – AI, genetic engineering, stem cell therapies, neuro-enhancers, and so on. Indeed, if these new technologies are deployed whilst the current free market system remains in place, it will, say these critics, make it all the more likely that these technologies will be used to oppress rather than liberate.

I believe that both forms of fundamentalism (pro-market and anti-market) need to be resisted. I look forward to wiser management of the market system, rather than dismantling it. In my view, key to this wise management is the reform and protection of a number of other social institutions that sit alongside markets – a free press, free judiciary, independent regulators, and, yes, independent politicians.

I share the view of political scientists Jacob S. Hacker and Paul Pierson, articulated in their fine 2016 book American Amnesia: Business, Government, and the Forgotten Roots of Our Prosperity, that the most important social innovation of the 20th century was the development of the mixed economy. In a mixed economy, effective governments work alongside the remarkable capabilities of the market economy, steering it and complementing it. Here’s what Hacker and Pierson have to say about the mixed economy:

The mixed economy spread a previously unimaginable level of broad prosperity. It enabled steep increases in education, health, longevity, and economic security.

These writers explain the mixed economy by an elaboration of Adam Smith’s notion of “the invisible hand”:

The political economist Charles Lindblom once described markets as being like fingers: nimble and dexterous. Governments, with their capacity to exercise authority, are like thumbs: powerful but lacking subtlety and flexibility. The invisible hand is all fingers. The visible hand is all thumbs. Of course, one wouldn’t want to be all thumbs. But one wouldn’t want to be all fingers either. Thumbs provide countervailing power, constraint, and adjustments to get the best out of those nimble fingers.

The characterisation by Hacker and Pierson of the positive role of government is, to my mind, spot on. It’s backed up in their book by many instructive episodes from American history, going all the way back to the revolutionary founders:

  • Governments provide social coordination of a type that fails to arise by other means of human interaction, such as free markets
  • Markets can accomplish a great deal, but they’re far from all-powerful. Governments ensure that suitable investment takes place of the sort that would not happen, if it was left to each individual to decide by themselves. Governments build up key infrastructure where there is no short-term economic case for individual companies to invest to create it
  • Governments defend the weak from the powerful. They defend those who lack the knowledge to realise that vendors may be on the point of selling them a lemon and then beating a hasty retreat. They take actions to ensure that social free-riders don’t prosper, and that monopolists aren’t able to take disproportionate advantage of their market dominance
  • Governments prevent all the value in a market from being extracted by forceful, well-connected minority interests, in ways that would leave the rest of society impoverished. They resist the power of “robber barons” who would impose numerous tolls and charges, stifling freer exchange of ideas, resources, and people. Therefore governments provide the context in which free markets can prosper (but which those free markets, by themselves, could not deliver).

It’s a deeply troubling development that the positive role of enlightened government is something that is poorly understood in much of contemporary public discussion. Instead, as a result of a hostile barrage of ideologically-driven misinformation, more and more people are calling for a reduction in the scope and power of government. That tendency – the tendency towards market fundamentalism – urgently needs to be resisted. But at the same time, we also need to resist the reverse tendency – the tendency towards anti-market fundamentalism – the tendency to belittle the latent capabilities of free markets.

To Bastani’s credit, he avoids advocating any total government control over planning of the economy. Instead, he offers praise for Eastern European Marxist writers such as Michał Kalecki, Włodzimierz Brus, and Kazimierz Łaski, who advocated important roles for market mechanisms in the approach to the communist society in which they all believed. Bastani comments,

[These notions were] expanded further in 1989 with Brus and Łaski claiming that under market socialism, publicly owned firms would have to be autonomous – much as they are in market capitalist systems – and that this would necessitate a socialised capital market… Rather than industrial national monoliths being lauded as the archetype of economic efficiency, the authors argued for a completely different kind of socialism declaring, ‘The role of the owner-state should be separated from the state as an authority in charge of administration … (enterprises) have to become separated not only from the state in its wider role but also from one another.’

Bastani therefore supports a separation of two roles:

  • The political task of establishing the overall direction and framework for the development of the economy
  • The operational task of creating goods and services within that framework – a task that may indeed utilise various market mechanisms.

Key to establishing the overall direction is superseding society’s reliance on the GDP measure. Bastani is particularly good in his analysis of the growing shortcomings of GDP (Gross Domestic Product), and of what must be included in its replacement, which he calls an “Abundance Index”:

Initially such an index would integrate CO2 emissions, energy efficiency, the falling cost of energy, resources and labour, the extent to which UBS [Universal Basic Services] had been delivered, leisure time (time not in paid employment), health and lifespan, and self-reported happiness. Such a composite measure, no doubt adapted to a variety of regional and cultural differences, would be how we assess the performance of post-capitalist economies in the passage to FALC. This would be a scorecard for social progress assessing how successful the Third Disruption is in serving the common good.
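To make the idea concrete, the composite measure Bastani describes could be sketched as a weighted average of normalised indicators. The component names, weights, and values below are hypothetical illustrations of the mechanics, not figures taken from FALC:

```python
# Illustrative sketch of a composite "Abundance Index".
# All component names, weights, and values are hypothetical.

def normalise(value, worst, best):
    """Map a raw indicator onto a 0-1 scale, where 1 is best.
    Works whether higher or lower raw values are better."""
    return (value - worst) / (best - worst)

def abundance_index(indicators, weights):
    """Weighted average of normalised indicators, on a 0-100 scale."""
    assert set(indicators) == set(weights)
    total_weight = sum(weights.values())
    score = sum(weights[k] * indicators[k] for k in indicators)
    return 100 * score / total_weight

# Hypothetical snapshot: each indicator already normalised to 0-1.
indicators = {
    "co2_reduction": 0.40,        # progress on cutting emissions
    "energy_cost_decline": 0.55,  # how far energy costs have fallen
    "ubs_coverage": 0.30,         # universal basic services delivered
    "leisure_time": 0.45,         # time not in paid employment
    "health_and_lifespan": 0.70,
    "self_reported_happiness": 0.60,
}
weights = {k: 1.0 for k in indicators}  # equal weights, for illustration

print(round(abundance_index(indicators, weights), 1))  # → 50.0
```

The interesting design decisions all live outside the code: which indicators to include, how to normalise them, and how to weight them across “regional and cultural differences”, as the quoted passage acknowledges.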

Other policies Bastani recommends in FALC include:

  • Revised priorities for central banks – so that they promote increases of the Abundance Index, rather than simply focusing on the control of inflation
  • Step by step increases in UBS (Universal Basic Services) – rather than the UBI (Universal Basic Income) that is often advocated these days
  • Re-localisation of economies through what Bastani calls “progressive procurement and municipal protectionism”.

But perhaps the biggest recommendation Bastani makes is for the response to society’s present political issues to be a “populist” one.

Populism and its dangers

I confess that the word “populist” made me anxious. I worry about groundswell movements motivated by emotion rather than clear-sightedness. I worry about subgroups of citizens who identify themselves as “the true people” (or “the real people”) and who take any democratic victory as a mandate for them to exclude any sympathy for minority viewpoints. (“You lost. Get over it!”) I worry about demagogues who rouse runaway emotional responses by scapegoating easy targets (such as immigrants, overseas governments, transnational organisations, “experts”, “the elite”, or culturally different subgroups).

In short, I was more worried by the word “populist” than the word “communist”.

As it happens – thankfully – that’s different from the meaning of “populist” that Bastani has in mind. He writes,

For the kind of change required, and for it to last in a world increasingly at odds with the received wisdom of the past, a populist politics is necessary. One that blends culture and government with ideas of personal and social renewal.

He acknowledges that some thinkers will disagree with this recommendation:

Others, who may agree about the scale and even urgent necessity of change, will contend that such a radical path should only be pursued by a narrow technocratic elite. Such an impulse is understandable if not excusable, for the suspicion that democracy unleashes ‘the mob’ is as old as the idea itself. What is more, a superficial changing of the guard exclusively at the level of policy-making is easier to envisage than building a mass political movement – and far simpler to execute as a strategy. Yet the truth is any social settlement imposed without mass consent, particularly given the turbulent energies unleashed by the Third Disruption, simply won’t endure.

In other words, voters as a whole must be able to understand how the changes ahead, if well managed, will benefit everyone, not just in a narrow economic sense, but in the sense of liberating people from previous constraints.

I have set out similar ideas, under the term “superdemocracy”, described as follows:

A renewal of democracy in which, rather than the loudest and richest voices prevailing, the best insights of the community are elevated and actioned…

The active involvement of the entire population, both in decision-making, and in the full benefits of [technology]…

Significantly improved social inclusion and resilience, whilst upholding diversity and liberty – overcoming human tendencies towards tribalism, divisiveness, deception, and the abuse of power.

That last proviso is critical and deserves repeating: “…overcoming human tendencies towards tribalism, divisiveness, deception, and the abuse of power”. Otherwise, any movements that build popular momentum risk devouring themselves in time, in the way that the French Revolution sent Maximilien Robespierre to the guillotine, and the Bolshevik Revolution led to the deaths of many of the original revolutionaries following absurd show trials.

You’ll find no such proviso in FALC. Bastani writes,

Pride, greed and envy will abide as long as we do.

He goes on to offer pragmatic advice,

The management of discord between humans – the essence of politics – [is] an inevitable feature of any society we share with one another.

Indeed, that is good advice. We all need to become better at managing discord. However, writing as a transhumanist, I believe we can, and must, do better. The faults within human nature are something which the Third Disruption (to use Bastani’s term) will increasingly allow us to address and transcend.

Consider the question: Is it possible to significantly improve politics, over the course of, say, the next dozen years, without first significantly improving human nature?

Philosophies of politics can in principle be split into four groups, depending on the answer they give to that question:

  1. We shouldn’t try to improve human nature; that’s the route to hell
  2. We can have a better politics without any change in human nature
  3. Improving human nature will turn out to be relatively straightforward; let’s get cracking
  4. Improving human nature will be difficult but is highly desirable; we need to carefully consider the potential scenarios, with an open mind, and then make our choices.

For the avoidance of doubt, the fourth of these positions is the one I advocate. In contrast, I believe Bastani would favour the second answer – or maybe the first.

Transcending populism

(The following paragraphs are extracted from the chapter “Humans and superhumans” of my book Transcending Politics.)

We humans are sometimes angelic, yet sometimes diabolic. On occasion, we find ways to work together on a transcendent purpose with wide benefits. But on other occasions, we treat each other abominably. Not only do we go to war with each other, but our wars are often accompanied by hideous so-called “war crimes”. Our religious crusades, whilst announced in high-minded language, have involved the subjugation or extermination of hundreds of thousands of members of opposing faiths. The twentieth century saw genocides on a scale never before experienced. For a different example of viciousness, the comments attached to YouTube videos frequently show intense hatred and vitriol.

As technology puts more power in our hands, will we become more angelic, or more diabolic? Probably both, at the same time.

A nimbleness of mind can coincide with a harshness of spirit. Just because someone has more information at their disposal, that’s no guarantee the information will be used to advance beneficial initiatives. Instead, that information can be mined and contoured to support whatever course of action someone has already selected in their heart.

Great intelligence can be coupled with great knowledge, for good but also for ill. The outcome in some sorry cases is greater vindictiveness, greater manipulation, and greater enmity. Enhanced cleverness can make us experts in techniques to suppress inconvenient ideas, to distort inopportune findings, and to tarnish independent thinkers. We can find more devious ways to mislead and deceive people – and, perversely, to mislead and deceive ourselves. In this way, we could create the mother of all echo chambers. It would take only a few additional steps for obsessive human superintelligence to produce unprecedented human malevolence.

Transhumanists want to ask: can’t we find a way to alter the expression of human nature, so that we become less likely to use our new technological capabilities for malevolence, and more likely to use them for benevolence? Can’t we accentuate the angelic, whilst diminishing the diabolic?

To some critics, that’s an extremely dangerous question. If we mess with human nature, they say, we’ll almost certainly make things worse rather than better.

Far preferable, in this analysis, is to accept our human characteristics as a given, and to evolve our social structures and cultural frameworks with these fixed characteristics in mind. In other words, our focus should be on the likes of legal charters, restorative justice, proactive education, multi-cultural awareness, and effective policing.

My view, however, is that these humanitarian initiatives towards changing culture need to be complemented with transhumanist initiatives to alter the inclinations inside the human soul. We need to address nature at the same time as we address nurture. To do otherwise is to unnecessarily limit our options – and to make it more likely that a bleak future awaits us.

The good news is that, for this transhumanist task, we can take advantage of a powerful suite of emerging new technologies. The bad news is that, like all new technologies, there are risks involved. As these technologies unfold, there will surely be unforeseen consequences, especially when different trends interact in unexpected ways.

Transhumanists have long been well aware of the risks in changing the expression of human nature. Witness the words of caution baked deep into the Transhumanist Declaration. But these risks are no reason for us to abandon the idea. Instead, they are a reason to exercise care and judgement in this project. Accepting the status quo, without seeking to change human nature, is itself a highly risky approach. Indeed, there are no risk-free options in today’s world. If we want to increase our chances of reaching a future of sustainable abundance for all, without humanity being diverted en route to a new dark age, we should leave no avenue unexplored.

Transhumanists are by no means the first set of thinkers to desire positive changes in human nature. Philosophers, religious teachers, and other leaders of society have long called for humans to overcome the pull of “attachment” (desire), self-centredness, indiscipline, “the seven deadly sins” (pride, greed, lust, envy, gluttony, wrath, and sloth), and so on. Where transhumanism goes beyond these previous thinkers is in highlighting new methods that can now be used, or will shortly become available, to assist in the improvement of character.

Collectively these methods can be called “cognotech”. They will boost our all-round intelligence: emotional, rational, creative, social, spiritual, and more. Here are some examples:

  • New pharmacological compounds – sometimes called “smart drugs”
  • Gentle stimulation of the brain by a variety of electromagnetic methods – something that has been trialled by the US military
  • Alteration of human biology more fundamentally, by interventions at the genetic, epigenetic, or microbiome level
  • Vivid experiences within multi-sensory virtual reality worlds that bring home to people the likely consequences of their current personal trajectories (from both first-person and third-person points of view), and allow them to rehearse changes in attitude
  • The use of “intelligent assistance” software that monitors our actions and offers us advice in a timely manner, similar to the way that a good personal friend will occasionally volunteer wise counsel; intelligent assistants can also strengthen our positive characteristics by wise selection of background music, visual imagery, and “thought for the day” aphorisms to hold in mind.

Technological progress can also improve the effectiveness of various traditional methods for character improvement:

  • The reasons why meditation, yoga, and hypnosis can have beneficial results are now more fully understood than before, enabling major improvements in the efficacy of these practices
  • Education of all sorts can be enhanced by technology such as interactive online video courses that adapt their content to the emerging needs of each different user
  • Prompted by alerts generated by online intelligent assistants, real-world friends can connect at critical moments in someone’s life, in order to provide much-needed personal support
  • Information analytics can resolve some of the long-running debates about which diets – and which exercise regimes – are the ones that will best promote all-round health for given individuals.

The technoprogressive feedback cycle

One criticism of the initiative I’ve just outlined is that it puts matters the wrong way round.

I’ve been describing how individuals can, with the aid of technology as well as traditional methods, raise themselves above their latent character flaws, and can therefore make better contributions to the political process (either as voters or as actual politicians). In other words, we’ll get better politics as a result of getting better people.

However, an opposing narrative runs as follows. So long as our society is full of emotional landmines, it’s a lot to expect people to become more emotionally competent. So long as we live in a state of apparent siege, immersed in psychological conflict, it’s a big ask for people to give each other the benefit of the doubt, in order to develop new bonds of trust. Where people are experiencing growing inequality, a deepening sense of alienation, a constant barrage of adverts promoting consumerism, and an increasing foreboding about an array of risks to their wellbeing, it’s not reasonable to urge them to make the personal effort to become more compassionate, thoughtful, tolerant, and open-minded. They’re more likely to become angry, reactive, intolerant, and closed-minded. Who can blame them? Therefore – so runs this line of reasoning – it’s more important to improve the social environment than to urge the victims of that social environment to learn to turn the other cheek. Let’s stop obsessing about personal ethics and individual discipline, and instead put every priority on reducing the inequality, alienation, consumerist propaganda, and risk perception that people are experiencing. Instead of fixating upon possibilities for technology to rewire people’s biology and psychology, let’s hurry up and provide a better social safety net, a fairer set of work opportunities, and a deeper sense that “we’re all in this together”.

I answer this criticism by denying that the causation runs only one way. We shouldn’t pick just a single route of influence – either that better individuals will result in a better society, or that a better society will enable the emergence of better individuals. On the contrary, there’s a two-way flow of influence.

Yes, there’s such a thing as psychological brutalisation. In a bad environment, the veneer of civilisation can quickly peel away. Youngsters who would, in more peaceful circumstances, instinctively help elderly strangers to cross the road, can quickly degrade in times of strife into obnoxious, self-obsessed bigots. But that path doesn’t apply to everyone. Others in the same situation take the initiative to maintain a cheery, contemplative, constructive outlook. Environment influences the development of character, but doesn’t determine it.

Accordingly, I foresee a positive feedback cycle:

  • With the aid of technological assistance, more people – whatever their circumstances – will be able to strengthen the latent “angelic” parts of their human nature, and to hold in check the latent “diabolic” aspects
  • As a result, at least some citizens will be able to take wiser policy decisions, enabling an improvement in the social and psychological environment
  • The improved environment will, in turn, make it easier for other positive personal transformations to occur – involving a larger number of people, and having a greater impact.

One additional point deserves to be stressed. The environment that influences our behaviour involves not just economic relationships and the landscape of interpersonal connections, but also the set of ideas that fill our minds. To the extent that these ideas give us hope, we can find extra strength to resist the siren pull of our diabolic nature. These ideas can help us focus our attention on positive, life-enhancing activities, rather than letting our minds shrink and our characters deteriorate.

This indicates another contribution of transhumanism to building a comprehensively better future. By painting a clear, compelling image of sustainable abundance, credibly achievable in just a few decades, transhumanism can spark revolutions inside the human heart.

That potential contribution brings us back to similar ideas in FALC. Bastani wishes for a populist transformation of the public consciousness, one that includes inspiring new ideas for how everyone can flourish in a post-scarcity, post-work society.

I’m all in favour of inspiring new ideas. The big question, of course, is whether these new ideas skate over important omissions that will undermine the whole project.

Next steps

I applaud FALC for the way it advances serious discussion about a potentially better future – a potentially much better future – that could be attained in just a few decades.

But just as FALC indicates a reason why communism could not be achieved before the present time, I want to indicate a reason why the FALC project could likewise fail.

Communism was impossible, Bastani says, before the technologies of the Third Disruption provided the means for sufficient abundance of energy, food, education, material goods, and so on. In turn, my view is that communism will be impossible (or unlikely) without attention being paid to the proactive transformation of human nature.

We should not underestimate the potential of the technologies of the Third Disruption. They won’t just provide more energy, food, education, and material goods. They won’t just enable people to have healthier bodies throughout longer lifespans. They will also enable all of us to attain better levels of mental and emotional health – psychological and spiritual wellbeing. If we want it.

That’s why the Abundance 2035 goals on which I am presently working contain a wider set of ambitions than feature in FALC. For example, these goals include aspirations that, by 2035,

  • The fraction of people with mental health problems will be 1% or less
  • Voters will no longer routinely assess politicians as self-serving, untrustworthy, or incompetent.

To join a discussion about the Abundance 2035 goals (and about a set of interim targets to be achieved by 2025), check out this London Futurists event taking place at Newspeak House on Monday 1st July.

To hear FALC author Aaron Bastani in discussion of his ideas, check out this Virtual Futures event, also taking place at Newspeak House, on Tuesday 25th June.

Finally, for an all-round assessment of the relevance of transhumanism to building a (much) better future, check out TransVision 2019, happening at Birkbeck College on the weekend of 6-7 July, where 22 different speakers will be sharing their insights.

7 June 2019

Feedback on what goals the UK should have in mind for 2035

Filed under: Abundance, BHAG, politics, TPUK, vision — Tags: , , , , — David Wood @ 1:56 pm

Some political parties are preoccupied with short-term matters.

It’s true that many short-term matters demand attention. But we need to take the time to consider, as well, some important longer-term risks and issues.

If we give these longer-term matters too little attention, we may wake up one morning and bitterly regret our previous state of distraction. By then, we may have missed the chance to avoid an enormous setback. It could also be too late to take advantage of what previously was a very positive opportunity.

For these reasons, the Transhumanist Party UK seeks to raise the profile of a number of transformations that could take place in the UK between now and 2035.

Rather than having a manifesto for the next, say, five years, the Party is developing a vision for the year 2035 – a vision of much greater human flourishing.

It’s a vision in which there will be enough for everyone to have an excellent quality of life. No one should lack access to healthcare, shelter, nourishment, information, education, material goods, social engagement, free expression, or artistic endeavour.

The vision also includes a set of strategies by which the current situation (2019) could be transformed, step by step, into the desired future state (2035).

Key to these strategies is for society to take wise advantage of the remarkable capabilities of twenty-first century science and technology: robotics, biotech, neurotech, greentech, collabtech, artificial intelligence, and much more. These technologies can provide all of us with the means to live better than well – to be healthier and fitter than ever before; nourished emotionally and spiritually as well as physically; and living at peace with ourselves, the environment, and our neighbours both near and far.

Alongside science and technology, there’s a vital role that politics needs to play:

  • Action to encourage the kind of positive collaboration which might otherwise be undermined by free-riders
  • Action to adjust the set of subsidies, incentives, constraints, and legal frameworks under which we all operate
  • Action to protect the citizenry as a whole from the abuse of power by any groups with monopoly or near-monopoly status
  • Action to ensure that the full set of “externalities” (both beneficial and detrimental) of market transactions are properly considered, in a timely manner.

To make this vision more concrete, the Party wishes to identify a set of specific goals for the UK for the year 2035. At present, there are 16 goals under consideration. These goals are briefly introduced in a video:

As you can see, the video invites viewers to give their feedback, by means of an online survey. The survey collects opinions about the various goals: are they good as they stand? Too timid? Too ambitious? A bad idea? Uninteresting? Or something else?

The survey also invites ideas about other goals that should perhaps be added into the mix.

Since the survey has been launched, feedback has been accumulating. I’d like to share some of that feedback now, along with some of my own personal responses.

The most unconditionally popular goal so far

Of the 16 goals proposed, the one with the highest proportion of “Good as it stands” responses is Goal 4, “Thanks to innovations in recycling, manufacturing, and waste management, the UK will be zero waste, and will have no adverse impact on the environment.”

(To see the rationale for each goal, along with ideas on measurement, the current baseline, and the strategy to achieve the goal, see the document on the Party website.)

That goal has, so far, been evaluated as “Good as it stands” by 84% of respondents.

One respondent gave this comment:

Legislation and Transparency are equally as important here, to gain the public’s trust that there is actual quantified benefits from this, or rather to de-abstractify recycling and make it more tangible and not just ‘another bin’

My response: succeeding with this goal will involve more than the actions of individuals putting materials into different recycling bins.

Research from the Stockholm Resilience Centre has identified nine “planetary boundaries” where human activity is at risk of pushing the environment into potentially very dangerous states of affairs.

For each of these planetary boundaries, the same themes emerge:

  • Methods are known that would replace present unsustainable practices with sustainable ones.
  • By following these methods, life would be plentiful for all, without detracting in any way from the potential for ongoing flourishing in the longer term.
  • However, the transition from unsustainable to sustainable practices requires overcoming very significant inertia in existing systems.
  • In some cases, what’s also required is vigorous research and development, to turn ideas for new solutions into practical realities.
  • Unfortunately, in the absence of short-term business cases, this research and development fails to receive the investment it requires.

In each case, the solution also follows the same principles. Society as a whole needs to agree on prioritising research and development of various solutions. Society as a whole needs to agree on penalties and taxes that should be applied to increasingly discourage unsustainable practices. And society as a whole needs to provide a social safety net to assist those people whose livelihoods are adversely impacted by these changes.

Left to its own devices, the free market is unlikely to reach the same conclusions. Instead, because it fails to assign proper values to various externalities, the market will produce harmful results. Accordingly, these are cases when society as a whole needs to constrain and steer the operation of the free market. In other words, democratic politics needs to exert itself.

2nd equal most popular goals

The 2nd equal most popular goal is Goal 7, “There will be no homelessness and no involuntary hunger”, with 74% of responses judging it “Good as it stands”. Disagreeing, 11% of respondents judged it as “Too ambitious”. Here’s an excerpt from the proposed strategy to achieve this goal:

The construction industry should be assessed, not just on its profits, but on its provision of affordable, good quality homes.

Consider the techniques used by the company Broad Sustainable Building, when it erected a 57-storey building in Changsha, capital city of Hunan province in China, in just 19 working days. That’s a rate of three storeys per day. Key to that speed was the use of prefabricated units. Other important innovations in construction techniques include 3D printing, robotic construction, inspection by aerial drones, and new materials with unprecedented strength and resilience.

Similar techniques can in principle be used, not just to generate new buildings where none presently exist, but also to refurbish existing buildings – regenerating them from undesirable hangovers from previous eras into highly desirable contemporary accommodation.

With sufficient political desire, these techniques offer the promise that prices for property over the next 16 years might follow the same remarkable downwards trajectory witnessed in many other product areas – such as TVs, LCD screens, personal computers and smartphones, kitchen appliances, home robotics kits, genetic testing services, and many types of clothing…

Finally, a proportion of cases of homelessness arise, not from shortage of available accommodation, but from individuals suffering psychological issues. This element of homelessness will be addressed by the measures reducing mental health problems to less than 1% of the population.

The other 2nd equal most popular goal is Goal 3, “Thanks to improved green energy management, the UK will be carbon-neutral”, also with 74% of responses judging it “Good as it stands”. In this case, most of the dissenting opinions (16%) held that the goal is “Too timid” – namely, that carbon neutrality should be achieved before 2035.

For the record, 4th equal in this ranking, with 68% unconditional positive assessment, were:

  • Goal 6: “World-class education to postgraduate level will be freely available to everyone via online access”
  • Goal 16: “The UK will be part of an organisation that maintains a continuous human presence on Mars”

Least popular goals

At the other end of this particular spectrum, three goals are currently tied with the least support in the form stated: 32%.

This includes Goal 9, “The UK will be part of a global “open borders” community of at least 25% of the earth’s population”. One respondent gave this comment:

Seems absolutely unworkable, would require other countries to have same policy, would have to all be developed countries. Massively problematic and controversial with no link to ideology of transhumanism

And here’s another comment:

No need to work for a living, no homelessness and open borders. What can go wrong?

And yet another:

This can’t happen until wealth/resource distribution is made equitable – otherwise we’d all be crammed in Bladerunner style cities. Not a desirable outcome.

My reply is that the detailed proposal isn’t for unconditional free travel between any two countries, but for a system that includes many checks and balances. As for the relevance to transhumanism, the actual relevance is to the improvement of human flourishing. Freedom of movement opens up many new opportunities. Indeed, migration has been found to have considerable net positive effects on the UK, including on productivity, public finances, cultural richness, and individuals’ well-being. Flows of money and ideas in the reverse direction also benefit the immigrants’ countries of origin.

Another equal bottom goal, by this ranking, is Goal 10, “Voters will no longer routinely assess politicians as self-serving, untrustworthy, or incompetent”. 26% of respondents rated this as “Too ambitious”, and 11% as “Uninteresting”.

My reply in this case is that politicians in at least some other countries have a higher reputation than in the UK. These countries include Denmark (the top of the list), Switzerland, Netherlands, Luxembourg, Norway, Finland, Sweden, and Iceland.

What’s more, a number of practices – combining technological innovation with social innovation – seem capable of increasing the level of trust and respect for politicians:

  • Increased transparency, to avoid any suspicions of hidden motivations or vested interests
  • Automated real-time fact-checking, so that politicians know any distortions of the truth will be quickly pointed out
  • Encouragement of individual politicians with high ethical standards and integrity
  • Enforcement of penalties in cases when politicians knowingly pass on false information
  • Easier mechanisms for the electorate to be able to quickly “recall” a politician when they have lost the trust of voters
  • Improvements in mental health for everyone, including politicians, thereby diminishing tendencies for dysfunctional behaviour
  • Diminished power for political parties to constrain how individual politicians express themselves, allowing more politicians to speak according to their own conscience.

A role can also be explored for regular psychometric assessment of politicians.

The third goal in this grouping of the least popular is Goal 13, “Cryonic suspension will be available to all, on point of death, on the NHS”. 26% of respondents judged this as “Too ambitious”, and 11% as “A bad idea”. One respondent commented “Why not let people die when they are ready?” and another simply wrote “Mad shit”.

It’s true that there currently are many factors that discourage people from signing up for cryonics preservation:

  • Costs
  • Problems arranging transport of the body overseas to a location where the storage of bodies is legal
  • The perceived low likelihood of a subsequent successful reanimation
  • Lack of evidence of reanimation of larger biological organs
  • Dislike of appearing to be a “crank”
  • Apprehension over tension from family members (exacerbated if family members expect to inherit funds that are instead allocated to cryopreservation services)
  • Occasional mistrust over the motives of the cryonics organisations (which are sometimes alleged – with no good evidence – to be motivated by commercial considerations)
  • Uncertainty over which provider should be preferred.

However, I foresee a big change in the public mindset when there’s a convincing demonstration of successful reanimation of larger biological organisms or organs. What’s more, as in numerous other fields of life, costs will decline and quality will increase as the total number of experiences of a product or service grows. These are known as scale effects.

Goals receiving broad support

Now let’s consider a different ranking, when the votes for “Good as it stands” and “Too timid” are added together. This indicates strong overall support for the idea of the goal, with the proviso that many respondents would prefer a more aggressive timescale.

Actually this doesn’t change the results much. Compared to the goals already covered, there’s only one new entrant in the top 5, namely at position 3, with a combined positive rating of 84%. That’s for Goal 1, “The average healthspan in the UK will be at least 90 years”. 42% rated this “Good as it stands” and another 42% rated it as “Too timid”.

For the record, top equal by this ranking were Goal 3 (74% + 16%) and Goal 4 (84% + 5%).

The only other goal with a “Too timid” rating of greater than 30% was Goal 15, “Fusion will be generating at least 1% of the energy used in the UK” (32%).
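The various rankings used in this post are all simple aggregations over the survey’s response categories. Here’s a minimal sketch of that computation in Python – the response counts below are hypothetical, chosen only so the resulting percentages echo those quoted for Goals 1, 3, and 4:

```python
# Illustrative sketch: ranking goals by aggregated survey responses.
# The counts are hypothetical (19 respondents per goal), chosen only to
# reproduce the percentages quoted in the post for Goals 1, 3, and 4.

responses = {
    "Goal 1 (healthspan 90+)": {"good": 8,  "timid": 8, "ambitious": 2, "bad": 1},
    "Goal 3 (carbon-neutral)": {"good": 14, "timid": 3, "ambitious": 1, "bad": 1},
    "Goal 4 (zero waste)":     {"good": 16, "timid": 1, "ambitious": 1, "bad": 1},
}

def pct(counts, *cats):
    """Percentage of a goal's responses falling into the given categories."""
    total = sum(counts.values())
    return round(100 * sum(counts[c] for c in cats) / total)

# Ranking 1: unconditional support ("Good as it stands" alone)
unconditional = sorted(responses, key=lambda g: -pct(responses[g], "good"))

# Ranking 2: broad support ("Good as it stands" plus "Too timid")
broad = sorted(responses, key=lambda g: -pct(responses[g], "good", "timid"))

print("Top by unconditional support:", unconditional[0])
print("Top by broad support:", broad[0])
```

The same helper handles the other rankings in this post (for example, summing the “A bad idea” and “Too ambitious” categories); note that ties, such as Goals 3 and 4 on broad support, are left in dictionary order by Python’s stable sort.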

The goals most actively disliked

Here’s yet another way of viewing the data: the goals which had the largest number of “A bad idea” responses.

By this measure, the goal most actively disliked (with 21% judging it “A bad idea”) was Goal 11, “Parliament will involve a close partnership with a ‘House of AI’ (or similar) revising chamber”. One respondent commented they were “wary – AI could be Stalinist in all but name in their goal setting and means”.

My reply: To be successful, the envisioned House of AI will need the following support:

  • All algorithms used in these AI systems need to be in the public domain, and to pass ongoing reviews about their transparency and reliability
  • Opaque algorithms, or other algorithms whose models of operation remain poorly understood, need to be retired, or evolved in ways that address their shortcomings
  • The House of AI will not be dependent on any systems owned or operated by commercial entities; instead, it will be “AI of the people, by the people, for the people”.

Public funding will likely need to be allocated to develop these systems, rather than waiting for commercial companies to create them.

The second most actively disliked goal was Goal 5, “Automation will remove the need for anyone to earn money by working” (16%). Here are three comments from respondents:

Unlikely to receive support, most people like the idea of work. Plus there’s nothing the party can do to achieve this automation, depends on tech progress. UBI could be good.

What will be the purpose of humans?

It removes the need to work because their needs are being met by…. what? Universal Basic Income? Automation by itself cuts out the need for employers to pay humans to do the work but it doesn’t by itself ensure that people’s need will be met otherwise.

I’ve written on this topic many times in the past – including in Chapter 4, “Work and purpose”, of my previous book, “Transcending Politics” (audio recording available here). There absolutely are political actions which can be taken to accelerate the appropriate technological innovations, and to defuse the tensions that will arise if the fruits of technological progress end up dramatically increasing the inequality levels in society.

Note, by the way, that this goal does not focus on bringing in a UBI. There’s a lot more to it than that.

Clearly there’s work to be done to improve the communication of the underlying ideas in this case!

Goals that are generally unpopular

For a final way of ranking the data, let’s add together the votes for “A bad idea” and “Too ambitious”. This indicates ideas which are generally unpopular, in their current form of expression.

Top of this ranking, with 42%, is Goal 8, “The crime rate will have been reduced by at least 90%”. Indeed, the 42% all judged this goal as “Too ambitious”. One comment received was

Doesn’t seem within the power of any political party to achieve this, except a surveillance state

Here’s an excerpt of the strategy proposed to address this issue:

The initiatives to improve mental health, to eliminate homelessness, and to remove the need to work to earn an income, should all contribute to reducing the social and psychological pressures that lead to criminal acts.

However, even if only a small proportion of the population remain inclined to criminal acts, the overall crime rate could still remain too high. That’s because small groups of people will be able to take advantage of technology to carry out lots of crime in parallel – via systems such as “ransomware as a service” or “intelligent malware as a service”. The ability of technology to multiply human power means that just a few people with criminal intent could give rise to large amounts of crime.

That raises the priority for software systems to be highly secure and reliable. It also raises the priority of intelligent surveillance of the actions of people who might carry out crimes. This last measure is potentially controversial, since it allows part of the state to monitor citizens in a way that could be considered deeply intrusive. For this reason, access to this surveillance data will need to be restricted to trustworthy parts of the overall public apparatus – similar to the way that doctors are trusted with sensitive medical information. In turn, this highlights the importance of initiatives that increase the trustworthiness of key elements of our national infrastructure.

On a practical basis, initiatives to understand and reduce particular types of crime should be formed, starting with the types of crime (such as violent crime) that have the biggest negative impact on people’s lives.

Second in this ranking of general unpopularity, at 37%, is Goal 13, on cryonics, already mentioned above.

Third, at 32%, is Goal 11, on the House of AI, also already mentioned.

Suggestions for other goals

Respondents offered a range of suggestions for other goals that should be included. Here is a sample, along with brief replies from me:

Economic growth through these goals needs to be quantified somehow.

I’m unconvinced that economic growth needs to be prioritised. Instead, what’s important is agreement on a more appropriate measure to replace the use of GDP. That could be a good goal to consider.

Support anti-ageing research, gene editing research, mind uploading tech, AI alignment research, legalisation of most psychedelics

In general the goals have avoided targeting technology for technology’s sake. Instead, technology is introduced only because it supports the goals of improved overall human flourishing.

I think there should be a much greater focus in our education system on developing critical thinking skills, and a more interdisciplinary approach to subjects should be considered. Regurgitating information is much less important in a technologically advanced society where all information is a few clicks away and our schooling should reflect that.

Agreed: the statement of the education goal should probably be reworded to take these points into account.

A new public transport network; Given advances in technology regarding AI and electrical vehicles, a goal on par with others you’ve listed here would be to develop a transport system to replace cars with a decentralised public transportation network, whereby ownership of cars is replaced with the use of automated vehicles on a per journey basis, thus promoting better use of resources and driving down pollution, alongside hopefully reducing vehicular incidents.

That’s an interesting suggestion. I wonder how others think about it?

Routine near-earth asteroid mining to combat earthside resource depletion.

Asteroid mining is briefly mentioned in Goal 4, on recycling and zero waste.

Overthrow of capitalism and class relations.

Ah, I would prefer to transcend capitalism than to overthrow it. I see two mirror-image problems in discussing the merits of free markets: pro-market fundamentalism, and anti-market fundamentalism. I say a lot more on that topic in Chapter 9, “Markets and fundamentalism”, of my book “Transcending Politics”.

The right to complete freedom over our own bodies should be recognised in law. We should be free to modify our bodies and minds through e.g. implants, drugs, software, bioware, as long as there is no significant risk of harm to others.

Yes, I see the value of including such a goal. We’ll need work to explore what’s meant by “risk of harm to others”.

UK will be part of the moon-shot Human WBE [whole brain emulation] project after being successful in supporting the previous Mouse WBE moon-shot project.

Yes, that’s an interesting suggestion too. Personally I see the WBE project as being longer-term, but hey, that may change!

Achieving many of the laudable goals rests on reshaping the current system of capitalism, but that itself is not a goal. It should be.

I’m open to suggestions for wording on this, to make it measurable.

Deaths due to RTA [road traffic accidents] cut to near zero

That’s another interesting suggestion. But it may not be on the same level as some of the existing ones. I’m open to feedback here!

Next steps

The Party is very grateful for the general feedback received so far, and looks forward to receiving more!

Discussion can also take place on the Party’s Discourse, https://discourse.transhumanistparty.org.uk/. Anyone is welcome to create an account on that site and become involved in the conversations there.

Some parts of the Discourse are reserved for paid-up members of the Party. It will be these members who take the final decisions as to which goals to prioritise.
