dw2

3 November 2022

Four options for avoiding an AI cataclysm

Let’s consider four hard truths, and then four options for a solution.

Hard truth 1: Software has bugs.

Even when clever people write the software, and that software passes numerous verification tests, any complex software system generally still has bugs. If the software encounters a circumstance outside its verification suite, it can go horribly wrong.
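As a toy illustration (my own, not drawn from any real system), here is a Python function that passes every test in its verification suite, yet silently produces nonsense the moment it encounters an input the suite never exercised:

```python
def braking_distance(speed_kmh: float) -> float:
    """Estimate a vehicle's stopping distance in metres.

    Looks plausible, and passes the whole verification suite below.
    """
    return (speed_kmh / 10) ** 2

# The verification suite only covers "normal" speeds.
for speed in (30, 50, 70, 100):
    assert braking_distance(speed) > 0

# Outside the tested range, the logic quietly goes wrong: a faulty
# sensor reporting a negative speed still yields a positive-looking
# "stopping distance", which a controller might happily trust.
print(braking_distance(-50))  # 25.0 – plausible-looking nonsense
```

The bug was always there; the tests simply never visited the circumstance that exposes it. That is the essence of hard truth 1.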

Hard truth 2: Just because software becomes more powerful, that won’t make all the bugs go away.

Newer software may run faster. It may incorporate input from larger sets of training data. It may gain extra features. But none of these developments mean the automatic removal of subtle errors in the logic of the software, or shortcomings in its specification. It might still reach terrible outcomes – just quicker than before!

Hard truth 3: As AI becomes more powerful, there will be more pressure to deploy it in challenging real-world situations.

Consider the real-time management of:

  • Complex arsenals of missiles, anti-missile missiles, and so on
  • Geoengineering interventions, which are intended to bring the planet’s climate back from the brink of a cascade of tipping points
  • Devious countermeasures against the growing weapons systems of a group (or nation) with a dangerously unstable leadership
  • Social network conversations, where changing sentiments can have big implications for electoral dynamics or for the perceived value of commercial brands
  • Ultra-hot plasmas inside whirling magnetic fields in nuclear fusion energy generators
  • Incentives for people to spend more money than is wise, on addictive gambling sites
  • The buying and selling of financial instruments, to take advantage of changing market sentiments.

In each case, powerful AI software could be a very attractive option. A seductive option. Especially if it has been written by clever people, and appears to have a good track record of delivering results.

Until it goes wrong. In which case the result could be cataclysmic. (Accidental nuclear war. The climate walloped past a tipping point in the wrong direction. Malware going existentially wrong. Partisan outrage propelling a psychological loose cannon over the edge. Easy access to weapons of mass destruction. Etc.)

Indeed, the real risk of AI cataclysm – as opposed to the Hollywood version of any such risk – is that an AI system may acquire so much influence over human society and our surrounding environment that a mistake in that system could cataclysmically reduce human wellbeing all over the world. Billions of lives could be extinguished, or turned into a very pale reflection of their present state.

Such an outcome could arise in any of four ways – four catastrophic error modes. In brief, these are:

  1. Implementation defect
  2. Design defect
  3. Design overridden
  4. Implementation overridden.

Hard truth 4: There are no simple solutions to the risks described above.

What’s more, people who naively assume that a simple solution can easily be put in place (or already exists) are making the overall situation worse. They encourage complacency, whereas greater attention is urgently needed.

But perhaps you disagree?

That’s the context for the conversation in Episode 11 of the London Futurists Podcast, which was published yesterday morning.

In just thirty minutes, that episode dug deep into some of the ideas in my recent book The Singularity Principles. Co-host Calum Chace and I found plenty on which to agree, but had differing opinions on one of the most important questions.

Calum listed three suggestions that people sometimes make for how the dangers of potentially cataclysmic AI might be handled.

In response, I described a different approach – something that Calum said would be a fourth idea for a solution. As you can hear from the recording of the podcast, I evidently left him unconvinced.

Therefore, I’d like to dig even deeper.

Option 1: Humanity gets lucky

It might be the case that AI software, once it is smart enough, will embody an unshakeable commitment toward humanity having the best possible experience.

Such software won’t miscalculate (after all, it is superintelligent). If there are flaws in how it has been specified, it will be smart enough to notice these flaws, rather than stubbornly following through on the letter of its programming. (After all, it is superintelligent.)

Variants of this wishful thinking exist. In some variants, what will guarantee a positive outcome isn’t just a latent tendency of superintelligence toward superbenevolence. It’s the invisible hand of the free market that will guide consumer choices away from software that might harm users, toward software that never, ever, ever goes wrong.

My first response is that software which appears to be bug free can, nevertheless, harbour deep mistakes. It may be superintelligent, but that doesn’t mean it’s omniscient or infallible.

Second, software which is bug free may be monstrously efficient at doing what some of its designers had in mind – manipulating consumers into actions which increase the share price of a given corporation, despite all the externalities arising.

Moreover, it’s too much of a stretch to say that greater intelligence always makes you wiser and kinder. There are plenty of dreadful counterexamples, from humans in the worlds of politics, crime, business, academia, and more. Who is to say that a piece of software with an IQ equivalent to 100,000 will be sure to treat us humans any better than we humans sometimes treat swarms of insects (e.g. ant colonies) that get in our way?

Do you feel lucky? My view is that any such feeling, in these circumstances, is rash in the extreme.

Option 2: Safety engineered in

Might a team of brilliant AI researchers, Mary and Flo (to make up a couple of names), devise a clever method that will ensure their AI (once it is built) never harms humanity?

Perhaps the answer lies in some advanced mathematical wizardry. Or in chiselling a 21st century version of Asimov’s Laws of Robotics into the chipsets at the heart of computer systems. Or in switching from “correlation logic” to “causation logic”, or some other kind of new paradigm in AI systems engineering.

Of course, I wish Mary and Flo well. But their ongoing research won’t, by itself, prevent lots of other people releasing their own unsafe AI first. Especially when these other engineers are in a hurry to win market share for their companies.

Indeed, the considerable effort being invested by various researchers and organisations in a search for a kind of fix for AI safety is, arguably, a distraction from a sober assessment of the bigger picture. Better technology, better product design, better mathematics, and better hardware can all be part of the full solution. But that full solution also needs, critically, to include aspects of organisational design, economic incentives, legal frameworks, and political oversight. That’s the argument I develop in my book. We ignore these broader forces at our peril.

Option 3: Humans merge with machines

If we can’t beat them, how about joining them?

If human minds are fused into silicon AI systems, won’t the good human sense of these minds counteract any bugs or design flaws in the silicon part of the resulting hybrid?

With such a merger in place, human intelligence will automatically be magnified, as AI improves in capability. Therefore, we humans wouldn’t need to worry about being left behind. Right?

I see two big problems with this idea. First, so long as human intelligence is rooted in something like the biology of the brain, the mechanisms for any such merger may only allow relatively modest increases in human intelligence. Our biological brains would be bottlenecks that constrain the speed of progress in this hybrid case. Compared to pure AIs, the human-AI hybrid would, after all, be left behind in this intelligence race. So much for humans staying in control!

An even bigger problem is the realisation that a human with superhuman intelligence is likely to be at least as unpredictable and dangerous as an AI with superhuman intelligence. The magnification of intelligence will allow that superhuman human to do all kinds of things with great vigour – settling grudges, acting out fantasies, demanding attention, pursuing vanity projects, and so on. Recall: power tends to corrupt. Such a person would be able to destroy the earth. Worse, they might want to do so.

Another way to state this point is that, just because AI elements are included inside a person, that won’t magically ensure that these elements become benign, or are subject to the full control of the person’s best intentions. Consider as comparisons what happens when biological viruses enter a person’s body, or when a cancer grows there. In neither case does the intruding element lose its ability to cause damage, just on account of being part of a person who has humanitarian instincts.

This reminds me of the statement that is sometimes heard, in defence of accelerating the capabilities of AI systems: “I am not afraid of artificial intelligence. I am afraid of human stupidity”.

In reality, what we need to fear is the combination of imperfect AI and imperfect humanity.

The conclusion of this line of discussion is that we need to do considerably more than enable greater intelligence. We also need to accelerate greater wisdom – so that any beings with superhuman intelligence will operate truly beneficently.

Option 4: Greater wisdom

The cornerstone insight of ethics is that, just because we can do something, and indeed may even want to do that thing, it doesn’t mean we should do that thing.

Accordingly, human societies since prehistory have placed constraints on how people should behave.

Sometimes, moral sanction is sufficient: people constrain their actions in deference to public opinion. In other cases, restrictions are codified into laws and regulations.

Likewise, just because a corporation could boost its profits by releasing a new version of its AI software, that doesn’t mean it should release that software.

But what is the origin of these “should” imperatives? And how do we resolve conflicts, when two different groups of people champion two different sets of ethical intuitions?

Where can we find a viable foundation for ethical restrictions – something more solid than “we’ve always done things like this” or “this feels right to me” or “we need to submit to the dictates in our favourite holy scripture”?

Welcome to the world of philosophy.

It’s a world that, according to some observers, has made little progress over the centuries. People still argue over fundamentals. Deontologists square off against consequentialists. Virtue ethicists stake out a different position.

It’s a world in which it is easier to poke holes in the views held by others than to defend a consistent view of your own.

But it’s my position that the impending threat of cataclysmic AI impels us to reach a wiser agreement.

It’s like how the devastation of the Covid pandemic impelled society to find significantly quicker ways to manufacture, verify, and deploy vaccines.

It’s like how society can come together, remarkably, in a wartime situation, notwithstanding the divisions that previously existed.

In the face of the threats of technology beyond our control, minds should focus, with unprecedented clarity. We’ll gradually build a wider consensus in favour of various restrictions and, yes, in favour of various incentives.

What’s your reaction? Is option 4 simply naïve?

Practical steps forward

Rather than trying to “boil the ocean” of philosophical disputes over contrasting ethical foundations, we can, and should, proceed in a kaizen manner.

To start with, we can give our attention to specific individual questions:

  • What are the circumstances when we should welcome AI-powered facial recognition software, and when should we resist it?
  • What are the circumstances when we should welcome AI systems that supervise aspects of dangerous weaponry?
  • What are the circumstances that could transform AI-powered monitoring systems from dangerous to helpful?

As we reach some tentative agreements on these individual matters, we can take the time to highlight principles with potential wider applicability.

In parallel, we can revisit some of the agreements (explicit and implicit) for how we measure the health of society and the liberties of individuals:

  • The GDP (Gross Domestic Product) statistics that provide a perspective on economic activities
  • The UDHR (Universal Declaration of Human Rights) statement that was endorsed in the United Nations General Assembly in 1948.

I don’t deny it will be hard to build consensus. It will be even harder to agree how to enforce the guidelines arising – especially in light of the wretched partisan conflicts that are poisoning the political processes in a number of parts of the world.

But we must try. And with some small wins under our belt, we can anticipate momentum building.

These are some of the topics I cover in the closing chapters of The Singularity Principles.

I by no means claim to know all the answers.

But I do believe that these are some of the most important questions to address.

And one thing that could help us make progress is – you guessed it – AI. In the right circumstances, AI can help us think more clearly, and can propose new syntheses of our previous ideas.

Thus today’s AI can provide stepping stones to the design and deployment of better, safer, wiser AI tomorrow. That’s provided we maintain human oversight.

Footnotes

The image above includes a design by Pixabay user Alexander Antropov, used with thanks.

See also this article by Calum in Forbes, Taking Back Control Of The Singularity.

8 June 2022

Pre-publication review: The Singularity Principles

Filed under: books, Singularity, Singularity Principles — David Wood @ 9:23 am

I’ve recently been concentrating on finalising the content of my forthcoming new book, The Singularity Principles.

The reasons why I see this book as both timely and necessary are explained in the extract below, taken from the introduction to the book.

This link provides pointers to the full text of every chapter in the book. (Or use the links in the listing below of the extended table of contents.)

Please get in touch with me if you would prefer to read the pre-publication text in PDF format, rather than on the online HTML pages linked above.

At this stage, I will gratefully appreciate any feedback:

  • Aspects of the book that I should consider changing
  • Aspects of the book that you particularly like.

Feedback on any parts of the book will be welcome. It’s by no means necessary for you to read the entire text. (However, I hope you will find it sufficiently interesting that you will end up reading more than you originally planned…)

By the way, it’s a relatively short book, compared to some others I’ve written. The word count is a bit over 50,000. That works out at around 260 pages of fairly large text on 5″x8″ paper.

I will also appreciate any commendations or endorsements, which I can include with the publicity material for the book, to encourage more people to pay attention to it.

The timescale I have in mind: I will release electronic and physical copies of the book some time early next month (July), followed up soon afterward by an audio version.

Therefore, if you’re thinking of dipping into any chapters to provide feedback and/or endorsements, the sooner the better!

Thanks in anticipation!

Preface

This book is dedicated to what may be the most important concept in human history, namely, the Singularity – what it is, what it is not, the steps by which we may reach it, and, crucially, how to make it more likely that we’ll experience a positive singularity rather than a negative singularity.

For now, here’s a simple definition. The Singularity is the emergence of Artificial General Intelligence (AGI), and the associated transformation of the human condition. Spoiler alert: that transformation will be profound. But if we’re not paying attention, it’s likely to be profoundly bad.

Despite the importance of the concept of the Singularity, the subject receives nothing like the attention it deserves. When it is discussed, it often receives scorn or ridicule. Alas, you’ll hear sniggers and see eyes rolling.

That’s because, as I’ll explain, there’s a kind of shadow around the concept – an unhelpful set of distortions that make it harder for people to fully perceive the real opportunities and the real risks that the Singularity brings.

These distortions grow out of a wider confusion – confusion about the complex interplay of forces that are leading society to the adoption of ever-more powerful technologies, including ever-more powerful AI.

It’s my task in this book to dispel the confusion, to untangle the distortions, to highlight practical steps forward, and to attract much more serious attention to the Singularity. The future of humanity is at stake.

Let’s start with the confusion.

Confusion, turbulence, and peril

The 2020s could be called the Decade of Confusion. Never before has so much information washed over everyone, leaving us, all too often, overwhelmed, intimidated, and distracted. Former certainties have dimmed. Long-established alliances have fragmented. Flurries of excitement have pivoted quickly to chaos and disappointment. These are turbulent times.

However, if we could see through the confusion, distraction, and intimidation, what we should notice is that human flourishing is, potentially, poised to soar to unprecedented levels. Fast-changing technologies are on the point of providing a string of remarkable benefits. We are near the threshold of radical improvements to health, nutrition, security, creativity, collaboration, intelligence, awareness, and enlightenment – with these improvements being available to everyone.

Alas, these same fast-changing technologies also threaten multiple sorts of disaster. These technologies are two-edged swords. Unless we wield them with great skill, they are likely to spin out of control. If we remain overwhelmed, intimidated, and distracted, our prospects are poor. Accordingly, these are perilous times.

These dual future possibilities – technology-enabled sustainable superabundance, versus technology-induced catastrophe – have featured in numerous discussions that I have chaired at London Futurists meetups going all the way back to March 2008.

As these discussions have progressed, year by year, I have gradually formulated and refined what I now call the Singularity Principles. These principles are intended:

  • To steer humanity’s relationships with fast-changing technologies,
  • To manage multiple risks of disaster,
  • To enable the attainment of remarkable benefits,
  • And, thereby, to help humanity approach a profoundly positive singularity.

In short, the Singularity Principles are intended to counter today’s widespread confusion, distraction, and intimidation, by providing clarity, credible grounds for hope, and an urgent call to action.

This time it’s different

I first introduced the Singularity Principles, under that name and with the same general format, in the final chapter, “Singularity”, of my 2021 book Vital Foresight: The Case for Active Transhumanism. That chapter is the culmination of a 642-page book. The preceding sixteen chapters of that book set out at some length the challenges and opportunities that these principles need to address.

Since the publication of Vital Foresight, it has become evident to me that the Singularity Principles require a short, focused book of their own. That’s what you now hold in your hands.

The Singularity Principles is by no means the only new book on the subject of the management of powerful disruptive technologies. The public, thankfully, are waking up to the need to understand these technologies better, and numerous authors are responding to that need. As one example, the phrase “Artificial Intelligence” forms part of the title of scores of new books.

I have personally learned many things from some of these recent books. However, to speak frankly, I find myself dissatisfied by the prescriptions these authors have advanced. These authors generally fail to appreciate the full extent of the threats and opportunities ahead. And even if they do see the true scale of these issues, the recommendations these authors propose strike me as being inadequate.

Therefore, I cannot keep silent.

Accordingly, I present in this new book the content of the Singularity Principles, brought up to date in the light of recent debates and new insights. The book also covers:

  • Why the Singularity Principles are sorely needed
  • The source and design of these principles
  • The significance of the term “Singularity”
  • Why there is so much unhelpful confusion about “the Singularity”
  • What’s different about the Singularity Principles, compared to recommendations of other analysts
  • The kinds of outcomes expected if these principles are followed
  • The kinds of outcomes expected if these principles are not followed
  • How you – dear reader – can, and should, become involved, finding your place in a growing coalition
  • How these principles are likely to evolve further
  • How these principles can be put into practice, all around the world – with the help of people like you.

The scope of the Principles

To start with, the Singularity Principles can and should be applied to the anticipation and management of the NBIC technologies that are at the heart of the current, fourth industrial revolution. NBIC – nanotech, biotech, infotech, and cognotech – is a quartet of interlinked technological disruptions which are likely to grow significantly stronger as the 2020s unfold. Each of these four technological disruptions has the potential to fundamentally transform large parts of the human experience.

However, the same set of principles can and should also be applied to the anticipation and management of the core technology that will likely give rise to a fifth industrial revolution, namely the technology of AGI (artificial general intelligence), and the rapid additional improvements in artificial superintelligence that will likely follow fast on the heels of AGI.

The emergence of AGI is known as the technological singularity – or, more briefly, as the Singularity.

In other words, the Singularity Principles apply both:

  • To the longer-term lead-up to the Singularity, from today’s fast-improving NBIC technologies,
  • And to the shorter-term lead-up to the Singularity, as AI gains more general capabilities.

In both cases, anticipation and management of possible outcomes will be of vital importance.

By the way – in case it’s not already clear – please don’t expect a clever novel piece of technology, or some brilliant technical design, to somehow solve, by itself, the challenges posed by NBIC technologies and AGI. These challenges extend far beyond what could be wrestled into submission by some dazzling mathematical wizardry, by the incorporation of an ingenious new piece of silicon at the heart of every computer, or by any other “quick fix”. Indeed, the considerable effort being invested by some organisations in a search for that kind of fix is, arguably, a distraction from a sober assessment of the bigger picture.

Better technology, better product design, better mathematics, and better hardware can all be part of the full solution. But that full solution also needs, critically, to include aspects of organisational design, economic incentives, legal frameworks, and political oversight. That’s the argument I develop in the chapters ahead.

Extended table of contents

For your convenience, here’s a listing of the main section headings for all the chapters in this book.

0. Preface

  • Confusion, turbulence, and peril
  • This time it’s different
  • The scope of the Principles
  • Collective insight
  • The short form of the Principles
  • The four areas covered by the Principles
  • What lies ahead

1. Background: Ten essential observations

  • Tech breakthroughs are unpredictable (both timing and impact)
  • Potential complex interactions make prediction even harder
  • Changes in human attributes complicate tech changes
  • Greater tech power enables more devastating results
  • Different perspectives assess “good” vs. “bad” differently
  • Competition can be hazardous as well as beneficial
  • Some tech failures would be too drastic to allow recovery
  • A history of good results is no guarantee of future success
  • It’s insufficient to rely on good intentions
  • Wishful thinking predisposes blindness to problems

2. Fast-changing technologies: risks and benefits

  • Technology risk factors
  • Prioritising benefits?
  • What about ethics?
  • The transhumanist stance

2.1 Special complications with artificial intelligence

  • Problems with training data
  • The black box nature of AI
  • Interactions between multiple algorithms
  • Self-improving AI
  • Devious AI
  • Four catastrophic error modes
  • The broader perspective

2.2 The AI Control Problem

  • The gorilla problem
  • Examples of dangers with uncontrollable AI
  • Proposed solutions (which don’t work)
  • The impossibility of full verification
  • Emotion misses the point
  • No off switch
  • The ineffectiveness of tripwires
  • Escaping from confinement
  • The ineffectiveness of restrictions
  • No automatic super ethics
  • Issues with hard-wiring ethical principles

2.3 The AI Alignment Problem

  • Asimov’s Three Laws
  • Ethical dilemmas and trade-offs
  • Problems with proxies
  • The gaming of proxies
  • Simple examples of profound problems
  • Humans disagree
  • No automatic super ethics (again)
  • Other options for answers?

2.4 No easy solutions

  • No guarantees from the free market
  • No guarantees from cosmic destiny
  • Planet B?
  • Humans merging with AI?
  • Approaching the Singularity

3. What is the Singularity?

  • Breaking down the definition
  • Four alternative definitions
  • Four possible routes to the Singularity
  • The Singularity and AI self-awareness
  • Singularity timescales
  • Positive and negative singularities
  • Tripwires and canary signals
  • Moving forward

3.1 The Singularitarian Stance

  • AGI is possible
  • AGI could happen within just a few decades
  • Winner takes all
  • The difficulty of controlling AGI
  • Superintelligence and superethics
  • Not the Terminator
  • Opposition to the Singularitarian Stance

3.2 A complication: the Singularity Shadow

  • Singularity timescale determinism
  • Singularity outcome determinism
  • Singularity hyping
  • Singularity risk complacency
  • Singularity term overloading
  • Singularity anti-regulation fundamentalism
  • Singularity preoccupation
  • Looking forward

3.3 Bad reasons to deny the Singularity

  • The denial of death
  • How special is the human mind?
  • A credible positive vision

4. The question of urgency

  • Factors causing AI to improve
  • 15 options on the table
  • The difficulty of measuring progress
  • Learning from Christopher Columbus
  • The possibility of fast take-off

5. The Singularity Principles in depth

5.1 Analysing goals and potential outcomes

  • Question desirability
  • Clarify externalities
  • Require peer reviews
  • Involve multiple perspectives
  • Analyse the whole system
  • Anticipate fat tails

5.2 Desirable characteristics of tech solutions

  • Reject opacity
  • Promote resilience
  • Promote verifiability
  • Promote auditability
  • Clarify risks to users
  • Clarify trade-offs

5.3 Ensuring development takes place responsibly

  • Insist on accountability
  • Penalise disinformation
  • Design for cooperation
  • Analyse via simulations
  • Maintain human oversight

5.4 Evolution and enforcement

  • Build consensus regarding principles
  • Provide incentives to address omissions
  • Halt development if principles are not upheld
  • Consolidate progress via legal frameworks

6. Key success factors

  • Public understanding
  • Persistent urgency
  • Reliable action against noncompliance
  • Public funding
  • International support
  • A sense of inclusion and collaboration

7. Questions arising

7.1 Measuring human flourishing

  • Some example trade-offs
  • Updating the Universal Declaration of Human Rights
  • Constructing an Index of Human and Social Flourishing

7.2 Trustable monitoring

  • Moore’s Law of Mad Scientists
  • Four projects to reduce the dangers of WMDs
  • Detecting mavericks
  • Examples of trustable monitoring
  • Watching the watchers

7.3 Uplifting politics

  • Uplifting regulators
  • The central role of politics
  • Toward superdemocracy
  • Technology improving politics
  • Transcending party politics
  • The prospects for political progress

7.4 Uplifting education

  • Top level areas of the Vital Syllabus
  • Improving the Vital Syllabus

7.5 To AGI or not AGI?

  • Global action against the creation of AGI?
  • Possible alternatives to AGI?
  • A dividing line between AI and AGI?
  • A practical proposal

7.6 Measuring progress toward AGI

  • Aggregating expert opinions
  • Metaculus predictions
  • Alternative canary signals for AGI
  • AI index reports

7.7 Growing a coalition of the willing

  • Risks and actions

Image credit

The draft book cover shown above includes a design by Pixabay member Ebenezer42.

15 May 2022

A day in the life of Asimov, 2045

Filed under: vision — David Wood @ 2:39 pm

“Gosh, that’s a hard question”, stuttered Asimov. “I’m… not quite sure which approach to try”.

Asimov’s tutor paused for a moment, then gave a gentle chuckle of encouragement.

“Well,” it offered, with a broad smile, “if you don’t know which approach to try, do you know which approaches you don’t want to try?”

That shift of perspective was just what Asimov needed. A few minutes later, he was making swift progress on a DeepMath question that had previously seemed nigh impossible. Once again, Asimov marvelled at the skills of the tutor. The tutor knew how to bring out the best of Asimov’s thinking skills. And that was just the start of its coaching abilities.

Asimov was midway through the morning’s training session. Training sessions were mandated for everyone over the age of three. They started gradually at first, for the younger children, but from the age of ten onward, everyone was expected to attend for training on seventy-two days each year.

Asimov recalled the popular saying: 20% of the days, humans attend to AGI, and AGI attends to humans 100% of the days.

Asimov also knew well the four reasons why this training system existed, and why people were happy to participate. First, if someone failed to participate, or performed worse than expected during the training, their privileges were gradually withdrawn. They could spend less time in the latest virtual universes. When travelling in the base world, their speeds were restricted, so it took longer to move, for example, from Cambridge to Lagos. The food they were served was slightly less tasty than normal. And so on.

Second, the training was so wonderfully engaging. The challenges it posed differed from what could be obtained in non-training environments. Moreover, it was full of surprises. Whenever Asimov thought he could predict the content of the next day’s training session, he was invariably delighted by unexpected twists and turns. It was the same for everyone he knew. No-one regretted having to take time out of their many other activities to attend training. Instead, they eagerly looked forward to it, every time.

The tutors provided exercises for each participant that were well matched to their previous knowledge, skills, experiences, and temperament. Good results required significant effort, but that effort was well within each person’s capacity. Normally, a training session would complete after three and a half hours in the morning, and another three and a half hours in the afternoon. Occasionally, if the participant had been distracted or disengaged, a session might need to be extended for up to two more hours in an evening session. So long as that concluded satisfactorily, no loss of privileges would result.

Asimov felt pride in the fact that he had never been required to stay for longer than the minimal seven hours in a day. His concentration was excellent, he told himself…

And then he broke off his reverie, remembering that he had to solve another DeepMath puzzle. DeepMath had been discovered by AIs in the 2030s. Humans such as Ramanujan had sometimes come close to it in the past, but AIs made it much more approachable.

There was another pleasant surprise during the day’s lunch break. Angela, his partner for the last two years, joined him for the meal. Asimov noticed that she looked particularly mischievous on this occasion. “What’s on your mind?” he asked. “Oh, I’ll tell you this evening. Assuming you’re a good student and the AGI lets you out on time!” she joked.

At the age of 85, Angela was more than sixty years older than Asimov. His friends and family had been sceptical about the relationship at first. Even his big brother Byron, normally so supportive, had doubted whether it could last. “She’s old enough to be your grandmother”, he had scolded. “Indeed, she has a grandson who is older than you!”

But the wide use of rejuvenation therapies over the last fifteen years meant that octogenarians nowadays looked, and lived, as healthily as much younger people. The relationship had gone from strength to strength. It was a real triumph of complementarity, Asimov thought. And a triumph of medical technology. Most of all, it was a triumph of two remarkable people, enabled to live life to the full.

The afternoon training session focused on survival skills. That was the third reason these sessions were so important. Could humans cope in the event that the AGI stopped functioning, or disappeared off into some parallel dimension? Asimov needed to show that, without using any modern technology, he could gather twigs and then set them on fire, in order to cook a meal of mushrooms and root vegetables.

As he threw himself into that exercise, Asimov wondered whether he was contributing, at that moment, to the fourth aspect of the training. The AGI lacked sentience. There was no consciousness inside that vast digital brain. Aspects of the training were designed, it was said, for the AGI to learn things from human reactions that it could not directly experience itself. Asimov wasn’t sure he entirely believed that theory, but he was gratified to think that, in some aspects, his mind exceeded that of the AGI.

“So, what is it, my ancient wonder?” Asimov asked Angela, who was waiting for him as he exited the training. “What great adventure are you dreaming up this time?”

“My menopause reversal has been completed”, she replied. “It’s time for us to make a baby! Can you imagine what a combination of the two of us would be like?”

Asimov had another question. “But wasn’t your last pregnancy, back in the 1990s, really difficult for you?”

Angela gave a smile that was even more mischievous. “What would you say, dear boy, to ectogenesis? These artificial wombs are completely reliable these days.”

“Gosh, that’s a hard question”, stuttered Asimov. “I’m… not quite sure what to think.”

Footnote

This short story was submitted as part of my entry to the competition described here. For some more details of the world envisioned, this article has answers to 13 related questions.

The image at the top of this page includes a design by Pixabay member OpenClipart-Vectors.

A day in the life of Patricia, 2045

Filed under: vision — David Wood @ 2:14 pm

The music started quietly, and gradually became louder. Patricia’s lips formed into a warm smile of recognition, as she roused from her sleep. That music meant only one thing: her great grandson, Byron, was calling her.

Patricia would normally already be awake at this time of the morning. But last night, she had been playing the latest version of 4D Scrabble with some neighbours in her accommodation block. This new release had been endlessly fascinating, provoking lots of laughter and good-spirited competitive rivalry. It’s marvellous how the software behind 4D Scrabble keeps improving, Patricia thought to herself. The group had finally called it a night at three thirty in the morning.

Her mindphone knew not to disturb her when she was sleeping, unless in emergencies, or for special exceptions. Byron was one of these exceptions. The music that preceded his call had been Byron’s favourite in 2026 – one of the first songs entirely written by an AI to top the hit parade. For his call-ahead music, Byron used a version of that song he had adapted by himself, reflecting some of the quirks of his personality.

Hello young man, she directed her thoughts into the mindphone. To what do I owe the pleasure of this call?

But Patricia already knew the answer. This was no ordinary day. It was a day she had never expected to experience, during the majority of her long life.

Happy Birthday Great Grandma! The thoughts appeared deep inside Patricia’s head, via a mechanism that still seemed magical to her. 115 years young today! Congratulations!

Byron’s voice was joined by several others, from her extended family. Patricia reached for her mindglasses and put them on, in order to add video to the experience.

Don’t forget there’s a big party for you this evening, continued Byron. And we have arranged a special virtual concert for you before that. The performers will be a surprise, but you can expect the best ever simulations of many of your old favourites!

Patricia had an idea what to expect. Her family had organised similar concerts for her in the past. It had seemed to her she had been sitting right next to the Glenn Miller Orchestra, or to Bill Haley and the Comets, or – especially delightful – a youthful-looking Tom Jones as he belted out passionate versions of his famous songs. Each time, the experience had been splendidly different.

But will I have time for my golf game later this morning? Patricia already had plans of her own.

Don’t worry, everything has been scheduled perfectly, came the reply. Thank AGI!

Ninety minutes later, Patricia was standing at the first tee of her local golf course, along with three of her regular golfing buddies. As their health had been enhanced by wave after wave of rejuvenation therapies over the decades, their prowess at golf had improved as well. Patricia was hitting the ball further and straighter than ever. To keep the game interesting, the grass fairways would change their slopes and curves dynamically. It added to the challenge. And their exoskeletons had to be disabled for the duration of the game. At least, that was what the friends had agreed, but there were many other ways the sport could be played.

The only drawback to these golf gatherings was an occasional recollection of former playing partners who had, sadly, died of diseases over the years before new treatments had become available. Sometimes Patricia would also think of James, her beloved husband, who had died of an aggressive cancer in 2003. James had taught her how to play golf back in the 1970s. They had spent 48 years of married life together – thrilling to Bill Haley and the Comets, and then watching children and grandchildren grow up. But James had died long before the birth of Byron, or any of the other great grandchildren. How… unfair, Patricia thought to herself.

Patricia had actually been thinking of James quite a lot over the last few weeks. Byron had persuaded her to engage with an AGI agent that was collecting as much information as possible about James, by talking to everyone alive who still had memories of him. The agent had even roamed through her brain memories whilst she slept. Don’t worry, Great Grandma, Byron had reassured her. In case the AGI finds any ‘naughty’ memories in there, it will never tell anyone!

Then it was time for the concert to begin. Patricia would take part from her own living room, wearing a larger version of her mindphone, for a completely immersive experience. She realised that Byron was in that virtual world too, along with several other family members. They embraced and chatted. Then Byron said, quietly, There’s someone else who can join us, if you wish.

Patricia noticed, in the distance inside the virtual world, a silhouette that was strangely familiar, yet also somehow alien. She caught her breath suddenly. Oh no, she exclaimed. I think I know what’s happening, and I’m not sure I’m ready for this.

The newcomer remained a respectful distance away, and appeared to be standing in a shadow.

He’s not real, of course, Byron explained. He’s no more real than the performers here. After all, Bill Haley has been dead since 1981, and Glenn Miller since 1944. And Great Grandad James has been dead since-

Patricia was overcome with emotion – a mix of joy, fear, excitement, and even a little disgust. This is so strange, she thought.

Sensing a need for privacy, the other family members quietly retreated from the shared virtual reality. Patricia could make up her own mind whether to turn her back on the silhouette, or to call him forward. After so many years, what would she say first, to a replica of a man who had shared her life so completely all these years ago?

The silhouette quietly called Patricia’s name, in the way that only James could do. The long, long wait was over.

Footnote

This short story was submitted as part of my entry to the competition described here. For some more details of the world envisioned, this article has answers to 13 related questions.

The image at the top of this page includes a design by Pixabay member Gordon Johnson.

Timeline to 2045: questions answered

This is a follow-up to my previous post, containing more of the material that I submitted around five weeks ago to the FLI World Building competition. In this case, the requirement was to answer 13 questions, with answers limited to 250 words in each case.

Q1: AGI has existed for years, but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?

The Global AGI safety project was one of the most momentous and challenging in human history.

The centrepiece of that project was the set of “Singularity Principles” that had first appeared in print in the 2021 book Vital Foresight, and which were developed further in subsequent publications – a set of recommendations with the declared goal of increasing the likelihood that oncoming disruptive technological changes would have outcomes that are profoundly positive for humanity, rather than deeply detrimental. The principles split into four sections:

  1. A focus, in advance, on the goals and outcomes that were being sought from particular technologies
  2. Analysis of the intrinsic characteristics that are desirable in technological solutions
  3. Analysis of methods to ensure that development takes place responsibly
  4. And a meta-analysis – principles about how this overall set of recommendations could itself evolve further over time, and principles for how to increase the likelihood that these recommendations would be applied in practice rather than simply being some kind of wishful thinking.

What drove increasing support for these principles was a growing awareness, shared around the world, of the risks of cataclysmic outcomes that could arise all too easily from increasingly powerful AI, even when everyone involved had good intentions. This shared sense of danger caused even profound ideological enemies to gather together on a regular basis to review joint progress toward fulfilment of the Singularity Principles, as well as to evolve and refine these Principles.

Q2: The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?

One of the key principles programmed into every advanced AI, from the late 2020s onward, was that no AI should seize or manipulate resources owned by any other AI. Instead, AIs should operate only with resources that have been explicitly provided to them. That prevented any hostile takeover of less capable AIs by more powerful competitors. Accordingly, a community of different AIs coexisted, with differing styles and capabilities.

However, in parallel, the various AIs naturally started to interact with each other, offering services to each other in response to expressions of need. The outcome of this interaction was a blurring of the boundaries between different AIs. Thus, by the 2040s, it was no longer meaningful to distinguish between what had originally been separate pieces of software. Instead of referring to “the Alphabet AGI” or “the Tencent AGI”, and so on, people just talked about “the AGI” or even “AGI”.

The resulting AGI was, however, put to different purposes in different parts of the world, dependent on the policies pursued by the local political leaders.

Q3: How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?

The 2020s were a decade of turbulence, in which a number of arms races proceeded at pace, and when conflict several times came close to spilling over from being latent and implied (“cold”) to being active (“hot”):

  • The great cyber war of 2024 between Iran and Israel
  • Turmoil inside many countries in 2026, associated with the fall from power of the president of Russia
  • Exchanges of small numbers of missiles between North and South Korea in 2027
  • An intense cyber battle in 2028 over the future of an independent Taiwan.

These conflicts resulted in a renewed “never again” global focus to avoid any future recurrences. A new generation of political leaders resolved that, regardless of their many differences, they would put particular kinds of weapons beyond use.

Key to this “never again” commitment was an agreement on “global AI monitoring” – the use of independent narrow AIs to monitor all developments and deployments of potential weapons of mass destruction. That agreement took inspiration from previous international agreements that instituted regular independent monitoring of chemical and biological weapons.

Initial public distrust of the associated global surveillance systems was overcome, in stages, by demonstrations of the inherently trustworthy nature of the software used in these systems – software that adapted various counterintuitive but profound cryptographic ideas from the blockchain discussions of the early and mid-2020s.

Q4: In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that, if at all?

Between 2024 and 2032, the US switched its politics from a troubled bipolar system, with Republicans and Democrats battling each other with intense hostility, into a multi-party system, with a dynamic fluidity of new electoral groupings. The winner of the 2032 election was, for the first time since the 1850s, from neither of the formerly dominant parties. What enabled this transition was the adoption, in stages, of ranked choice voting, in which electors ranked candidates in order of preference. This change enabled electors to express interest in new parties without fearing their votes would be “wasted” or would inadvertently allow the election of particularly detested candidates.
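The counting mechanism behind ranked choice voting can be illustrated with a minimal instant-runoff sketch (a hypothetical illustration of the general technique, not a description of any specific electoral system in this scenario): each ballot ranks candidates, and the weakest candidate is eliminated round by round until someone holds a majority, so a vote for a minor party is transferred rather than “wasted”.

```python
from collections import Counter

def instant_runoff(ballots):
    """Return the winner from a list of ballots, each a preference-ordered list."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        # Count each ballot toward its highest-ranked surviving candidate
        tally = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tally[choice] += 1
                    break
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        # Stop once the leader holds a strict majority, or no rival remains
        if votes * 2 > total or len(remaining) == 1:
            return leader
        # Otherwise eliminate the candidate with the fewest current votes
        remaining.discard(min(remaining, key=lambda c: tally.get(c, 0)))

# Example: "B" is eliminated first, and that elector's vote transfers to "A"
print(instant_runoff([["A", "B"], ["A", "B"], ["B", "A"], ["C", "B"], ["C", "B"]]))
```

In this example, no candidate has a first-round majority (A: 2, B: 1, C: 2), so B is eliminated and the B-first ballot transfers to A, giving A a 3-of-5 majority.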

The EU led the way in adoption of a “house of AI” as a reviewing body for proposed legislation. Legislation proposed by human politicians was examined by AI, resulting in suggested amendments, along with detailed explanations from the AI of reasons for making these changes. The EU left the ultimate decisions – whether or not to accept the suggestions – in the hands of human politicians. Over time, AI judgements were accepted on more and more occasions, but never uncritically.

China remained apprehensive until the mid-2030s about adopting multi-party politics with full tolerance of dissenting opinions. This apprehension was rooted in historic distrust of the apparent anarchy and dysfunction of politicians who needed to win approval of seemingly fickle electors. However, as AI evidently improved the calibre of online public discussion, with its real-time fact-checking, the Chinese system embraced fuller democratic reforms.

Q5: Is the global distribution of wealth (as measured say by national or international Gini coefficients) more, or less, unequal than 2022’s, and by how much? How did it get that way?

The global distribution of wealth became more unequal during the 2020s before becoming less unequal during the 2030s.

Various factors contributed to inequality increasing:

  • “Winner takes all”: Companies offering second-best products were unable to survive in the marketplace. Swift flows of both information and goods meant that all customers knew about better products and could easily purchase them
  • Financial rewards from the successes of companies increasingly flowed to the owners of the capital deployed, rather than to the people supplying skills and services. That’s because more of the skills and services could be supplied by automation, driving down the salaries that could be claimed by people who were offering the same skills and services
  • The factors that made some products better than others increasingly involved technological platforms, such as the latest AI systems, that were owned by a very small number of companies
  • Companies were able to restructure themselves ingeniously in order to take advantage of tax loopholes and special deals offered by countries desperate for at least some tax revenue.

What caused these trends to reverse was, in short, better politics:

  • Smart collaboration between the national governments of the world, closing tax loopholes
  • Recognition by greater numbers of voters of the profound merits of greater redistribution of the fruits of the remarkable abundance of NBIC technologies, as the percentage of people in work declined, and as the problems of parts of society being “left behind” were more fully recognised.

Q6: What is a major problem that AI has solved in your world, and how did it do so?

AI made many key contributions toward the solution of climate change:

  • By enabling more realistic and complete models of all aspects of the climate, including potential tipping points ahead of major climate phase transitions
  • By improving the design of alternative energy sources, including ground-based geothermal, high-altitude winds, ocean-based waves, space-based solar, and several different types of nuclear energy
  • Very significantly, by accelerating designs of commercially meaningful nuclear fusion
  • By identifying the types of “negative emissions technologies” that had the potential to scale up quickly in effectiveness
  • By accelerating the adoption of improved “cultivated meat” as a source of food with many advantages over animal-based agriculture – addressing issues with land use, water use, antibiotics use, and greenhouse gas emissions, and putting an end to the vile practice of the mass slaughter of sentient creatures
  • By assisting the design of new types of cement, glass, plastics, fertilisers, and other materials whose manufacture had previously caused large emissions of greenhouse gases
  • By recommending the sorts of marketing messages that were most effective in changing the minds of previous opponents of effective action.

To be clear, AI did this as part of “NBIC convergence”, in which there are mutual positive feedback loops between progress in each of nanotech, biotech, infotech, and cognotech.

Q7: What is a new social institution that has played an important role in the development of your world?

The G7 group of the democratic countries with the largest economies transitioned in 2023 into the D16, with a sharper commitment than before to championing the core values of democracy: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

The D16 was envisioned from the beginning as intended to grow in size, to become a global complement to the functioning of the United Nations, able to operate in circumstances that would have resulted in a veto at the UN from countries that paid only lip service to democracy.

One of the first projects of the D16 was to revise the Universal Declaration of Human Rights from the form initially approved by the United Nations General Assembly in 1948, to take account of the opportunities and threats from new technologies, including what are known as “transhuman rights”.

In parallel, another project reached agreement on how to measure an “Index of Human Flourishing”, that could replace the economic measure GDP (Gross Domestic Product) as the de-facto principal indication of wellbeing of societies.

The group formally became the D40 in 2030 and the D90 in 2034. By that time, the D90 was central to agreements to vigorously impose an updated version of the Singularity Principles. Any group anywhere in the world – inside or outside the D90 – that sought to work around these principles, was effectively shut down due to strict economic sanctions.

Q8: What is a new non-AI technology that has played an important role in the development of your world?

Numerous fields have been transformed by atomically precise manufacturing, involving synthetic nanoscale assembly factories. These had been envisioned in various ways by Richard Feynman in 1959 and Eric Drexler in 1986, but did not become commercially viable until the early 2030s.

It had long been recognised that an “existence proof” for nanotechnology was furnished by the operation of ribosomes inside biological cells, with their systematic assembly of proteins from genetic instructions. However, creation of comparable synthetic systems needed to wait for assistance in both design and initial assembly from increasingly sophisticated AI. (DeepMind’s AlphaFold software had given an early indication of these possibilities back in 2021.) Once the process had started, significant self-improvement loops soon accelerated, with each new generation of nanotechnology assisting in the creation of a subsequent better generation.

The benefits flowed both ways: nanotech precision allowed breakthroughs in the manufacture of new types of computer hardware, including quantum computers; these in turn supported better types of AI algorithms.

Nanotech had dramatic positive impact on practices in the production of food, accommodation, clothing, and all sorts of consumer goods. Three areas particularly deserve mention:

  • Precise medical interventions, to repair damage to biological systems
  • Systems to repair damage to the environment as a whole, via a mixture of recycling and regeneration, as well as “negative emissions technologies” operating in the atmosphere
  • Clean energy sources operating at ever larger scale, including atomic-powered batteries

Q9: What changes to the way countries govern the development and/or deployment and/or use of emerging technologies (including AI), if any, played an important role in the development of your world?

Effective governance of emerging technologies involved both voluntary cooperation and enforced cooperation.

Voluntary cooperation – a desire to avoid actions that could lead to terrible outcomes – depended in turn on:

  • An awareness of the risk pathways – similar to the way that Carl Sagan and his colleagues vividly brought to the attention of world leaders in the early 1980s the potential global catastrophe of “nuclear winter”
  • An understanding that the restrictions being accepted would not hinder the development of truly beneficial products
  • An appreciation that everyone would be compelled to observe the same restrictions, and couldn’t gain some short-sighted advantage by breaching the rules.

The enforcement elements depended on:

  • An AI-powered “trustable monitoring system” that was able to detect, through pervasive surveillance, any potential violations of the published restrictions
  • Strong international cooperation, by the D40 and others, to isolate and remove resources from any maverick elements, anywhere in the world, that failed to respect these restrictions.

Public acceptance of trustable monitoring accelerated once it was understood that the systems performing the surveillance could, indeed, be trusted; they would not confer any inappropriate advantage on any grouping able to access the data feeds.

The entire system was underpinned by a vibrant programme of research and education (part of a larger educational initiative known as the “Vital Syllabus”), that:

  • Kept updating the “Singularity Principles” system of restrictions and incentives in the light of improved understanding of the risks and solutions
  • Ensured that the importance of these principles was understood both widely and deeply.

Q10: Pick a sector of your choice (education, transport, energy, communication, finance, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed with AI in your world.

For most of human history, religion had played a pivotal role in shaping people’s outlooks and actions. Religion provided narratives about ultimate purposes. It sanctified social structures. It highlighted behaviour said to be exemplary, as demonstrated in the lives of key religious figures. And it deplored other behaviours said to lead to very bad consequences, if not in the present life, then in an assumed afterlife.

Nevertheless, the philosophical justifications for religions had come under increasing challenge in recent times, with the growth of appreciation of a scientific worldview (including evolution by natural selection), the insights from critical analysis of previously venerated scriptures, and a stark awareness of the tensions between different religions in a multi-polar world.

The decline of influence of religion had both good and bad consequences. Greater freedom of thought and action was accompanied by a shrinking of people’s mental horizons. Without the transcendent appeal of a religious worldview, people’s lives often became dominated instead by egotism or consumerism.

The growth of the transhumanist movement in the 2020s provided one counter to these drawbacks. It was not a religion in the strict sense, but its identification of solutions such as “the abolition of aging”, “paradise engineering”, and “technological resurrection” stirred deep inner personal transformations.

These transformations reached a new level thanks to AGI-facilitated encounters with religious founders, inside immersive virtual reality simulations. New hallucinogenic substances provided extra richness to these experiences. The sector formerly known as “religion” therefore experienced an unexpected renewal. Thank AGI!

Q11: What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2022?

In response to the question “How much longer do you expect to live?”, the usual answer is “at least another hundred years”.

This answer reflects a deep love of life: people are glad to be alive and have huge numbers of quests, passions, projects, and personal voyages that they are enjoying or to which they’re looking forward. The answer also reflects the extraordinary observation that, these days, very few people die. That’s true in all sectors of society, and in all countries of the world. Low-cost high-quality medical treatments are widely available, to reverse diseases that were formerly fatal, and to repair biological damage that had accumulated earlier in people’s lives. People not only live longer but become more youthful.

The core ideas behind these treatments had been clear since the mid-2020s. Biological metabolism generates as a by-product of its normal operation an assortment of damage at the cellular and intercellular levels of the body. Biology also contains mechanisms for the repair of such damage, but over time, these repair mechanisms themselves lose vitality. As a result, people manifest various so-called “hallmarks of aging”. However, various interventions involving biotech and nanotech can revitalise these repair mechanisms. Moreover, other interventions can replace entire biological systems, such as organs, with bio-synthetic alternatives that actually work better than the originals.

Such treatments were feared and even resisted for a while, by activists such as the “naturality advocates”, but the evident improvements these treatments enabled soon won over the doubters.

Q12: In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

In a second country of your choice, which rights are better and which rights are worse respected in your world than in 2022, and why/how?

Regarding the famous phrase, “Everyone has the right to life, liberty and security of person”, all three of these fundamental rights are upheld much more fully, around the world, in 2045 than in 2022:

  • “Life” no longer tends to stop around the age of seventy or eighty; even people aged well over one hundred look forward to continuing to enjoy the right to life
  • “Liberty” involves more choices about lifestyles, personal philosophy, morphological freedom (augmentation and variation of the physical body) and sociological freedom (new structures for families, social groupings, and self-determined nations); importantly, these are not just “choices in theory” but are “choices in practice”, since means are available to support these modifications
  • “Security” involves greater protection from hazards such as extreme weather, pandemics, criminal enterprises, infrastructure hacking, and military attacks.

These improvements in the observation of rights are enabled by technologies of abundance, operated within a much-improved political framework.

Obtaining these benefits involved people agreeing to give up various possible actions that would have led to fewer freedoms and rights overall:

  • “Rights” to pollute the environment or to inflict other negative externalities
  • “Rights” to restrict the education of their girl children
  • “Rights” to experiment with technology without a full safety analysis being concluded.

For a while, some countries like China provided their citizens with only a sham democracy, fearing an irresponsible exercise of that freedom. But by the mid-2030s, that fear had dissipated, and people in all countries gained fuller participatory rights in governance and lifestyle decisions.

Q13: What’s been a notable trend in the way that people are finding fulfilment?

For most of history, right up to the late 2020s, many people viewed themselves through the prism of their occupation or career. “I’m a usability designer”, they might have said. Or “I’m a data scientist” or “I’m a tour guide”, and so on. Their assessment of their own value was closely linked to the financial rewards they obtained from being an employee.

However, as AI became more capable of undertaking all aspects of what had previously been people’s jobs – including portions involving not only diligence and dexterity but also creativity and compassion – there was a significant decline in the proportion of overall human effort invested in employment. By the late 2030s, most people had stopped looking for paid employment, and were content to receive “universal citizens’ dividend” benefits from the operation of sophisticated automated production facilities.

Instead, more and more people found fulfilment by pursuing any of an increasing number of quests and passions. These included both solitary and collaborative explorations in music, art, mathematics, literature, and sport, as well as voyages in parts of the real world and in myriads of fascinating shared online worlds. In all these projects, people found fulfilment, not by performing better than an AI (which would be impossible), but by improving on their own previous achievements, or in friendly competition with acquaintances.

Careful prompting by the AGI helps to maintain people’s interest levels and a sense of ongoing challenge and achievement. AGI has proven to be a wonderful coach.

A year-by-year timeline to 2045

The ground rules for the worldbuilding competition were attractive:

  • The year is 2045.
  • AGI has existed for at least 5 years.
  • Technology is advancing rapidly and AI is transforming the world sector by sector.
  • The US, EU and China have managed a steady, if uneasy, power equilibrium.
  • India, Africa and South America are quickly on the rise as major players.
  • Despite ongoing challenges, there have been no major wars or other global catastrophes.
  • The world is not dystopian and the future is looking bright.

Entrants were asked to submit four pieces of work. One was a new media piece. I submitted this video:

Another required piece was:

A timeline with entries for each year between 2022 and 2045, giving at least two events (e.g. “X invented”) and one data point (e.g. “GDP rises by 25%”) for each year.

The timeline I created dovetailed with the framework from the above video. Since I enjoyed creating it, I’m sharing my submission here, in the hope that it may inspire readers.

(Note: the content was submitted on 11th April 2022.)

2022

US mid-term elections result in log-jammed US governance, widespread frustration, and a groundswell desire for more constructive approaches to politics.

The collapse of a major crypto “stablecoin” results in much wider adverse repercussions than was generally expected, and a new social appreciation of the dangers of flawed financial systems.

Data point: Number of people killed in violent incidents (including homicides and armed conflicts) around the world: 590,000

2023

Fake news spread by social media, driven by a new variant of AI, provokes riots in which more than 10,000 people die, leading to much greater interest in a set of “Singularity Principles” that had previously been proposed to steer the development of potentially world-transforming technologies.

G7 transforms into the D16, consisting of the world’s 16 leading democracies, proclaiming a profound shared commitment to champion norms of: openness; free and fair elections; the rule of law; independent media, judiciary, and academia; power being distributed rather than concentrated; and respect for autonomous decisions of groups of people.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 6.4%

2024

South Korea starts a trial of a nationwide UBI scheme, in the first of what will become in later years a long line of increasingly robust “universal citizens’ dividends” schemes around the world.

A previously unknown offshoot of ISIS releases a bioengineered virus. Fortunately, vaccines are quickly developed and deployed against it. In parallel, a bitter cyber war takes place between Iran and Israel. These incidents lead to international commitments to prevent future recurrences.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 38%

2025

Extreme weather – floods and storms – kills tens of thousands in both North America and Europe. A major trial of geo-engineering – reflection of solar radiation in the stratosphere – is rushed through, causing global political disagreement and then a renewed determination for tangible shared action on climate change.

The US President appoints a Secretary for the Future as a top-level cabinet position. More US states adopt ranked-choice voting, allowing third parties to grow in prominence.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 38%

2026

A song created entirely by an AI tops the hit parade, and initiates a radical new musical genre.

Groundswell opposition to autocratic rule in Russia leads to the fall from power of the president and a new dedication to democracy throughout countries formerly perceived as being within Russia’s sphere of direct influence.

Data point: Net greenhouse gas emissions (including those from land-use changes): 59 billion tons of CO2 equivalent – an unwelcome record.

2027

Metformin approved for use as an anti-aging medicine in a D16 country. Another D16 country recommends nationwide regular usage of a new nootropic drug.

Exchanges of small numbers of missiles between North and South Korea lead to regime change inside North Korea and a rapprochement between the long-bitter enemies.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 9.2%

2028

An innovative nuclear fusion system, with its design assisted by AI, runs for more than one hour and generates significantly more energy than was put in.

As a result of disagreements about the future of an independent Taiwan, an intense destructive cyber battle takes place. At the end, the nations of the world commit more seriously than before to avoiding any future cyber battles.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 41%

2029

A trial of an anti-aging intervention in middle-aged dogs is confirmed to have increased remaining life expectancy by 25% without causing any adverse side effects. Public interest in similar interventions in humans skyrockets.

The UK rejoins a reconfigured EU, as an indication of support for sovereignty that is pooled rather than narrow.

Data point: Proportion of world population with formal cryonics arrangements: 1 in 100,000

2030

Russia is admitted into the D40 – a newly expanded version of the D16. The D40 officially adopts the “Index of Human Flourishing” as a more important metric than GDP, and agrees a revised version of the Universal Declaration of Human Rights, brought up to date with transhuman issues.

First permanent implant in a human of an artificial heart with a new design that draws all required power from the biology of the body rather than any attached battery, and whose pace of operation is under the control of the brain.

Data point: Net greenhouse gas emissions (including those from land-use changes): 47 billion tons of CO2 equivalent – a significant improvement

2031

An AI discovers and explains a profound new way of looking at mathematics, DeepMath, leading in turn to dramatically successful new theories of fundamental physics.

Widespread use of dynamically re-programmed nanobots to treat medical conditions that would previously have been fatal.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 23%

2032

First person reaches the age of 125. Her birthday celebrations are briefly disrupted by a small group of self-described “naturality advocates” who chant “120 is enough for anyone”, but that group has little public support.

D40 countries put in place a widespread “trustable monitoring system” to cut down on existential risks (such as spread of WMDs) whilst maintaining citizens’ trust.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 35.7% 

2033

For the first time since the 1850s, the US President comes from a party other than the Republican and Democratic parties.

An AI system convincingly passes the Turing test, impressing even its previously staunchest critics with its apparent grasp of general knowledge and common sense. The answers it gives to questions involving moral dilemmas also impress previous sceptics.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 58%

2034

The D90 (expanded from the D40) agrees to vigorously impose Singularity Principles rules to avoid inadvertent creation of dangerous AGI.

Atomically precise synthetic nanoscale assembly factories have come of age, in line with the decades-old vision of nanotechnology visionary Eric Drexler, and are proving to have just as consequential an impact on human society as AI.

Data point: Net greenhouse gas *removals*: 10 billion tons of CO2 equivalent – a dramatic improvement

2035

A novel written entirely by an AI reaches the top of the New York Times bestseller list, and is widely celebrated as being the finest piece of literature ever produced.

Successful measures to remove greenhouse gases from the atmosphere, coupled with wide deployment of clean energy sources, lead to a declaration of “victory over runaway climate change”.

Data point: Proportion of earth’s habitable land used to rear animals for human food: 4%

2036

A film created entirely by an AI, without any real human actors, wins Oscar awards.

The last major sceptical holdout, a philosophy professor from an Ivy League university, accepts that AGI now exists. The pope gives his blessing too.

Data point: Proportion of world population with cryonics arrangements: 24%

2037

The last instances of the industrial-scale slaughter of animals for human consumption occur, on account of the worldwide adoption of cultivated (lab-grown) meat.

AGI convincingly explains that it is not sentient, and that it has a very different fundamental structure from that of biological consciousness.

Data point: Proportion of world population who are literate: 99.3%

2038

Rejuvenation therapies are in wide use around the world. “Eighty is the new fifty”. First person reaches the age of 130.

Improvements made by AGI upon itself effectively raise its IQ a hundredfold, taking it far beyond the comprehension of human observers. However, the AGI provides explanatory educational material that allows people to understand vast new sets of ideas.

Data point: Proportion of world population who consider themselves opposed to AGI: 0.1%

2039

An extensive set of “vital training” sessions has been established by the AGI, with all citizens over the age of ten participating for a minimum of seven hours per day on 72 days each year, to ensure that humans develop and maintain key survival skills.

Menopause reversal is commonplace. Women who had long ago given up any idea of bearing another child happily embrace motherhood again.

Data point: Proportion of world population regularly taking powerful anti-aging medications: 99.2%

2040

The use of “mind phones” is widespread: new brain-computer interfaces that allow communication between people by mental thought alone.

People regularly opt to have several of their original biological organs replaced by synthetic alternatives that are more efficient, more durable, and more reliable.

Data point: Proportion of people of working age in US who are not working and who are not looking for a job: 96%

2041

Shared immersive virtual reality experiences include hyper-realistic simulations of long-dead individuals – including musicians, politicians, royalty, saints, and founders of religions.

The number of miles of journey undertaken by small “flying cars” exceeds that of ground-based powered transport.

Data point: Proportion of world population living in countries that are “full democracies” as assessed by the Economist: 100.0%

2042

First successful revival of mammal from cryopreservation.

AGI presents a proof of the possibility of time travel, but the resources required for the safe transit of humans through time would be equivalent to building a Dyson sphere around the sun.

Data point: Proportion of world population experiencing mental illness or dissatisfied with the quality of their mental health: 0.4%

2043

First person reaches the age of 135, and declares herself to be healthier than at any time in the preceding four decades.

As a result of virtual reality encounters with avatars of founders of religions, a number of new systems of philosophical and mystical thinking grow in popularity.

Data point: Proportion of world’s energy provided by earth-based nuclear fusion: 75%

2044

First human baby born from an ectogenetic pregnancy.

Family holidays on the Moon are an increasingly common occurrence.

Data point: Average amount of their waking time that people spend in a metaverse: 38%

2045

First revival of human from cryopreservation – someone who had been cryopreserved ten years previously.

Subtle messages decoded by AGI from far distant stars in the galaxy confirm that other intelligent civilisations exist, and are on their way to reveal themselves to humanity.

Data point: Number of people killed in violent incidents around the world: 59

Postscript

My thanks go to the competition organisers, the Future of Life Institute, for providing the inspiration for the creation of the above timeline.

Readers are likely to have questions in their minds as they browse the timeline above. More details of the reasoning behind the scenarios involved are contained in three follow-up posts:

22 February 2022

Nine technoprogressive proposals

Filed under: Events, futurist, vision — David Wood @ 11:30 pm

Ahead of time, I wasn’t sure the format was going to work.

It seemed to be an ambitious agenda. Twenty-five speakers were signed up to deliver short presentations. Each had agreed to limit their remarks to just four minutes. The occasion was an International Technoprogressive Conference that took place earlier today (22nd February), with themes including:

  • “To be human, today and tomorrow”
  • “Converging visions from many horizons”.

Image credit: this graphic includes work by Pixabay user Sasin Tipchai

Each speaker had responded to a call to cover in their remarks either or both of the following:

  • Provide a brief summary of transhumanist-related activity in which they are involved
  • Make a proposal about “a concrete idea that could inspire positive and future-oriented people or organisations”.

Their proposals could address, for example, AI, enhancing human nature, equity and justice, accelerating science, existential risks, the Singularity, social and political angles, the governance of technology, superlongevity, superhappiness, or sustainable superabundance.

The speakers who provided concrete proposals were asked, ahead of the conference, to write down their proposal in 200 words or less, for distribution in a document to be shared among all attendees.

Attendees at the event – speakers and non-speakers alike – were asked to provide feedback on the proposals that had been presented, and to cast up to five votes among the different proposals.

I wondered whether we were trying to do too much, especially given the short amount of time spent in preparing for the event.

Happily, it all went pretty smoothly. A few speakers recorded videos of their remarks in advance, to be sure to keep to the allotted timespan. A small number of others were in the end unable to take part on the day, on account of last-minute schedule conflicts.

As for the presentations themselves, they were diverse – exactly as had been hoped by the organisers (l’Association Française Transhumaniste (Technoprog), with some support from London Futurists).

For example, I found it particularly interesting to hear about perspectives on transhumanism from Cameroon and Japan.

Reflecting the quality of all the presentations, audience votes were spread widely. Comments made by voters again and again stressed the difficulty in each picking just five proposals to be prioritised. Nevertheless, audience members accepted the challenge. Some people gave one vote each to five different proposals. Others split them 2, 2, and 1, or in other combinations. One person gave all their five votes to a single proposal.

As for the outcome of the voting: I’m appending the text of the nine proposals that received the most votes. You’ll notice a number of common ideas, along with significant variety.

I’m presenting these nine proposals in alphabetical order of the first name of the proposers. I hope you find them interesting. If you find yourself inspired by what you read, please don’t hesitate to offer your own support to the projects described.

PS Big thanks are due to everyone who made this conference possible, especially the co-organisers, Didier Coeurnelle and Marc Roux.

Longevity: Opportunities and Challenges

Proposed by Anastasiia Velikanova, project coordinator at Open Longevity

Why haven’t we achieved significant progress in the longevity field yet? Although about 17,000 biological articles with the word “aging” in the title are published yearly, we do not have any therapy that reliably prolongs life.

One reason is that there are no large-scale projects in the biology of aging, such as the Human Genome Project or the Large Hadron Collider. All research is conducted separately in academic institutions or startups and is mostly closed. A company may start with a great idea, but it hides its investigations, and the capabilities of its team alone are not enough to globally change the situation with aging.

Another reason is that the problem of aging is highly interdisciplinary. We need advanced mathematical models and AI algorithms to accumulate all research about molecular processes and identify critical genes or targets.

Most importantly, we, transhumanists, should unite and create an infrastructure that would allow solving the problem of aging on a large scale, attracting the best specialists from different fields. 

An essential part of such an infrastructure is open databases. For example, our organization created Open Genes – the database of genes associated with aging, allowing the selection of combinatorial therapy against aging.

Vital Syllabus

Proposed by David Wood, Chair at London Futurists

Nearly every serious discussion about improving the future comes round to the need to improve education. In our age of multiple pressures, dizzying opportunities, daunting risks, and accelerating disruption, people in all walks of life need better access to information about the skills that are most important and the principles that matter most. Traditional education falls far short on these counts.

The Vital Syllabus project aims to collect and curate resources to assist students of all ages to acquire and deepen these skills, and to understand and embody the associated principles. To be included in the project, these resources must be free of charge, clear, engaging, and trustworthy – and must align with a transhumanist understanding.

A framework is already in place: 24 top-level syllabus areas, nearly 200 subareas, and an initial set of example videos. Please join this project to help fill out the syllabus quickly!

For information about how to help this project, see this FAQ page.

Longevity Art

Proposed by Elena Milova, Founder at LongevityArt

When we are discussing life extension, people most often refer to movies, animations, books, paintings, and other works of art. They find there the concepts and the role models that they can either follow or reject. Art has the potential to seed the ideas in one’s mind that can then gradually grow and mature until they become part of the personal life philosophy. Also, since one function of art is to uncover, question, mock and challenge the status quo, art is one of the most appropriate media for spreading new ideas such as that of radical life extension.

I suggest that the community supports more art projects (movies, animations, books, paintings, digital artworks) by establishing foundations sponsoring the most valuable art projects.

Use longevity parties to do advocacy for more anti-aging research

Proposed by Felix Werth, Leader at Partei für Gesundheitsforschung

With the repair approach, we already know in principle how to defeat aging. To significantly increase our chance of being alive and healthy in 100 years, much more resources have to be put into the implementation of the repair approach. An efficient way to achieve this is to form single-issue longevity parties and run in elections. There are many people who would like to live longer, but for some reason don’t do anything about it. Running in elections can be very efficient advocacy and gives people the option to very easily support longevity research with their vote. If the governing parties see that they can get more votes with this issue, they will probably care about it more.

In 2015 I initiated a longevity party in Germany and since then, we have participated in 14 elections already and did a lot of advocacy, all this with very few active members and very few resources. With a little more resources, much more advocacy could be done this way. I suggest that more people, who want radical life extension in their lifetime, form longevity parties in their country and run in elections. Growing the longevity movement faster is key to success.

Revive LEV: The Game on Life Extension

Proposed by Gennady Stolyarov, Chair at U.S. Transhumanist Party

I propose to resurrect a computer game on longevity escape velocity, LEV: The Game, which was previously attempted in 2014 and for which a working Alpha version had been created but had unfortunately been lost since that time.

In this game one plays the role of a character who, through various lifestyle choices and pursuit of rejuvenation treatments, strives to live to age 200. The U.S. Transhumanist Party has obtained the rights to continue game development as well as the previously developed graphical assets. The logic of the game has been redesigned to be turn-based; all that remains is to recruit the programming talent needed to implement the logic of the game into code. A game on longevity escape velocity can draw in a much larger audience to take interest in the life-extension movement and also illustrate how LEV will likely actually arrive – dispelling common misunderstandings and enabling more people to readily understand the transition to indefinite lifespans.

Implement optimization and planning for your organization

Proposed by Ilia Stambler, Chair at Israeli Longevity Alliance

Often progressive, transhumanist and/or life-extensionist groups and associations are inefficient as organizations – they lack a clear and agreed vision, concrete goals and plans for the organization’s advancement, and a clear estimate of the available as well as desirable human and material resources necessary to achieve those goals and plans, and they do not track progress, performance and achievements toward the implementation of those goals. As a result, many groups are acting rather as discussion clubs at best, instead of active and productive organizations, drifting aimlessly along occasional activities, and so they can hardly be expected to bring about significant directional positive changes for the future.

Hence the general suggestion is to build up one’s own organizations through organizational optimization, to plan concretely, not so much in terms of what the organization “should do”, but rather what its specific members actually can and plan to do in the shorter and longer term. I believe, through increasing the planning efficiency and the organizational optimization for the existing and emerging organizations, a much stronger impact can be made. (The suggestion is general, but particular organizations may see whether it may apply to them and act according to their particular circumstances.)

Campaign for the Longevity Dividend

Proposed by James Hughes, Executive Director at the IEET

The most popular goal of the technoprogressive and futurist community is universal access to safe and effective longevity therapies. There are three things our community can do to advance this agenda:

  1. First, we need to engage with demographic, medical and policy issues that surround longevity therapies, from the old-age dependency ratio and pension crisis to biomarkers of aging and defining aging as a disease process.
  2. Second, we need to directly argue for public financing of research, a rational clinical trial pathway, and access to these therapies through public health insurance.
  3. Third, we need to identify the existing organizations with similar or related goals, and establish coalitions with them to work for the necessary legislation.

These projects can build on existing efforts, such as International Longevity Alliance, Ending Aging Media Response and the Global Healthspan Policy Institute.

Prioritise moral enhancement

Proposed by Marc Roux, Chair at the French Transhumanist Association (AFT-Technoprog)

As our efforts to attract funding and researchers to longevity have begun to bear fruit, we need to do much more to popularise moral enhancement.

Ageing is not defeated. However, longevity has already found powerful relays in the decision-making spheres. Mentalities are slowly changing, but the battle for longevity is underway.

Our vanguard can begin to turn to another great goal.

Longevity will not be enough to improve the level of happiness and harmony of our societies. History has shown that it doesn’t change the predisposition of humans to dominance, xenophobia, aggressiveness … They remain stuck in their prehistoric gangue, which condemns them to repeat the same mistakes. If we don’t allow humans to change these behavioural predeterminations, nothing essential will change.

We must prioritise cognitive sciences, and ensure that this is done in the direction of greater choice for everyone, access for all to an improvement in their mental condition, and an orientation towards greater solidarity.

And we’ll work to prevent cognitive sciences from continuing to be put at the service of liberticidal control and domination logics.

On this condition, moral enhancement can be an unprecedented good in the history of humanity.

Transhumanist Studies: Knowledge Accelerator

Proposed by Natasha Vita-More, Executive Director at Humanity+

An education is a crucial asset. Providing lifelong learning that is immediate, accessible and continually updating is key. Transhumanist Studies is an education platform designed to expand knowledge about how the world is transforming. Its Knowledge Accelerator curriculum examines the field of longevity, facts on aging and advances in AI, nanomedicine and cryonics, critical and creative thinking, relationships between humanity and ecosystems of earth and space, ethics of fairness, and applied foresight concerning opportunities and risks on the horizon.

Our methodology is applied foresight with a learning model that offers three methods in its 50-25-25 curricula:

  1. 50% immersive learning environment (lectures, presentations, and resources);
  2. 25% project-based iterative study; and
  3. 25% open-form discussion and debate (aligned with a Weekly Studies Group and monthly H+ Academy Roundtable).

In its initiative to advance transhumanism, the Knowledge Accelerator supports the benefits of secular values and impartiality. With a team located across continents, the program is free for some and at a low cost for others. As the scope of transhumanism continues to grow, the culture is as extraordinary as its advocacy, integrity, and long-term vision.

Homepage | Transhumanist Studies (teachable.com) (I spoke on the need for education at TransVision 2021.)

7 February 2022

Options for controlling artificial superintelligence

What are the best options for controlling artificial superintelligence?

Should we confine it in some kind of box (or simulation), to prevent it from roaming freely over the Internet?

Should we hard-wire into its programming a deep respect for humanity?

Should we prevent it from having any sense of agency or ambition?

Should we ensure that, before it takes any action, it always double-checks its plans with human overseers?

Should we create dedicated “narrow” intelligence monitoring systems, to keep a vigilant eye on it?

Should we build in a self-destruct mechanism, just in case it stops responding to human requests?

Should we insist that it shares its greater intelligence with its human overseers (in effect turning them into cyborgs), to avoid humanity being left behind?

More drastically, should we simply prevent any such systems from coming into existence, by forbidding any research that could lead to artificial superintelligence?

Alternatively, should we give up on any attempt at control, and trust that the superintelligence will be thoughtful enough to always “do the right thing”?

Or is there a better solution?

If you have clear views on this question, I’d like to hear from you.

I’m looking for speakers for a forthcoming London Futurists online webinar dedicated to this topic.

I envision three speakers each taking up to 15 minutes to set out their proposals. Once all the proposals are on the table, the real discussion will begin – with the speakers interacting with each other, and responding to questions raised by the live audience.

The date for this event remains to be determined. I will find a date that is suitable for the speakers who have the most interesting ideas to present.

As I said, please get in touch if you have questions or suggestions about this event.

Image credit: the above graphic includes work by Pixabay user Geralt.

PS For some background, here’s a video recording of the London Futurists event from last Saturday, in which Roman Yampolskiy gave several reasons why control of artificial superintelligence will be deeply difficult.

For other useful background material, see the videos on the Singularity page of the Vital Syllabus project.

13 January 2022

Measuring – and forecasting – the health of the planet

An invitation to join a Millennium Project Delphi Study on the State of the Future

Assessments of individual health have been on my mind a lot recently, as I’ve observed my own physical body display less resilience in the face of stress than was the case when I was younger. (See my previous two blog posts, here and here, for the gory details.)

But alongside questions about the health of individuals, a larger set of questions looms. How is the health of global society as a whole? Are we headed toward major reversals, which could knock us collectively off course, akin to how diseases such as Covid-19 have intruded, often horribly, on individual lives?

Indeed, in any such assessment of the overall health of global society, what should we be measuring? Which factors are “symptoms” and which are closer to being “root causes”?

The Millennium Project has been addressing that subject on a regular basis since its formation in 1996. It regularly publishes updates on what it calls “The 15 global challenges” and, in a wider survey, “The State of the Future”.

What distinguishes the Millennium Project analysis from various other broadly similar enquiries is the “Delphi” method it uses to reach its conclusions. This involves an iterative online interaction between members of an extended community, who are asked their opinions on a number of questions, with the option for participants to revise their opinions if they read input from other respondents that brings new considerations to their mind.
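The round-by-round mechanics of a Delphi study can be sketched in a few lines of Python. This is purely illustrative – the participant names and figures below are invented, and the real Millennium Project tool is far richer – but it shows the core loop: collect estimates, feed a group summary back to everyone, and let respondents revise.

```python
import statistics

def delphi_round(estimates):
    """Summarise one round of a Delphi survey.

    Each participant submits a numeric estimate; the group median and
    range are then fed back to all participants before the next round.
    """
    values = list(estimates.values())
    return {
        "median": statistics.median(values),
        "low": min(values),
        "high": max(values),
    }

# Hypothetical first-round estimates for one metric
# (say, "life expectancy at birth in 2030, in years"):
round1 = {"alice": 74.0, "bob": 78.0, "carol": 76.0}
feedback = delphi_round(round1)

# After seeing the feedback and other participants' comments,
# respondents may revise their estimates for the next round:
round2 = {"alice": 75.0, "bob": 77.0, "carol": 76.0}

print(feedback)              # summary shared after round 1
print(delphi_round(round2))  # the spread typically narrows
```

The characteristic Delphi outcome is visible in the second round: the median is stable while the high–low spread shrinks, as participants converge after reading each other's reasoning.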

The reason I’m mentioning this now is that a new Delphi survey is just starting, and there’s scope for a number of my acquaintances to take part. (Dear Reader: That includes you.)

This survey is being structured differently from previous years, and is using a new tool. Participants will be asked to offer estimates on 29 metrics for the year 2030 – including the best and worst potential value the indicator might have in 2030. You’ll also be asked which of the metrics are the most important (and which are the least).

To help you provide answers, the system already contains data points stretching several decades into the past.

The metrics include:

  • Income inequality (income share held by highest 10%)
  • Unemployment (% of total labour force)
  • Life expectancy at birth (years)
  • Physicians (per 1,000 people)
  • Literacy rate, adult total (% of people ages 15 and above)
  • People using safely managed drinking water services (% of population)
  • CO2-equivalent concentration in the atmosphere (ppm)
  • Energy efficiency (GDP per unit of energy use)
  • Electricity production from renewable sources (% of total)
  • Individuals using the Internet (% of population)
  • Proportion of seats held by women in national parliaments (% of members)
  • Number of conflicts between different states
  • Refugee population

You won’t have to answer all the questions. Instead, you can direct your attention to the questions where you feel you have some particular insight. You can browse the other questions at a later time. And, as mentioned, you can revisit some of your earlier answers once you see comments made by other participants. Indeed, it is in the interaction between different comments that the greatest insight is likely to arise.

If you think you’d like to take part, please get in touch with me. Note that the Millennium Project will give priority to people with the following roles: professional futurists, scientists (including social scientists as well as natural scientists), policymakers, science and technology experts, advisors to government or business, members of NGOs, UN liaison, and professional consultants.

The Delphi questionnaire will remain open until 31 January 2022. The findings of the questionnaire will feature in a London Futurists event later in the year.

18 December 2021

My encounter with a four-armed robot

Filed under: aging, healthcare, robots — David Wood @ 8:45 pm

I didn’t actually see the robot. My mind had already been switched off, by anaesthetists, ahead of my bed being wheeled into the operating theatre. It was probably just as well.

Image source: HCA Healthcare

Later, when my mind had restarted, and I was lying in recovery in my hospital ward, I checked. Yes, there were six small plasters on my abdomen, covering six small wounds (“ports”) that the urology surgeon had told me he would create in order for the da Vinci robot to work its magic.

The point of the operation was to remove the central core of my prostate – an organ that sits toward the back of the body and is difficult to access.

The prostate wraps around the urethra – the channel through which urine flows from the bladder into the penis. The typical size of a prostate for a man aged twenty is around 20 ml. By age sixty this may have doubled. The larger the prostate, the greater the chance of interference with normal urine flow. In my own case, I had experienced various episodes over the last ten years when urination was intermittently difficult, but matters always seemed to right themselves after a few days. Then at the beginning of September, I found I couldn’t pass any urine. What made matters more complicated was that I was away from home at the time, on a short golfing holiday in Wiltshire. The golf was unusually good, but my jammed-up bladder felt awful.

Following an anxious call to the NHS 111 service, I was admitted to the Royal United Hospital in Bath where, after a couple of false starts, an indwelling catheter was inserted through my urethra. Urine gushed out. I felt relieved as never before.

In a way, that was the easy bit. The harder question was what long-term approach to take.

A six-week trial of a muscle-relaxant drug called Tamsulosin had no impact on my ability to pass urine unaided. Measuring the size of my prostate via a transrectal ultrasound procedure clarified options: it was a whopping 121 ml.

The radiologist said “This is not the largest prostate I have ever seen”, but it was clear my condition was well outside the usual range. Not only were changes in medication or diet very unlikely to produce a long-term solution for me, but most of the more standard prostate operations (there is a large family of possibilities, as I discovered) would not be suitable for a prostate as large as mine. The risks of adverse side-effects would be too great, as would the chance of prostate pressure recurring in the years to come.

That led my consultant to recommend what is called a robotic-assisted simple prostatectomy. The “simple” is in contrast to the “radical” option often recommended for men suffering from prostate cancer. In a simple prostatectomy, the outer part of the prostate remains in place, along with nerve and other connections.

Over several hours, whilst my mind was deanimated, the robotic arms responded to the commands issued by the human surgeon. Some of the ports were used to introduce gas (CO2) into my abdomen, to inflate it, creating room for the robotic arms to move. Some ports supported illumination and cameras. And the others channelled various cutting and reconstruction tools. By the end, some 85% of my prostate had been removed.

It might sound cool for a technology advocate like myself to receive an operation from a high-precision robot. But in reality, it was still a miserable experience, despite the high-calibre professional support from medical staff. The CO2 left parts of my body unexpectedly swollen and painful. And as time passed, other swellings known as oedemas emerged – apparently due to accumulated fluid.

I learned the hard way that I needed to take things slowly and gently as I recovered. In retrospect, it was a mistake for me to walk too far too soon, and to take part in lengthy Zoom calls. My sleep suffered as a result, with shivering, sweating, coughing fits, and even one black-out when I went to the bathroom and felt myself about to pass out. I had the presence of mind to lower my head quickly before the lights went out altogether. I came to my senses a few moments later, with my upper torso sprawled in the bath, and my lower body hanging over the edge. Thank goodness no serious damage ensued from that mini collapse. The only good outcome that night was when I took a Covid test (because of the coughing) and it came out negative.

Ten days later, things are closer to normal again. It’s wonderful that my internal plumbing works smoothly again, under my control. But I’m still being cautious about how much I take on at any time.

(If you’re waiting for me to reply to various emails, I’ll get round to them eventually…)

More good news: tests on the material removed from my body have confirmed that the growth was “benign” rather than cancerous. My wounds are healing quickly, and I am almost weaned off painkillers.

I have no regrets about choosing this particular surgical option. It was a good decision. Hopefully I’ll be playing golf again some time in January. I am already strolling down some of the fairways at Burhill Golf Club, carrying a single club in my hand – a putter. I drop a golf ball when I reach the green. Sometimes I knock the ball in the hole in two putts, or even just one. And sometimes it’s three putts, or even more. But the fresh air and gentle exercise is wonderful, regardless of the number of putts.

The bigger lesson for me is a message I often include in my presentations: prevention is better than cure. A stitch in time saves nine.

Earlier attention to my enlarging prostate – either by a change of diet, or by taking medicines regularly – may well have avoided all the unpleasantness and cost of the last few months.

As for the prostate, so also for many other parts of the body.

This year, I’ve been thinking more and more about the good health of the mind and the brain. With my reduced mobility over the last few months, I’ve had time to catch up with some reading about brain rewiring, mental agility and reprogramming, the role and content of consciousness, and ways in which people have recovered from Alzheimer’s.

Once again, the message is that prevention is better than cure.

If you’re interested in any of these topics, here’s an image of some books I have particularly enjoyed.
