dw2

3 September 2021

Aging, slowing down, becoming a cyborg

Here’s a personal note. I’ve had to change quite a few of my plans, due to an unexpected medical issue.

(It’s nothing to do with Covid. The details are below, for readers with a stomach for indelicate topics.)

That issue completely disrupted my activities yesterday and the day before, and it is likely to cause further disruptions in the weeks and months ahead – depending on how my body responds to various treatments.

In any case, I’m going to have to slow down a bit. I may need to cancel some of my provisional travel plans, and spend less time in front of screens and keyboards.

Please accept my apologies in advance if you’re waiting to hear from me about something, and I seem to be unduly slow in responding.

I said my medical issue was “unexpected”, but that’s not the whole story.

I’ve known for some time that potential danger was building up in my body.

It’s an aspect of aging. Our bodies perform remarkably well while we’re in our youth, but over time, various sorts of damage and dysfunction start to build up.

In early years, that damage doesn’t matter much. The body is healthy enough to carry out repairs, and to produce workarounds to compensate for the decline in performance.

Eventually, however, the dysfunction becomes too severe, and results in greater amounts of harm, disease, frailty, and (in due course) death.

That’s why, for example, human mortality (along with the mortality of many other species) accelerates exponentially over time.

If you analyse the data from the UK’s National Life Tables for the chance of dying at any particular age, you’ll find the following:

  • A ten year old has only one chance in around 10,000 of dying before their next birthday
  • A 35 year old has one chance in around 1,000 of dying before their next birthday
  • A 60 year old has one chance in around 100 of dying before their next birthday
  • An 85 year old has one chance in around 10 of dying before their next birthday.

(I did that particular analysis a few years ago. An analysis of the most recent life tables data may show slight differences.)

You’ll spot the pattern.

The pattern isn’t exact. (Otherwise no 110 year old would ever reach the age of 111 – which is what an extrapolation of the previous figures would suggest.)

But it holds to a first approximation. It was first stated in 1825 by London-based actuary and mathematician Benjamin Gompertz, and is sometimes expressed as follows: After the age of around 35, human mortality doubles every eight years.
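
To make that trend concrete, here’s a minimal Python sketch of the Gompertz curve. The parameter values are illustrative assumptions on my part (an eight-year doubling time, anchored at a 1-in-1,000 annual risk at age 35), rather than figures fitted to the actual life tables, but they reproduce the broad shape of the list above:

```python
import math

# Toy Gompertz model: the annual risk of dying grows exponentially with age.
# The parameters below are illustrative assumptions (doubling time of eight
# years, anchored at roughly 1-in-1,000 at age 35); they are not fitted to
# the actual National Life Tables.
DOUBLING_TIME_YEARS = 8.0
B = math.log(2) / DOUBLING_TIME_YEARS   # exponential growth rate per year of age
A = 0.001 / math.exp(B * 35)            # anchor the curve at ~1-in-1,000 at age 35

def annual_mortality(age):
    """Estimated probability of dying before the next birthday, per the toy model."""
    return min(1.0, A * math.exp(B * age))

for age in (10, 35, 60, 85, 110):
    odds = round(1 / annual_mortality(age))
    print(f"Age {age:3d}: roughly 1 chance in {odds} of dying within a year")
```

Run as-is, it prints odds broadly in line with the figures above (and roughly 1-in-2 at age 110, illustrating why the naive extrapolation shouldn’t be taken literally at the oldest ages).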

And it’s plausible that what underlies this observed trend is a gradual increase in damage throughout the biological structures of the body – including damage in those aspects of our biology responsible for repair and regeneration.

That’s the general pattern. One specific example involves the prostate gland. Over time, in some men, the prostate grows and grows, to the extent that it constricts the urethra which passes through it. That constriction slows the flow of urine from the bladder to the outside world.

(As I said, this is an indelicate subject. But it can in some cases become a matter of life and death.)

And that’s what has happened to me.

I’ve known for some time that my prostate had grown large, and was interfering with my “plumbing”.

I now regret that I didn’t pay more attention to that growing risk. I was too easily reassured by observing that the problem seemed to wax and wane. I remember hearing that, for many people, the issue remains tolerable throughout their life. Indeed, the NHS webpage on the topic starts as follows (my emphasis):

Benign prostate enlargement (BPE) is the medical term to describe an enlarged prostate, a condition that can affect how you pee (urinate).

BPE is common in men aged over 50. It’s not a cancer and it’s not usually a serious threat to health.

I knew there were medicines that might help, such as Tamsulosin (brand name “Flomax Relief”) – but that they had side-effects.

So I gave the matter little attention.

But two days ago, my problems passing urine suddenly became a lot worse. I had a constant desire to “go”, but an inability to produce more than the tiniest trickle (after a lot of, err, stressing and straining).

To complicate matters, I was away from home. With my wife and two other couples, I was meant to be enjoying a three day golfing holiday in the picturesque Wiltshire countryside.

Yesterday morning, having failed to reach my own GPs online or by phone, I called the NHS 111 service. To cut to the chase, I was advised to get to a hospital as soon as possible. They made an appointment for me at a hospital in Bath, about a 30-minute car journey away. And soon after that, I was being examined by an excellent team of NHS staff.

When someone’s bladder is full, it’s normally around 400 to 600 ml in volume. Ultrasound scans showed there was around 900 ml of urine in my bladder. No wonder I was feeling so uncomfortable.

I hadn’t expected to be in hospital that day, but thank goodness I was there.

I’ll skip over all the phases of analysis and treatment, and just mention that I am now slightly more of a cyborg than before. I’ve had a cleverly engineered piece of plastic inserted into my body, allowing me to drain my bladder at will, using a tap at the end of a protruding tube. It’s called an indwelling catheter.

It’s most likely only a temporary solution, until my response to Tamsulosin (the drug mentioned earlier) is assessed.

For the time being, my mobility is restricted, until I get used to this new attachment.

And my mind is, how to put it, rather shaken at the turn of events.

But things could have been a great deal worse. I’m deeply grateful for the rapid, painstaking response of the dozen or so members of staff at the Royal United Hospital Bath who took such good care of me.

In moments of lucidity during these hours, I reflected on how much we all depend on each other. Rugged individualism only goes so far.

In the meantime, I’ll move forward with at least some of my projects, including the online London Futurists events already scheduled. They include one on (guess what?) aging, in two weeks’ time, and one on “Cryptocurrencies for profound good?” taking place tomorrow.

Opening image credit: Wolfgang Eckert from Pixabay.

2 August 2021

Follow-ups from the future of Transhumanist Studies

Last Saturday’s London Futurists event experimented with the format.

After the by-now usual 90 minutes of speaker presentation and moderated Q&A, and a five-minute comfort break, the event transitioned into a new phase with informal on-camera audience discussion. Audience members who stayed on for this part of the meeting were all transformed from webinar viewers into panellists, and invited to add their voices into the discussion. Questions to seed the discussion were:

  • What did you particularly like about what you have heard?
  • What would you like to add into the discussion?
  • What might you suggest as a follow-up after the event?

The topic for the event as a whole was “The Future of Transhumanist Studies”. The speaker was Natasha Vita-More, the executive director of Humanity+. Natasha kindly agreed to stay on for the informal phase of the event and provided more insight in that phase too.

I’m appending, below, a copy of the video recording of the main part of the event. What I want to share now are my personal take-aways from the informal discussion phase. (That part wasn’t recorded, but I took notes.)

1. The importance of increments

Transhumanism has a vision of a significantly better future for humanity.

To be clear, it’s not a vision of some kind of perfection – some imagined state in which no change ever happens. Instead, it’s a vision of an open, dynamic journey forward. Max More has written eloquently about that point on many occasions over the years. See in particular the Principles of Extropy (v3.11) from 2003. Or this short summary from the chapter “True Transhumanism” in the 2011 book H+/-: Transhumanism and Its Critics:

Transhumanism is about continual improvement, not perfection or paradise.

Transhumanism is about improving nature’s mindless “design”, not guaranteeing perfect technological solutions.

Transhumanism is about morphological freedom, not mechanizing the body.

Transhumanism is about trying to shape fundamentally better futures, not predicting specific futures.

Transhumanism is about critical rationalism, not omniscient reason.

What arose during the discussion on Saturday were questions about possible incremental next steps along that envisioned journey.

In part, these were questions about what science and technology might be able to deliver in the next 2, 5, 10 years, and so on. It’s important to be able to speak in a credible manner about these possible developments, and to offer evidence supporting these forecasts.

But there were also questions about specific actions that transhumanists might be able to take in the coming months and years to improve public awareness of key transhumanist ideas.

One panellist posed the question as follows:

What are the immediate logical next steps across the Transhumanist agenda that could [achieve wider impact]?

The comment continued:

The problem I see with roadmaps generally… is that people always look at the end of the roadmap and think about the end point, not the incremental journey… People start planning around the final slide/item on the roadmap instead of buying into the bits in between while expecting everyone else to do the work to get us there. That usually results in people not buying the incremental steps which of course stifles progress.

That thought resonated with other participants. One added:

This is a crucial idea. A sense of urgency is hard to engender in long term issues.

I am reminded of the excellent analysis by Harvard Business School Professor John Kotter. Kotter has probably done more than anyone else to understand why change initiatives frequently fail – even when the people involved in these initiatives have many admirable qualities. Here are the eight reasons he identifies for change initiatives failing:

  1. Lack of a sufficient sense of urgency;
  2. Lack of an effective guiding coalition for the change (an aligned team with the ability to make things happen);
  3. Lack of a clear appealing vision of the outcome of the change (otherwise it may seem too vague, having too many unanswered questions);
  4. Lack of communication for buy-in, keeping the change in people’s mind (otherwise people will be distracted back to other issues);
  5. Lack of empowerment of the people who can implement the change (lack of skills, wrong organisational structure, wrong incentives, cumbersome bureaucracy);
  6. Lack of celebration of small early wins (failure to establish momentum);
  7. Lack of follow through (it may need wave after wave of change to stick);
  8. Lack of embedding the change at the cultural level (otherwise the next round of management reorgs can unravel the progress made).

Kotter’s positive suggestions for avoiding these failures can be summed up in a slide I’ve used in various forms many times in my presentations over the years.

That brings me back to the topic of incremental change – envisioning it, communicating it, enabling it, and celebrating it. If that’s not done, any sense of urgency and momentum behind a change initiative is likely to falter and stall.

That’s why a credible roadmap of potential incremental changes is such an important tool.

Watch out for more news on that front soon.

2. Transhumanism becoming mainstream

Here’s another line of discussion from the informal conversation at the end of Saturday’s event.

Many members of the public, if they know about transhumanism at all, tend to see it as otherworldly. It’s the subject of science fiction, or something that might appear in eccentric video games. But it’s not something relevant to the real world any time soon.

Or they might think of transhumanism as something for academics to debate, using abstract terminology such as post-modernism, post-humanism, and (yes) trans-humanism. Again, not something with any real-world implications.

To transhumanists, on the other hand, the subject is highly relevant. It’s relevant to the lives of individuals, as it covers treatments and methods that can be applied, here and now, to improve our wellbeing – physically, rationally, emotionally, and socially. It can also provide an uplifting vision that transforms our understanding of our own personal role in steering a forthcoming mega-disruption.

Moreover, transhumanism is relevant to the real-world problems that, understandably, cause a great deal of concern – problems about the environment, social interactions, economics and politics, and the runaway adoption of technology.

As Albert Einstein said in 1946, “a new type of thinking is essential if mankind is to survive and move to higher levels”.

My own view is that transhumanism is the “new kind of thinking” that is, indeed, “essential” if we are to avoid the many dangerous landmines into which humanity currently risks sleepwalking.

That’s a core message of my recent book Vital Foresight: The Case For Active Transhumanism.

In that book, I emphasise that transhumanism isn’t some otherworldly idea in search of a question to answer. Instead, I introduce transhumanism as the solution to what I describe as eleven “landmines”.

Snippets of ideas about transhumanism are included in the early chapters of my book, but it’s not until Chapter 11 that I introduce the subject properly. That was a deliberate choice. I want to be clear that transhumanism can be seen as the emerging mainstream response to real-world issues and opportunities.

3. Academics who write about transhumanism

In some parts of the world, there are more people who study and write about transhumanism than who actively support transhumanist projects. That was another topic at the end of Saturday’s London Futurists event.

From my own reading, I recognise some of that academic work as being of high quality. For example, see the research of Professor Stefan Lorenz Sorgner from the History and Humanities department at John Cabot University in Rome. Sorgner featured in a London Futurists webinar a few months ago.

Another example of fine academic research into transhumanism is the 2018 PhD thesis of Elise Bohan of Macquarie University, Sydney, Australia: A History of Transhumanism.

On the other hand, there’s also a considerable amount of academic writing on transhumanism that is, frankly, of a shockingly poor quality. I stepped through some of that writing while preparing Chapter 12 of Vital Foresight – the chapter (“Antitheses”) where I evaluate criticisms of transhumanism.

What these critics often do is to imagine their own fantasy version of transhumanism, and then criticise it, with little anchoring to the actual transhumanist community. That is, they criticise “straw men” distortions of transhumanism.

In some cases, these critics latch onto individual statements of people loosely connected with transhumanism – for example, statements by the fictional character Jethro Knights in the novel The Transhumanist Wager – and wrongly assume that these statements are authoritative for the entire movement. (See here for my own review of The Transhumanist Wager.)

These critics often assert: “What transhumanists fail to consider is…” or “Transhumanists never raise the question that…” whereas, in fact, these very questions have been reviewed in depth, many times over, in transhumanist discussion lists.

From time to time, critics of transhumanism do raise some good points. I acknowledge a number of examples throughout Vital Foresight. What I want to consider now are the questions that were raised on Saturday:

  1. How can transhumanists keep on top of the seemingly growing number of academic articles about us?
  2. What is the best way to respond to the misunderstandings and distortions that we notice?
  3. As a good use for our time, how do interactions with these academics compare with trying to share transhumanist messages with more mainstream audiences?

To answer the third question first: ideas matter. Ideas can spread from initially obscure academic settings into wider contexts. Keeping an eye on these discussions could help us to address issues early.

Moreover, what we can surely find, in amongst the range of academic work that addresses transhumanism, are some really good expressions and thoughts that deserve prominence and attention. These thoughts might also cause us to have some “aha” realisations – about things we could, or should, start to do differently.

Flipping to the first question: many hands make light work. Rather than relying on a single person to review all academic mentions of transhumanism, more of us should become involved in that task.

When we find an article that deserves more attention – whether criticism or praise – we can add it into pages on H+Pedia (creating new pages if necessary).

The main event

Now you’ve read the afterthoughts, here’s a recording of the event itself. Enjoy!

27 May 2021

Twenty key themes in Vital Foresight

Filed under: Vital Foresight — David Wood @ 10:32 am

The scenarios that lie ahead for humanity – whether global destruction or sustainable superabundance – involve rich interactions of multiple streams of thought and activity. There’s a lot we need to get our heads around, including disruptions in technology, health, culture, economics, politics, education, and philosophy. Cutting corners on understanding any one of these streams could yield a seriously misleading picture of our options for the future. If we skimp on our analysis of future possibilities, we should not be surprised if humanity falls far short of our true potential.

That’s an extract from the Preface to my forthcoming new book Vital Foresight.

The book covers lots of ground that you won’t find anywhere else. Here are twenty examples:

(1) “A little foresight is a dangerous thing” – why many exercises in predicting the future end up making the future worse, rather than better.

(2) Insights from examples of seemingly bad foresight – what we can learn from looking more closely at past mis-forecasts of famines, plagues, climate change, fast progress with AI, war and peace, and terrorism.

(3) The eleven “landmines” (and “meta-landmines”) that pose the most threat of extensive damage to human civilisation. And how to avoid detonating any of them.

(4) “Shortsight” – The eight ways in which evolution has prepared us poorly to anticipate, evaluate, and steer the existential risks and existential opportunities that now confront us.

(5) “A little learning about disruption is a dangerous thing” – what most sets of recommendations get badly wrong when advocating disruption, exponentials, moonshots, and “accelerating returns”.

(6) “Surprise anticipation” – seven principles for managing the inevitable contingencies of any large transformation project.

(7) The design and use of canary signals, illustrated via the eleven landmines.

(8) “Hedgehogs, good, bad, and vital” – the importance, but also the danger, of having a single-minded vision for what the future can bring.

(9) What past sceptics of the potential for the Internet and distributed computing can teach us about the potential of the technologies of the fourth industrial revolution.

(10) “Technology overhang” – the special significance of inventions or breakthroughs that turn out to be surprisingly fruitful. And why they complicate foresight.

(11) The multiple interconnections between the ‘N’, ‘B’, ‘I’, and ‘C’ quadrants of the NBIC convergence that is driving the fourth industrial revolution.

(12) Fifteen ways in which AI could change substantially over the next 5-10 years – even before AI reaches the level of AGI.

(13) Why the “superlongevity”, “superintelligence”, and “superhappiness” aspirations of transhumanism need to be supplemented with “superdemocracy” and “supernarrative”.

(14) Eight areas of the “transhumanist shadow” – attitudes and practices of people associated with the transhumanist movement that (rightly) attract criticism.

(15) “Thirteen core transhumanist values” that underpin what I describe as “active transhumanism”, as a counter to the tendencies in the transhumanist shadow, and as the means to steer humanity toward the truly better future that lies within our grasp.

(16) Sixteen criticisms of transhumanism that are unfair or confused, but which are worth exploring, since they enable a richer understanding of the issues and opportunities for transhumanism.

(17) The applications of active transhumanism in both politics and geopolitics.

(18) Six ways in which today’s educational systems await profound upgrades – and a proposed “vital syllabus” with twenty-one areas covering the skills everyone will need in the 2020s and beyond.

(19) Examples of different kinds of potential forthcoming technological singularity, beyond simply the advent of AGI.

(20) “The Singularity Principles” – 21 principles which are intended to provide the basis for practical policy recommendations, to guide society away from risks of a radically negative encounter with emergent technology toward the likelihood of a radically positive encounter.

That might make my book sound like a collection of checklists. But you’ll find that there are plenty of discursive narratives in the book too. I hope you’ll enjoy reading them.

And, by the way, the book has nearly 1000 footnotes, in case you want to follow up some of the material I have referenced.


Update on 23rd June 2021: Vital Foresight has now been published as an ebook and as a paperback.

Here are the Amazon links:

The open preview mentioned in this post has now ended.

For more details about the book, including endorsements by early readers, see here.

26 May 2021

A preview of Vital Foresight

Filed under: books, Vital Foresight — David Wood @ 8:33 am

Update on 23rd June 2021: Vital Foresight has now been published as an ebook and as a paperback.

Here are the Amazon links:

The open preview mentioned in this post has now ended.

For more details about the book, including endorsements by early readers, see here.

The original blogpost follows:


Vital Foresight is almost ready.

That’s the title of the book I’ve been writing since August. It’s the most important book I’ve ever written.

The subtitle is The Case for Active Transhumanism.

Below, please find a copy of the Preface to Vital Foresight. The preface summarises the scope and intent of the book, and describes its target audience.

At this time, I am inviting people to take a look at previews of one or more of the chapters, and, if you feel inspired, to offer some feedback.

Here are examples of what I encourage you to make comments or suggestions about:

  • You particularly like some of the material
  • You dislike some of the material
  • You think contrary opinions should be considered
  • There appear to be mistakes in the spelling or grammar
  • The material is difficult to read or understand
  • The ideas could be expressed more elegantly
  • You have any other thoughts you wish to share.

Unless you indicate a preference for anonymity, reviewers will be thanked in the Acknowledgements section at the end of the book.

The chapters can be accessed as Google Doc files. Here’s the link to the starting point.

This article lists twenty key features of the book – topics it covers in unique ways.

And, for your convenience, here’s a copy of the Preface.

Preface

“Transhumanism”?

“Don’t put that word on the cover of your book!”

That’s the advice I received from a number of friends when they heard what I was writing about. They urged me to avoid “the ‘T’ word” – “transhumanism”. That word has bad vibes, they said. It’s toxic. T for toxic.

I understand where they’re coming from. Later in this book, I’ll dig into reasons why various people are uncomfortable with the whole concept. I’ll explain why I nevertheless see “transhumanism” as an apt term for a set of transformational ideas that will be key to our collective wellbeing in the 2020s and beyond. T for transformational. And, yes, T for timely.

As such, it’s a word that belongs on the cover of many more books, inspiring more conversations, more realisations, and more breakthroughs.

For now, in case you’re wondering, here’s a short definition. It’s by Oxford polymath Anders Sandberg, who expressed it like this in 1997:

Transhumanism is the philosophy that we can and should develop to higher levels, physically, mentally, and socially, using rational methods.

Sandberg’s 1997 webpage also features this summary from trailblazing Humanity+ Board member and Executive Director, Natasha Vita-More:

Transhumanism is a commitment to overcoming human limits in all their forms, including extending lifespan, augmenting intelligence, perpetually increasing knowledge, achieving complete control over our personalities and identities, and gaining the ability to leave the planet. Transhumanists seek to achieve these goals through reason, science, and technology.

In brief, transhumanism is a vision of the future: a vision of what’s possible, what’s desirable, and how it can be brought into reality.

In subsequent chapters, I’ll have lots more to say about the strengths and weaknesses of transhumanism. I’ll review the perceived threats and the remarkable opportunities that arise from it. But first, let me quickly introduce myself and how I came to be involved in the broader field of foresight (also known as futurism) within which transhumanism exists.

Smartphones and beyond

Over the twenty-five years that I held different roles within the mobile computing and smartphone industries, it was an increasingly central part of my job to think creatively and critically about future possibilities.

Back in the late 1980s and early 1990s, my work colleagues and I could see that computing technology was becoming ever more powerful. We debated long and hard, revisiting the same questions many times as forthcoming new hardware and software capabilities came to our attention. What kinds of devices should we design, to take advantage of these new capabilities? Which applications would users of these devices find most valuable? How might people feel as they interacted with different devices with small screens and compact keypads? Would the Internet ever become useful for “ordinary people”? Would our industry be dominated by powerful, self-interested corporations with monolithic visions, or would multiple streams of innovation flourish?

My initial involvement with these discussions was informal. Most of my time at work went into software engineering. But I enjoyed animated lunchtime discussions at Addison’s brasserie on Old Marylebone Road in central London, where technical arguments about, for example, optimising robust access to data structures, were intermingled with broader brainstorms about how we could collectively steer the future in a positive direction.

Over time, I set down more of my own ideas in writing, in emails and documents that circulated among teammates. I also had the good fortune to become involved in discussions with forward-thinking employees from giants of the mobile phone world – companies such as Nokia, Ericsson, Motorola, Panasonic, Sony, Samsung, Fujitsu, and LG, that were considering using our EPOC software (later renamed as “Symbian OS”) in their new handsets. I learned a great deal from these discussions.

By 2004 my job title was Executive VP for Research. It was my responsibility to pay attention to potential disruptions that could transform our business, either by destroying it, or by uplifting it. I came to appreciate that, in the words of renowned management consultant Peter Drucker, “the major questions regarding technology are not technical but human questions”. I also became increasingly persuaded that the disruptions of the smartphone market, significant though they were, were but a small preview of much larger disruptions to come.

As I’ll explain in the pages ahead, these larger disruptions could bring about a significant uplift in human character. Another possibility, however, is the destruction of much that we regard as precious.

Accordingly, the skills of foresight are more essential today than ever. We need to strengthen our collective capabilities in thinking creatively and critically about future possibilities – and in acting on the insights arising.

Indeed, accelerating technological change threatens to shatter the human condition in multiple ways. We – all of us – face profound questions over the management, not just of smartphones, but of artificial intelligence, nanoscale computers, bio-engineering, cognitive enhancements, ubiquitous robots, drone swarms, nuclear power, planet-scale geo-engineering, and much more.

What these technologies enable is, potentially, a world of extraordinary creativity, unprecedented freedom, and abundant wellbeing. That’s provided we can see clearly enough, in advance, the major disruptive opportunities we will need to seize and steer, so we can reach that destination. And provided we can step nimbly through a swath of treacherous landmines along the way.

That’s no small undertaking. It will take all our wisdom and strength. It’s going to require the very highest calibre of foresight.

That’s the reason why I’ve spent so much of my time in recent years organising and hosting hundreds of public meetings of the London Futurists community, both offline and online – events with the general headline of “serious analysis of radical scenarios for the next three to forty years”.

I acknowledge, however, that foresight is widely thought to have a poor track record. Forecasts of the future, whether foretelling doom and gloom, or envisioning technological cornucopia, seem to have been wrong at least as often as they have been right. Worse, instead of helping us to see future options more clearly, past predictions have, all too frequently, imposed mental blinkers, encouraged a stubborn fatalism, or distracted us from the truly vital risks and opportunities. It’s no wonder that the public reputation of futurism is scarcely better than that of shallow tabloid horoscopes.

To add to the challenge, our long-honed instincts about social norms and human boundaries prepare us poorly for the counterintuitive set of radical choices that emerging technology now dangles before us. We’re caught in a debilitating “future shock” of both fearful panic and awestruck wonder.

Happily, assistance is at hand. What this book will demonstrate is that vital foresight from the field I call active transhumanism can help us all:

  1. To resist unwarranted tech hype, whilst remaining aware that credible projections of today’s science and engineering could enable sweeping improvements in the human condition
  2. To distinguish future scenarios with only superficial attractions from those with lasting, sustainable benefits
  3. To move beyond the inaction of future shock, so we can coalesce around practical initiatives that advance deeply positive outcomes.

The audience for vital foresight

I’ve written this book for everyone who cares about the future:

  • Everyone trying to anticipate and influence the dramatic changes that may take place in their communities, organisations, and businesses over the next few years
  • Everyone concerned about risks of environmental disaster, the prevalence of irrationalism and conspiracy theories, growing inequality and social alienation, bioengineered pandemics, the decline of democracy, and the escalation of a Cold War 2.0
  • Everyone who has high hopes for technological solutions, but who is unsure whether key innovations can be adopted wisely enough and quickly enough
  • Everyone seeking a basic set of ethical principles suited for the increasing turbulence of the 2020s and beyond – principles that preserve the best from previous ethical frameworks, but which are open to significant updates in the wake of the god-like powers being bestowed on us by new technologies.

Although it reviews some pivotal examples from my decades of experience in business, this is not a book about the future of individual businesses or individual industries.

Nor is it a “get rich quick” book, or one that promotes “positive thinking” or better self-esteem. Look elsewhere, if that is what you seek.

Instead, it’s a book about the possibilities – indeed, the necessity – for radical transformation:

  • Transformation of human nature
  • Transformation of our social and political frameworks
  • Transformation of our relations with the environment and the larger cosmos
  • Transformation of our self-understanding – the narratives we use to guide all our activities.

Critically, this book contains practical suggestions for next steps to be taken, bearing in mind the power and pace of forces that are already remaking the world faster than was previously thought possible.

And it shows that foresight, framed well, can provide not only a stirring vision, but also the agility and resilience to cope with the many contingencies and dangers to be encountered on the journey forward.

Looking ahead

Here’s my summary of the most vital piece of foresight that I can offer.

Oncoming waves of technological change are poised to deliver either global destruction or a paradise-like sustainable superabundance, with the outcome depending on the timely elevation of transhumanist vision, transhumanist politics, and transhumanist education.

You’ll find that same 33-word paragraph roughly halfway through the book, in the chapter “Creativity”, in the midst of a dialogue about (can you guess…?) hedgehogs and foxes. I’ve copied the paragraph to the beginning of the book to help you see where my analysis will lead.

The summary is short, but the analysis will take some time. The scenarios that lie ahead for humanity – whether global destruction or sustainable superabundance – involve rich interactions of multiple streams of thought and activity. There’s a lot we’ll need to get our heads around, including disruptions in technology, health, culture, economics, politics, education, and philosophy. Cutting corners on understanding any one of these streams could yield a seriously misleading picture of our options for the future. Indeed, if we skimp on our analysis of future possibilities, we should not be surprised if humanity falls far short of our true potential.

However, I realise that each reader of this book will bring different concerns and different prior knowledge. By all means jump over various sections of the book to reach the parts that directly address the questions that are uppermost in your mind. Let the table of contents be your guide. If need be, you can turn back the pages later, to fill in any gaps in the narrative.

Better foresight springs, in part, from better hindsight. It’s particularly important to understand the differences between good foresight and bad foresight – to review past examples of each, learning from both the failures and, yes, the occasional successes of previous attempts to foresee and create the future. That’s one of our key tasks in the pages ahead.

In that quest, let’s move forward to an example from the rainbow nation of South Africa. Before we reach the hedgehogs and foxes, I invite you to spend some time with (can you guess…?) ostriches and flamingos.

==== Click here for the full preview, and to be able to make comments and suggestions ===

3 March 2021

The Viridian Manifesto: 4+2 Questions

Filed under: climate change, Events, green — David Wood @ 2:32 pm

“Viridian” is an unusual word. I had to look it up in Wikipedia to check its meaning.

A few clicks took me to the page “Viridian design movement”, which is worth quoting in its entirety:

The Viridian Design Movement was an aesthetic movement focused on concepts from bright green environmentalism. The name was chosen to refer to a shade of green that does not quite look natural, indicating that the movement was about innovative design and technology, in contrast with the “leaf green” of traditional environmentalism.

The movement tied together environmental design, techno-progressivism, and global citizenship. It was founded in 1998 by Bruce Sterling, a postcyberpunk science fiction author. Sterling always remained the central figure in the movement, with Alex Steffen perhaps the next best-known. Steffen, Jamais Cascio, and Jon Lebkowsky, along with some other frequent contributors to Sterling’s Viridian notes, formed the Worldchanging blog. Sterling wrote the introduction to Worldchanging’s book (Worldchanging: A Users Guide for the 21st Century), which (according to Ross Robertson) is considered the definitive volume on bright green thinking.

Sterling formally closed the Viridian movement in 2008, saying there was no need to continue its work now that bright green environmentalism had emerged.

One more click reaches the original Viridian Manifesto, bearing the date 3rd January 2000, authored by Bruce Sterling. It’s a fascinating document. Here are some excerpts:

The central issue as the new millennium dawns is technocultural. There are of course other, more traditional, better-developed issues for humankind. Cranky fundamentalism festers here and there; the left is out of ideas while the right is delusional; income disparities have become absurdly huge; these things are obvious to all. However, the human race has repeatedly proven that we can prosper cheerfully with ludicrous, corrupt and demeaning forms of religion, politics and commerce. By stark contrast, no civilization can survive the physical destruction of its resource base. It is very clear that the material infrastructure of the twentieth century is not sustainable. This is the issue at hand.

We have a worldwide environmental problem…

The stark fact that our atmosphere is visibly declining is of no apparent economic interest except to insurance firms, who will simply make up their lack by gouging ratepayers and exporting externalized costs onto the general population.

With business hopeless and government stymied, we are basically left with cultural activism. The tools at hand are art, design, engineering, and basic science: human artifice, cultural and technical innovation…

The task at hand is therefore basically an act of social engineering. Society must become Green, and it must be a variety of Green that society will eagerly consume. What is required is not a natural Green, or a spiritual Green, or a primitivist Green, or a blood-and-soil romantic Green.

These flavours of Green have been tried, and have proven to have insufficient appeal. We can regret this failure if we like. If the semi-forgotten Energy Crisis of the 1970s had provoked a wiser and more energetic response, we would not now be facing a weather crisis. But the past’s well-meaning attempts were insufficient, and are now part of the legacy of a dying century.

The world needs a new, unnatural, seductive, mediated, glamorous Green. A Viridian Green, if you will…

I said that was the original Viridian Manifesto. The reason I started looking into the history was that I’ve been asked to help with the organisation of a “Viridian Conference” taking place on the 16th and 17th of this month, March 2021. And that conference is based around the provocations in a new Viridian Manifesto, authored this time by the French Transhumanist Association (l’AFT, also known as “Technoprog”).

This new manifesto exists both in French and in English. Correspondingly, of the two days of the Viridian Conference, the presentations and discussions will be in French on the 16th of March, and in English on the 17th of March. Attendees are welcome to sign up for either day – or both. Follow the registration links online.

The new manifesto sets out its scope in its opening paragraphs:

Technoprogressive transhumanists consider that it is essential to change our behaviour collectively and individually in order to stop global warming, the loss of biodiversity, and the consumption of non-renewable resources…

A “viridian” option, which is to say, one that is ecological, technological, and non-destructive to humanity, presupposes radical transitions…

The change necessary, such that any change is for the better, requires broad technological progress and profound societal shifts…

It goes on to list recommendations grouped under the following headings:

  • Collective research
  • Renewable energy
  • Reuse and remediation
  • Collective reflections and decisions

The conference on the 16th and 17th of March will contain presentations and interactive workshops designed to “deepen the Viridian declaration”, as well as to “develop transhumanist thinking on environmental issues”.

I’m roughly halfway through drafting my own proposed slides for the event, which currently have the title “Superdemocracy as the key enabler of a viridian future”.

As part of my research, I started doing some online searches. These searches have opened my eyes to the fact that the “viridian” idea has a bigger history than I had expected.

To move forward, I propose that people interested in the conference consider four primary questions.

First, which parts of the (new) viridian manifesto do you most like?

(If you had to summarise its most important and unique features in a couple of sentences, what would they be?)

Second, which parts (if any) of the manifesto do you disagree with?

(Are any of its recommendations misplaced or even dangerous?)

Third, which important topics are missing from the manifesto, that should be included?

(My own answer to this question involves the words “economics” and “politics”.)

Fourth, what are priorities for next steps to make progress with the recommendations in the manifesto?

(The answers to this mainly depend on how you answered the previous questions.)

And to help stimulate thinking, here are two more questions that may well feature at the conference:

What should be learned from the experience of the original Viridian Manifesto?

(Was the sense of urgency in the 2000 manifesto misplaced? Was it wrong in its assessment of the capabilities of business and politics? And what are the key developments within “Bright Green Environmentalism” subsequent to 2008?)

What practical examples (that is, not just theories) back up arguments for or against parts of the manifesto?

(Your feedback is welcome here!)

1 March 2021

The imminence of artificial consciousness

Filed under: AGI, books, brain simulation, London Futurists — David Wood @ 10:26 am

I’ve changed my mind about consciousness.

I used to think that, of the two great problems about artificial minds – namely, achieving artificial general intelligence, and achieving artificial consciousness – progress toward the former would be faster than progress toward the latter.

After all, progress in understanding consciousness had seemed particularly slow, whereas enormous numbers of researchers in both academia and industry have been attaining breakthrough after breakthrough with new algorithms in artificial reasoning.

Over the decades, I’d read a number of books by Daniel Dennett and other philosophers who claimed to have shown that consciousness was basically already understood. There’s nothing spectacularly magical or esoteric about consciousness, Dennett maintained. What’s more, we must beware being misled by our own introspective understanding of our consciousness. That inner introspection is subject to distortions – perceptual illusions, akin to the visual illusions that often mislead us about what we think our eyes are seeing.

But I’d found myself at best semi-convinced by such accounts. I felt that, despite the clever analyses in such accounts, there was surely more to the story.

The most famous expression of the idea that consciousness still defied a proper understanding is the formulation by David Chalmers. This is from his watershed 1995 essay “Facing Up to the Problem of Consciousness”:

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect… There is something it is like to be a conscious organism. This subjective aspect is experience.

When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?

It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

However, as Wikipedia notes,

The existence of a “hard problem” is controversial. It has been accepted by philosophers of mind such as Joseph Levine, Colin McGinn, and Ned Block and cognitive neuroscientists such as Francisco Varela, Giulio Tononi, and Christof Koch. However, its existence is disputed by philosophers of mind such as Daniel Dennett, Massimo Pigliucci, Thomas Metzinger, Patricia Churchland, and Keith Frankish, and cognitive neuroscientists such as Stanislas Dehaene, Bernard Baars, Anil Seth and Antonio Damasio.

With so many smart people apparently unable to agree, what hope is there for a layperson to have any confidence in answering the question: is consciousness already explained in principle, or do we need some fundamentally new insights?

It’s tempting to say, therefore, that the question should be left to one side. Instead of squandering energy spinning circles of ideas with little prospect of real progress, it would be better to concentrate on numerous practical questions: vaccines for pandemics, climate change, taking the sting out of psychological malware, protecting democracy against latent totalitarianism, and so on.

That practical orientation is the one that I have tried to follow most of the time. But there are four reasons, nevertheless, to keep returning to the question of understanding consciousness. A better understanding of consciousness might:

  1. Help provide therapists and counsellors with new methods to address the growing crisis of mental ill-health
  2. Change our attitudes towards the suffering we inflict, as a society, upon farm animals, fish, and other creatures
  3. Provide confidence on whether copying of memories and other patterns of brain activity, into some kind of silicon storage, could result at some future date in the resurrection of our consciousness – or whether any such reanimation would, instead, be “only a copy” of us
  4. Guide the ways in which systems of artificial intelligence are being created.

On that last point, consider the question whether AI systems will somehow automatically become conscious, as they gain in computational ability. Most AI researchers have been sceptical on that score. Google Maps is not conscious, despite all the profoundly clever things that it can do. Neither is your smartphone. As for the Internet as a whole, opinions are a bit more mixed, but again, the general consensus is that all the electronic processing happening on the Internet is devoid of the kind of subjective inner experience described by David Chalmers.

Yes, lots of software has elements of being self-aware. Such software contains models of itself. But it’s generally thought (and I agree, for what it’s worth) that such internal modelling is far short of subjective inner experience.

One prospect this raises is the dark possibility that humans might be superseded by AIs that are considerably more intelligent than us, but that such AIs would have “no-one at home”, that is, no inner consciousness. In that case, a universe with AIs instead of humans might have much more information processing, but be devoid of conscious feelings. Mega oops.

The discussion at this point is sometimes led astray by the popular notion that any threat from superintelligent AIs to human existence is predicated on these AIs “waking up” or becoming conscious. In that popular narrative, any such waking up might give an AI an additional incentive to preserve itself. Such an AI might adopt destructive human “alpha male” combative attitudes. But as I say, that’s a faulty line of reasoning. AIs might well be motivated to preserve themselves without ever gaining any consciousness. (Look up the concept of “basic AI drives” by Steve Omohundro.) Indeed, a cruise missile that locks onto a target poses a threat to that target, not because the missile is somehow conscious, but because it has enough intelligence to navigate to its target and explode on arrival.

Indeed, AIs can pose threats to people’s employment, without these AIs gaining consciousness. They can simulate emotions without having real internal emotions. They can create artistic masterpieces, using techniques such as GANs (Generative Adversarial Networks), without having any real psychological appreciation of the beauty of these works of art.

For these reasons, I’ve generally urged people to set aside the question of machine consciousness, and to focus instead on the question of machine intelligence. (For example, I presented that argument in Chapter 9 of my book Sustainable Superabundance.) The latter is tangible and poses increasing threats (and opportunities), whereas the former is a discussion that never seems to get off the ground.

But, as I mentioned at the start, I’ve changed my mind. I now think it’s possible we could have machines with synthetic consciousness well before we have machines with general intelligence.

What’s changed my mind is the book by Professor Mark Solms, The Hidden Spring: A Journey to the Source of Consciousness.

Solms is director of neuropsychology in the Neuroscience Institute of the University of Cape Town, honorary lecturer in neurosurgery at the Royal London Hospital School of Medicine, and an honorary fellow of the American College of Psychiatrists. He has spent his entire career investigating the mysteries of consciousness. He achieved renown within his profession for identifying the brain mechanisms of dreaming and for bringing psychoanalytic insights into modern neuroscience. And now his book The Hidden Spring is bringing him renown far beyond his profession. Here’s a selection of the praise it has received:

  • A remarkably bold fusion of ideas from psychoanalysis, psychology, and the frontiers of theoretical neuroscience, that takes aim at the biggest question there is. Solms will challenge your most basic beliefs.
    Matthew Cobb, author of The Idea of the Brain: The Past and Future of Neuroscience
  • At last the emperor has found some clothes! For decades, consciousness has been perceived as an epiphenomenon, little more than an illusion that can’t really make things happen. Solms takes a thrilling new approach to the problem, grounded in modern neurobiology but finding meaning in older ideas going back to Freud. This is an exciting book.
    Nick Lane, author of The Vital Question
  • To say this work is encyclopaedic is to diminish its poetic, psychological and theoretical achievement. This is required reading.
    Susie Orbach, author of In Therapy
  • Intriguing…There is plenty to provoke and fascinate along the way.
    Anil Seth, Times Higher Education
  • Solms’s efforts… have been truly pioneering. This unification is clearly the direction for the future.
    Eric Kandel, Nobel laureate for Physiology and Medicine
  • This treatment of consciousness and artificial sentience should be taken very seriously.
    Karl Friston, scientific director, Wellcome Trust Centre for Neuroimaging
  • Solms’s vital work has never ignored the lived, felt experience of human beings. His ideas look a lot like the future to me.
    Siri Hustvedt, author of The Blazing World
  • Nobody bewitched by these mysteries [of consciousness] can afford to ignore the solution proposed by Mark Solms… Fascinating, wide-ranging and heartfelt.
    Oliver Burkeman, Guardian
  • This is truly a remarkable book. It changes everything.
    Brian Eno

At times, I had to concentrate hard while listening to this book, rewinding the playback multiple times. That’s because the ideas kept sparking new lines of thought in my mind, which ran off in different directions as the narration continued. And although Solms explains his ideas in an engaging manner, I wanted to think through the deeper connections with the various fields that form part of the discussion – including psychoanalysis (Freud features heavily), thermodynamics (Helmholtz, Gibbs, and Friston), evolution, animal instincts, dreams, Bayesian statistics, perceptual illusions, and the philosophy of science.

Alongside the theoretical sections, the book contains plenty of case studies – from Solms’ own patients, and from other clinicians over the decades (actually centuries) – that illuminate the points being made. These studies involve people – or animals – with damage to parts of their brains. The unusual ways in which these subjects behave – and the unusual ways in which they express themselves – provide insight on how consciousness operates. Particularly remarkable are the children born with hydranencephaly – that is, without a cerebral cortex – but who nevertheless appear to experience feelings.

Having spent two weeks making my way through the first three quarters of the book, I took the time yesterday (Sunday) to listen to the final quarter, where several climaxes follow one after another – addressing at length the “Hard Problem” ideas of David Chalmers, and the possibility of artificial consciousness.

It’s challenging to summarise such a rich set of ideas in just a few paragraphs, but here are some components:

  • To understand consciousness, the subcortical brain stem (an ancient part of our anatomy) is at least as important as the cognitive architecture of the cortex
  • To understand consciousness, we need to pay attention to feelings as much as to memories and thought processing
  • Likewise, the chemistry of long-range neuromodulators is at least as important as the chemistry of short-range neurotransmitters
  • Consciousness arises from particular kinds of homeostatic systems which are separated from their environment by a partially permeable boundary: a structure known as a “Markov blanket”
  • These systems need to take actions to preserve their own existence, including creating an internal model of their external environment, monitoring differences between incoming sensory signals and what their model predicted these signals would be, and making adjustments so as to prevent these differences from escalating
  • Whereas a great deal of internal processing and decision-making can happen automatically, without conscious thought, some challenges transcend previous programming, and demand greater attention

In short, consciousness arises from particular forms of information processing. (Solms provides good reasons to reject the idea that there is a basic consciousness latent in all information, or, indeed, in all matter.) Whilst more work remains to be done to pin down the exact circumstances in which consciousness arises, this project is looking much more promising now than it did just a few years ago.

This is no idle metaphysics. The ideas can in principle be tested by creating artificial systems that involve particular kinds of Markov blankets, uncertain environments that pose existential threats to the system, diverse categorical needs (akin to the multiple different needs of biologically conscious organisms), and layered feedback loops. Solms sets out a three-stage process whereby such systems could be built and evolved, in a relatively short number of years.
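
To make the flavour of that proposal more tangible, here’s a deliberately tiny sketch of such a loop. To be clear, this is my own toy illustration of the ingredients just listed (an internal model, prediction errors, and corrective action defending a vital variable); it is not Solms’ proposed architecture, nor Friston’s free-energy mathematics:

```python
import random

# A toy homeostat, loosely inspired by the ingredients listed above: an
# internal model, prediction errors, and corrective action that defends a
# vital variable. An illustrative sketch only, not Solms' actual proposal.

SETPOINT = 37.0   # the "vital" quantity the system needs to keep stable
random.seed(1)

class ToyHomeostat:
    def __init__(self):
        self.prediction = SETPOINT   # internal model: what the system expects to sense
        self.action = 0.0            # effort currently applied to the environment

    def step(self, sensed_value):
        # Prediction error: mismatch between expectation and incoming sensation.
        error = sensed_value - self.prediction
        # Nudge the internal model toward the evidence...
        self.prediction += 0.1 * error
        # ...and act on the world to push the sensed value back toward the setpoint.
        self.action = -0.5 * (sensed_value - SETPOINT)
        return error

world_temperature = SETPOINT
agent = ToyHomeostat()

for t in range(20):
    # The environment drifts unpredictably, and is also nudged by the agent's action.
    world_temperature += random.uniform(-0.5, 0.5) + agent.action
    error = agent.step(world_temperature)
    print(f"t={t:2d}  temperature={world_temperature:5.2f}  prediction error={error:+.3f}")
```

Even this trivial loop separates modelling the world from acting on it. In Solms’ account, feeling only enters the picture when such a system has to juggle multiple competing needs in situations that its existing programming cannot handle.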

But wait. All kinds of questions arise. Perhaps the most pressing one is this: If such systems can be built, should we build them?

That “should we” question gets a lot of attention in the closing sections of the book. We might end up with AIs that are conscious slaves – a moral concern that doesn’t arise for our existing AIs. We might create AIs that feel pain beyond anything any previous conscious being has ever experienced. Equally, we might create AIs that behave very differently from those without consciousness – AIs that are more unpredictable, more adaptable, more resourceful, more creative – and more dangerous.

Solms is doubtful about any global moratorium on such experiments. Now that the ideas are out of the bag, so to speak, there will be many people – in both academia and industry – who are motivated to do additional research in this field.

What next? That’s a question that I’ll be exploring this Saturday, 6th March, when Mark Solms will be speaking to London Futurists. The title of his presentation will be “Towards an artificial consciousness”.

For more details of what I expect will be a fascinating conversation – and to register to take part in the live question and answer portion of the event – follow the links here.

29 December 2020

The best book on the science of aging in the last ten years

Filed under: aging, books, rejuveneering, science, The Abolition of Aging — David Wood @ 10:44 am

Science points to many possibilities for aging to be reversed. Within a few decades, medical therapies based on these possibilities could become widespread and affordable, allowing all of us, if we wish, to remain in a youthful state for much longer than is currently the norm – perhaps even indefinitely. Instead of healthcare systems continuing to consume huge financial resources in order to treat people with the extended chronic diseases that become increasingly common as patients’ bodies age, much smaller expenditure would keep all of us much healthier for the vast majority of the time.

Nevertheless, far too many people fail to take these possibilities seriously. They believe that aging is basically inevitable, and that people who say otherwise are deluded and/or irresponsible.

Public opinion matters. Investments made by governments and by businesses alike are heavily influenced by perceived public reaction. Without active public support for smart investments in the science and medicine that could systematically reverse aging, that outcome will be pushed backwards in time – perhaps even indefinitely.

What can change this public opinion? An important part of the answer is to take the time to explain the science of aging in an accessible, engaging way – including the many recent experimental breakthroughs that, collectively, show such promise.

That’s exactly what Dr Andrew Steele accomplishes in his excellent book Ageless: The new science of getting older without getting old.

The audio version of this book became available on Christmas Eve, narrated by Andrew himself. It has been a delight to listen to it over the intervening days.

Over the last few years, I’ve learned a great deal from a number of books that address the science of aging, and I’ve been happy to recommend these books to wider audiences. These include:

But I hope that these esteemed authors won’t mind if I nominate Andrew Steele’s book as a better starting point for the whole subject. Here’s what’s special about it:

  • It provides a systematic treatment of the science, showing clear relationships between the many different angles to what is undeniably a complex subject
  • The way it explains the science seems just right for the general reader with a good basic education – neither over-simplified nor over-dense
  • There’s good material all the way through the book, to keep readers turning the pages
  • The author is clearly passionate about his research, seeing it as important, but he avoids any in-your-face evangelism
  • The book avoids excessive claims or hyperbole: the claims it makes are, in my view, always well based
  • Where research results have been disappointing, there’s no attempt to hide these or gloss over them
  • The book includes many interesting anecdotes, but the point of these stories is always the science, rather than the personalities or psychologies of the researchers involved, or clashing business interests, or whatever
  • The information it contains is right up to date, as of late 2020.

Compared with other treatments of the subject, Ageless provides a slightly different decomposition of what are known as the hallmarks of aging, offering ten in total:

  1. DNA damage and mutations
  2. Trimmed telomeres
  3. Protein problems: autophagy, amyloids and adducts
  4. Epigenetic alterations
  5. Accumulation of senescent cells
  6. Malfunctioning mitochondria
  7. Signal failure
  8. Changes in the microbiome
  9. Cellular exhaustion
  10. Malfunction of the immune system

As the book points out, there are three criteria for something to be a useful “hallmark of aging”:

  1. It needs to increase with age
  2. Accelerating a hallmark’s progress should accelerate aging
  3. Reducing the hallmark should decrease aging

The core of the book is a fascinating survey of interventions that could reduce each of these hallmarks and thereby decrease aging – that is, decrease the probability of dying in the next year. These interventions are grouped into four categories:

  1. Remove
  2. Replace
  3. Repair
  4. Reprogram

Each category of intervention is in turn split into several subgroups. Yes, the treatment of aging is likely to be complicated. However, there are plenty of examples in which single interventions turned out to have multiple positive effects on different hallmarks of aging.

There are a couple of points where some readers might quibble with the content, for example regarding dietary supplements, or whether the concept of group selection can ever be useful within evolutionary theory.

However, my own presentations on the subject of the abolition of aging will almost certainly evolve in the light of the framework and examples in Ageless. I’m much the wiser from reading it.

Here’s my advice to anyone who, like me, believes the subject of reversing aging is important, and who wishes to accelerate progress in this field:

  • Read Ageless with some care, all the way through
  • Digest its contents and explore the implications, for example via discussion in online groups
  • Encourage others to read it too.

Ideally, a sizeable proportion of the book’s readers will alter their own research or other activity, in order to assist the projects covered in Ageless.

Finally, a brief comparison between Ageless and the remarkable grandfather book of this whole field: Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime, authored by Aubrey de Grey and Michael Rae. Ending Aging was published in 2007 and remains highly relevant, even though numerous experimental findings and new ideas have emerged since its publication. There’s a deep overlap in the basic approach advocated in the two books. Both books are written by polymaths who are evidently very bright – people who, incidentally, did their first research in fields outside biology, and who brought valuable external perspectives to the field.

So I see Ageless as a worthy successor to Ending Aging. Indeed, it’s probably a better starting point for people less familiar with this field, in view of its coverage of important developments since 2007, and some readers may find Andrew’s writing style more accessible.

19 September 2020

The touchiest subject

Filed under: cryonics, Events — David Wood @ 8:59 am

Imagine. You’re on a car journey with your family, in the midst of the countryside, far from the nearest town or village.

Out of the blue, your car is struck by a huge lorry. By a quirk of fate, no one is hurt in the accident, except for your beloved young daughter, who should be celebrating her third birthday in a month’s time. The collision has left her badly injured.

Your travelling companion knows a thing or two about medical injuries. It looks very serious, he says. Your daughter will probably die. But if she can be transported, quickly, to a hospital, there’s a chance – admittedly a slim chance – she might be saved. Let’s call for an ambulance, he urges you.

But you oppose the idea. We should not play God, you say. “My daughter has already lived. If we had this accident just outside the hospital, we could have used its facilities. But we’re in the wrong place. Let’s peacefully cradle my daughter in our arms, and think happy thoughts. Perhaps we’ll meet her again, in a life after death.”

Your travelling companion is astounded. You have a chance to continue living with your daughter, for many years, if only you take the necessary steps to summon an ambulance. Act now!

However, your mind is set. It would be selfish to divert scarce healthcare resources, all the way from that distant city, just on the off-chance that operations on your daughter might repair her damaged body.

Indeed, you suspect the whole ambulance thing is a bit of a scam. These ambulance companies are just in it to make money, you tell yourself.

Ridiculous? Hold that thought.

Something pretty similar happened in Thailand a few years ago. The young girl in question wasn’t in a traffic accident, but had an aggressive brain cancer. The problem wasn’t that the incident happened in the wrong place (that is, far from a hospital). The problem was that the incident happened at the wrong time – that is, at a time before any cure for that cancer was available.

And instead of the option of summoning an actual ambulance – with the slim chance of the daughter surviving the long journey to the hospital and then recovering from arduous surgery – in this case, the option in question is cryonic preservation.

The story is told in a Netflix documentary that was released a few days ago, “Hope Frozen: A quest to live twice”. Here’s the trailer:

To my mind, the film is sensitively made. It provides an engaging, accessible introduction to cryonics.

But cryonics is a heck of a touchy subject.

At the time of writing, there’s only one review of Hope Frozen on IMDb.

The film is “very well” made, the reviewer accepts. However, the main proposition is, apparently, “just silly”:

Just silly in many different ways. Their daughter died. End of story. The parents and doctors are trying to play God. She wasn’t even 3 years old…

Over the years that I’ve been listening to people’s opinions about cryonics, I’ve often heard similar dismissals. (And far worse.)

It’s hard to think about death. The prospect of oblivion – for ourselves, or for our loved ones – is terrifying. We humans do all sorts of things to buffer ourselves from having to contemplate death, including telling ourselves various stories that provide us with some kind of comfort.

Psychologists have a lot to say about this topic. Look up “Terror Management Theory” on the Internet. (E.g. here on Wikipedia.)

I dedicated an entire chapter, “Adverse Psychology”, of my 2016 book The Abolition of Aging, to that topic, so I won’t say much more about it here. What I will say, now, is that it can be terrifying for each of us to have to think this sequence of thoughts:

  • There are some actions I could take, that would give myself, and my loved ones, the chance to live again, following what would otherwise be a fatal illness
  • But I’m not taking these actions. That must make me… foolish, or irresponsible, or lacking love for my family members
  • By my inaction, I’m sending people I love into a state of oblivion that isn’t actually necessary
  • But I don’t like to think of myself as foolish, irresponsible, or lacking love.

Conclusion: cryonics must be bunk. Now let’s scrabble around to find evidence in support of that convenient conclusion.

To be clear, the chances for cryonics being successful are, indeed, tough to estimate. No-one can be sure:

  • What new technological options may become available in the future
  • How critical the damage is that occurs to the body (especially to the brain) during the cryopreservation process
  • What may happen during the long decades (or centuries) in between cryopreservation and a potential future reanimation.

However, my view is that the chances of success are nonzero. Perhaps one in ten. Perhaps lower. Perhaps higher.

Happily, serious people are carrying out research to understand more fully a wide range of options regarding cryonics.

If you’re interested in hearing a variety of viewpoints about these questions, from people who have thought long and hard about the topic, you should consider attending Biostasis 2020. It’s entirely online.

A number of free tickets are still available. Click here for more information and to register.

The conference is being organised by the European Biostasis Foundation (EBF), a Basel-based non-profit foundation.

Speakers at the conference will include a host of fascinating people.

They include Aaron Drake, who was featured in the film Hope Frozen – and whose comments, in that film, I found refreshing and highly informative.

Oh, I’ll be speaking too. My subject will be “Anticipating changing attitudes towards death and cryonics”.

I’m expecting to learn a great deal during these two days!

31 July 2020

The future of AI: 12 possible breakthroughs, and beyond

Filed under: AGI, books, disruption — David Wood @ 1:30 pm

The AI of 5-10 years’ time could be very different from today’s AI. The most successful AI systems of that time will not simply be extensions of today’s deep neural networks. Instead, they are likely to include significant conceptual breakthroughs or other game-changing innovations.

That was the argument I made in a presentation on Thursday to the Global Data Sciences and Artificial Intelligence meetup. The chair of that meetup, Pramod Kunji, kindly recorded the presentation.

You can see my opening remarks in this video:

A copy of my slides can be accessed on Slideshare.

The ideas in this presentation raise many important questions, for which there are, as yet, only incomplete answers.

Indeed, the future of AI is a massive topic, touching nearly every area of human life. The greater the possibility that AI will experience cascading improvements in capability, the greater the urgency of exploring these scenarios in advance. In other words, the greater the need to set aside hype and predetermined ideas, in order to assess matters objectively and with an independent mind.

For that reason, I’ve joined with Rohit Talwar of Fast Future and Ben Goertzel of SingularityNET in a project to commission and edit chapters in a forthcoming book, “The Future of AI: Pathways to Artificial General Intelligence”.


We’re asking AI researchers, practitioners, analysts, commentators, policy makers, investors, futurists, economists, and writers from around the world, to submit chapters of up to 1,000 words, by the deadline of 15th September, that address one or more of the following themes:

  • Capability, Applications, and Impacts
    • How might the capabilities of AI systems evolve in the years ahead?
    • What can we anticipate about the potential evolution from today’s AI to AGI and beyond, in which software systems will match or exceed human cognitive abilities in every domain of thought?
    • What possible scenarios for the emergence of significantly more powerful AI deserve the most attention?
    • What new economic concepts, business models, and intellectual property ownership frameworks might be enabled and required as a result of advances that help us transition from today’s AI to AGI?
  • Pathways to AGI
    • What incremental steps might help drive practical commercial and humanitarian AI applications in the direction of AGI?
    • What practical ideas and experiences can be derived from real-world applications of technologies like transfer learning, unsupervised and reinforcement learning, and lifelong learning?
    • What are the opportunities and potential for “narrow AGI” applications that bring increasing levels of AGI to bear within specific vertical markets and application areas?
  • Societal Readiness
    • How can we raise society-wide awareness and understanding of the underlying technologies and their capabilities?
    • How can governments, businesses, educators, civil society organizations, and individuals prepare for the range of possible impacts and implications?
    • What other actions might be taken by individuals, by local groups, by individual countries, by non-governmental organizations (NGOs), by businesses, and by international institutions, to help ensure positive outcomes with advanced AI? How might we reach agreement on what constitutes a positive societal outcome in the context of AI and AGI?
  • Governance
    • How might societal ethical frameworks need to evolve to cope with the new challenges and opportunities that AGI is likely to bring?
    • What preparations can be made, at the present time, for the introduction and updating of legal and political systems to govern the development and deployment of AGI?

For more details of this new book, the process by which chapters will be selected, and processing fees that may apply, click here.

I’m very much looking forward to the insights that will arise – and to the critical new questions that will no doubt emerge along the way.

 

19 June 2020

Highlighting probabilities

Filed under: communications, education, predictability, risks — David Wood @ 7:54 pm

Probabilities matter. If society fails to appreciate probabilities, and insists on seeing everything in certainties, a bleak future awaits us all (probably).

Consider five predictions, and common responses to these predictions.

Prediction A: If the UK leaves the EU without a deal, the UK will experience a significant economic downturn.

Response A: We’ve heard that prediction before. Before the Brexit vote, it was predicted that a major economic downturn would happen straightaway if the result was “Leave”. That downturn failed to take place. So we can discard the more recent prediction. It’s just “Project Fear” again.

Prediction B (made in Feb 2020): We should anticipate a surge in infections and deaths from Covid-19, and take urgent action to prevent transmissions.

Response B: We’ve heard that prediction before. Bird flu was going to wreak havoc. SARS and MERS, likewise, were predicted to kill hundreds of thousands. These earlier predictions were wrong. So we can discard the more recent prediction. It’s just “Project Pandemic” again.

Prediction C: We should prepare for the advent of artificial superintelligence, the most disruptive development in all of human history.

Response C: We’ve heard that prediction before. AIs more intelligent than humans have often been predicted. No such AI has been developed. These earlier predictions were wrong. So there’s no need to prepare for ASI. It’s just “Project Hollywood Fantasy” again.

Prediction D: If we don’t take urgent action, the world faces a disaster from global warming.

Response D: We’ve heard that prediction before. Climate alarmists told us some time ago “you only have twelve years to save the planet”. Twelve years passed, and the planet is still here. So we can ignore what climate alarmists are telling us this time. It’s just “Project Raise Funding for Climate Science” again.

Prediction E (made in mid December 1903): One day, humans will fly through the skies in powered machines that are heavier than air.

Response E: We’ve heard that prediction before. All sorts of dreamers and incompetents have naively imagined that the force of gravity could be overcome. They have all come to ruin. All these projects are a huge waste of money. Experts have proved that heavier-than-air flying machines are impossible. We should resist this absurdity. It’s just “Langley’s Folly” all over again.

The vital importance of framing

Now, you might think that I write these words to challenge the scepticism of the people who made the various responses listed. It’s true that these responses do need to be challenged. In each case, the response involves an unwarranted projection from the past into the future.

But the main point on my mind is a bit different. What I want to highlight is the need to improve how we frame and present predictions.

In all the above cases – A, B, C, D, E – the response refers to previous predictions that sounded similar to the more recent ones.

Each of these earlier predictions should have been communicated as follows:

  • There’s a possible outcome we need to consider. For example, the possibility of an adverse economic downturn immediately after a “Leave” vote in the Brexit referendum.
  • That outcome is possible, though not inevitable. We can estimate a rough probability of it happening.
  • The probability of the outcome will change if various actions are taken. For example, swift action by the Bank of England, after a Leave vote, could postpone or alleviate an economic downturn. Eventually leaving the EU, especially without a deal in place, is likely to accelerate and intensify the downturn.

In other words, our discussions of the future need to embrace uncertainty, and need to emphasise how human action can alter that uncertainty.

What’s more, the mention of uncertainty must be forceful, rather than something that gets lost in small print.

So the message itself must be nuanced, but the fact that the message is nuanced must be underscored.

All this makes things more complicated. It disallows any raw simplicity in the messaging. Understandably, many activists and enthusiasts prefer simple messages.

However, if a message has raw simplicity, and is subsequently seen to be wrong, observers will be likely to draw the wrong conclusion.

That kind of wrong conclusion lies behind each of the flawed responses A to E above.

Sadly, lots of people who are evidently highly intelligent fail to take proper account of probabilities in assessing predictions of the future. At the back of their minds, an argument like the following holds sway:

  • An outcome predicted by an apparent expert failed to materialise.
  • Therefore we should discard anything else that apparent expert says.

Quite likely the expert in question was aware of the uncertainties affecting their prediction. But they failed to emphasise these uncertainties strongly enough.

Transcending cognitive biases

As we know, we humans are prey to large numbers of cognitive biases. Even people with a good education, and who are masters of particular academic disciplines, regularly fall foul of these biases. They seem to be baked deep into our brains, and may even have conveyed some survival benefit, on average, in times long past. In the more complicated world we’re now living in, we need to help each other to recognise and resist the ill effects of these biases. Including the ill effects of the “probability neglect” bias which I’ve been writing about above.

Indeed, one of the most important lessons from the current chaotic situation arising from the Covid-19 pandemic is that society in general needs to raise its understanding of a number of principles related to mathematics:

  • The nature of exponential curves – and how linear thinking often comes to grief, in failing to appreciate exponentials
  • The nature of probabilities and uncertainties – and how binary thinking often comes to grief, in failing to appreciate probabilities (both points are illustrated in the short sketch below).
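
To make both points concrete, here is a deliberately simple sketch in Python – the numbers are invented purely for illustration, not drawn from any real forecast or dataset:

```python
import random

# 1. Exponential vs linear: the same 10%-per-day growth, projected two ways.
#    (Invented numbers, purely for illustration.)
cases, rate = 100, 0.10
linear, exponential = cases, cases
for _ in range(30):
    linear += cases * rate          # naive straight-line projection
    exponential *= 1 + rate         # compound (exponential) growth
print(f"after 30 days: linear projection {linear:.0f}, "
      f"exponential reality {exponential:.0f}")

# 2. Probabilities vs certainties: a "1 in 5" warning that fails to come true
#    four times out of five has not thereby been shown to be worthless.
random.seed(0)
warnings, occurred = 10_000, 0
for _ in range(warnings):
    if random.random() < 0.2:       # the forecast probability
        occurred += 1
print(f"the event followed roughly {occurred / warnings:.0%} of the warnings – "
      "exactly what a well-calibrated 20% forecast implies")
```

The first part shows linear thinking projecting about 400 cases where compound growth delivers well over 1,700; the second shows why a probabilistic warning can be valuable even when, on any single occasion, the feared outcome does not materialise.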

This raising of understanding won’t be easy. But it’s a task we should all embrace.

Image sources: Thanasis Papazacharias and Michel Müller from Pixabay.

Footnote 1: The topic of “illiteracy about exponentials and probabilities” is one I’ll be mentioning in this Fast Future webinar taking place on Sunday evening.

Footnote 2: Some people who offer a rationally flawed response like the ones above are, sadly, well aware of the flawed nature of their response, but they offer it anyway. They do so since they believe the response may well influence public discussion, despite being flawed. They put a higher value on promoting their own cause, rather than on keeping the content of the debate as rational as possible. They don’t mind adding to the irrationality of public discussion. That’s a topic for a separate discussion, but it’s my view that we need to find both “carrots” and “sticks” to discourage people from deliberately promoting views they know to be irrational. And, yes, you guessed it, I’ll be touching on that topic too on Sunday evening.
