dw2

25 August 2025

The biggest blockages to successful governance of advanced AI

“Humanity has never faced a greater problem than itself.”

That phrase was what my brain hallucinated while I was browsing the opening section of the Introduction to the groundbreaking new book Global Governance of the Transition to Artificial General Intelligence, written by my friend and colleague Jerome C. Glenn, Executive Director of The Millennium Project.

I thought to myself: That’s a bold but accurate way of summing up the enormous challenge faced by humanity over the next few years.

In previous centuries, our biggest problems have often come from the environment around us: deadly pathogens, devastating earthquakes, torrential storms, plagues of locusts – as well as marauding hordes of invaders from outside our local neighbourhood.

But in the second half of the 2020s, our problems are being compounded as never before by our own human inadequacies:

  • We’re too quick to rush to judgement, seeing only parts of the bigger picture
  • We’re too loyal to the tribes to which we perceive ourselves as belonging
  • We’re overconfident in our ability to know what’s happening
  • We’re too comfortable with manufacturing and spreading untruths and distortions
  • We’re too bound into incentive systems that prioritise short-term rewards
  • We’re too fatalistic, as regards the possible scenarios ahead.

You may ask, What’s new?

What’s new is the combination of these deep flaws in human nature with technology that is remarkably powerful yet opaque and intractable. AI that is increasingly beyond our understanding and beyond our control is being coupled in potentially devastating ways with our over-hasty, over-tribal, over-confident thoughts and actions. New AI systems are being rushed into deployment and used in attempts:

  • To manufacture and spread truly insidious narratives
  • To incentivize people around the world to act against their own best interests, and
  • To resign people to inaction when in fact it is still within their power to alter and uplift the trajectory of human destiny.

In case this sounds like a counsel of despair, I should clarify at once my appreciation of aspects of human nature that are truly wonderful, as counters to the negative characteristics that I have already mentioned:

  • Our thoughtfulness, that can counter rushes to judgement
  • Our collaborative spirit, that can transcend partisanship
  • Our wisdom, that can recognise our areas of lack of knowledge or lack of certainty
  • Our admiration for truth, integrity, and accountability, that can counter ends-justify-the-means expediency
  • Our foresight, that can counter short-termism and free us from locked-in inertia
  • Our creativity, to imagine and then create better futures.

Just as AI can magnify the regrettable aspects of human nature, so also it can, if used well, magnify those commendable aspects.

So, which is it to be?

The fundamental importance of governance

The question I’ve just asked isn’t one that can be answered by individuals alone. Any one group – whether an organisation, a corporation, or a decentralised partnership – can have its own beneficial actions overtaken and capsized by the catastrophic actions of other groups that failed to heed the better angels of their nature, and which instead allowed themselves to be governed by wishful naivety, careless bravado, pangs of jealousy, hostile alienation, assertive egotism, or the madness of the crowd.

That’s why the message of this new book by Jerome Glenn is so timely: the processes of developing and deploying increasingly capable AIs need to be:

  • Governed, rather than happening chaotically
  • Globally coordinated, rather than left without cohesion between the governance processes applicable in different localities
  • Progressed urgently, rather than being crowded out by all the shorter-term issues that, understandably, also demand governance attention.

Before giving more of my own thoughts about this book, let me share some of the commendations it has received:

  • “This book is an eye-opening study of the transition to a completely new chapter of history.” – Csaba Korösi, 77th President of the UN General Assembly
  • “A comprehensive overview, drawing both on leading academic and industry thinkers worldwide, and valuable perspectives from within the OECD, United Nations.” – Jaan Tallinn, founding engineer, Skype and Kazaa; co-founder, Cambridge Centre for the Study of Existential Risk and the Future of Life Institute
  • “Written in lucid and accessible language, this book is a must read for people who care about the governance and policy of AGI.” – Lan Xue, Chair of the Chinese National Expert Committee on AI Governance.

The book also carries an absorbing foreword by Ben Goertzel. In this foreword, Ben introduces himself as follows:

Since the 1980s, I have been immersed in the field of AI, working to unravel the complexities of intelligence and to build systems capable of emulating it. My journey has included introducing and popularizing the concept of AGI, developing innovative AGI software frameworks such as OpenCog, and leading efforts to decentralize AI development through initiatives like SingularityNET and the ASI Alliance. This work has been driven by an understanding that AGI is not just an engineering challenge but a profound societal pivot point – a moment requiring foresight, ethical grounding, and global collaboration.

He clarifies why the subject of the book is so important:

The potential benefits of AGI are vast: solutions to climate change, the eradication of diseases, the enrichment of human creativity, and the possibility of postscarcity economies. However, the risks are equally significant. AGI, wielded irresponsibly or emerging in a poorly aligned manner, could exacerbate inequalities, entrench authoritarianism, or unleash existential dangers. At this critical juncture, the questions of how AGI will be developed, governed, and integrated into society must be addressed with both urgency and care.

The need for a globally participatory approach to AGI governance cannot be overstated. AGI, by its nature, will be a force that transcends national borders, cultural paradigms, and economic systems. To ensure its benefits are distributed equitably and its risks mitigated effectively, the voices of diverse communities and stakeholders must be included in shaping its development. This is not merely a matter of fairness but a pragmatic necessity. A multiplicity of perspectives enriches our understanding of AGI’s implications and fosters the global trust needed to govern it responsibly.

He then offers wide praise for the contents of the book:

This is where the work of Jerome Glenn and The Millennium Project may well prove invaluable. For decades, The Millennium Project has been at the forefront of fostering participatory futures thinking, weaving together insights from experts across disciplines and geographies to address humanity’s most pressing challenges. In Governing the Transition to Artificial General Intelligence, this expertise is applied to one of the most consequential questions of our time. Through rigorous analysis, thoughtful exploration of governance models, and a commitment to inclusivity, this book provides a roadmap for navigating the complexities of AGI’s emergence.

What makes this work particularly compelling is its grounding in both pragmatism and idealism. It does not shy away from the technical and geopolitical hurdles of AGI governance, nor does it ignore the ethical imperatives of ensuring AGI serves the collective good. It recognizes that governing AGI is not a task for any single entity but a shared responsibility requiring cooperation among nations, corporations, civil society, and, indeed, future AGI systems themselves.

As we venture into this new era, this book reminds us that the transition to AGI is not solely about technology; it is about humanity, and about life, mind, and complexity in general. It is about how we choose to define intelligence, collaboration, and progress. It is about the frameworks we build now to ensure that the tools we create amplify the best of what it means to be human, and what it means to both retain and grow beyond what we are.

My own involvement

To fill in some background detail: I was pleased to be part of the team that developed the set of 22 critical questions which sat at the heart of the interviews and research which are summarised in Part I of the book – and I conducted a number of the resulting interviews. In parallel, I explored related ideas via two different online Transpolitica surveys:

And I’ve been writing roughly one major article (or giving a public presentation) on similar topics every month since then. Recent examples include:

Over this time period, my views have evolved. I see the biggest priority, nowadays, not as figuring out how to govern AGI as it comes into existence, but rather as figuring out how to pause the development and deployment of any new types of AI that could spark the existence of self-improving AGI.

That global pause needs to last long enough that the global community can justifiably be highly confident that any AGI that will subsequently be built will be what I have called a BGI (a Beneficial General Intelligence) rather than a CGI (a Catastrophic General Intelligence).

Govern AGI and/or Pause the development of AGI?

I recently posted a diagram on various social media platforms to illustrate some of the thinking behind that stance of mine:

Alongside that diagram, I offered the following commentary:

The next time someone asks me what’s my p(Doom), compared with my p(SSfA) (the probability of Sustainable Superabundance for all), I may try to talk them through a diagram like this one. In particular, we need to break down the analysis into two cases – will the world keep rushing to build AGI, or will it pause that rush?

To explain some points from the diagram:

  • We can reach the very desirable future of SSfA by making wise use of AI only modestly more capable than what we have today
  • We might also get there as a side-effect of building AGI, but that’s very risky.

None of the probabilities are meant to be considered precise. They’re just ballpark estimates.

I estimate around 2/3 chance that the world will come to its senses and pause its current headlong rush toward building AGI.

But even in that case, risks of global catastrophe remain.

The date 2045 is also just a ballpark choice. Either of the “singularity” outcomes (wonderful or dreadful) could arrive a lot sooner than that.

The 1/12 probability I’ve calculated for “stat” (I use “stat” here as shorthand for a relatively unchanged status quo) by 2045 reflects my expectation of huge disruptions ahead, of one sort or another.

The overall conclusion: if we want SSfA, we’re much more likely to get it via the “pause AGI” branch than via the “headlong rush to AGI” branch.

And whilst doom is possible in either branch, it’s much more likely in the headlong rush branch.

For more discussion of how to get the best out of AI and other cataclysmically disruptive technologies, see my book The Singularity Principles (the entire contents are freely available online).

Feel free to post your own version of this diagram, with your own estimates of the various conditional probabilities.
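To make it easier to experiment with such estimates, here is a minimal sketch, in Python, of how the overall outcome probabilities combine from the branch estimates. The only figures taken from the commentary above are p(pause) = 2/3 and the resulting 1/12 for a roughly unchanged status quo; every per-branch conditional probability in the sketch is an illustrative placeholder, to be replaced with your own numbers.

```python
# Minimal sketch of combining conditional probabilities across the two branches.
# Only p_pause = 2/3 and the resulting ~1/12 for "stat" reflect figures mentioned
# above; every conditional probability below is an illustrative placeholder.

p_pause = 2 / 3          # estimated probability the world pauses the rush to AGI
p_rush = 1 - p_pause     # probability of a continued headlong rush

# Hypothetical outcome probabilities, conditional on each branch.
# ("SSfA" = Sustainable Superabundance for all; "stat" = roughly unchanged status quo.)
outcomes_given_pause = {"SSfA": 0.60, "doom": 0.15, "stat": 0.10, "other": 0.15}
outcomes_given_rush = {"SSfA": 0.20, "doom": 0.60, "stat": 0.05, "other": 0.15}

# Law of total probability: p(outcome) = p(pause)*p(outcome|pause) + p(rush)*p(outcome|rush)
overall = {
    outcome: p_pause * outcomes_given_pause[outcome] + p_rush * outcomes_given_rush[outcome]
    for outcome in outcomes_given_pause
}

for outcome, p in overall.items():
    print(f"p({outcome} by 2045) = {p:.3f}")
```

With these placeholder conditionals, the status quo outcome works out at roughly 1/12 overall, and the sketch reproduces the qualitative conclusions stated above: SSfA is much more likely via the pause branch, and doom is much more likely via the headlong rush.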

As indicated, I was hoping for feedback, and I was pleased to see a number of comments and questions in response.

One excellent question was this, by Bill Trowbridge:

What’s the difference between:
(a) better AI, and
(b) AGI

The line is hard to draw. So, we’ll likely just keep making better AI until it becomes AGI.

I offered this answer:

On first thought, it may seem hard to identify that distinction. But thankfully, we humans don’t just throw up our hands in resignation every time we encounter a hard problem.

For a good starting point on making the distinction, see the ideas in “A Narrow Path” by Control AI.

But what surprised me the most was the confidence expressed by various online commenters that:

  • “A pause however desirable is unlikely: p(pause) = 0.01”
  • “I am confident in saying this – pause is not an option. It is actually impossible.”
  • “There are several organisations working on AI development and at least some of them are ungovernable [hence a pause can never be global]”.

There’s evidently a large gulf between the figure of 2/3 that I suggested for p(pause) and the views of these clearly intelligent respondents.

Why a pause isn’t that inconceivable

I’ll start my argument on this topic by confirming that I see this discussion as deeply important. Different viewpoints are welcome, provided they are held thoughtfully and offered honestly.

Next, although it’s true that some organisations may appear to be ungovernable, I don’t see any fundamental issue here. As I said online,

“Given sufficient public will and/or political will, no organisation is ungovernable.”

Witness the compliance of a number of powerful corporations, in both China and the US, with control measures imposed by their national governments.

Of course, smaller actors and decentralized labs pose enforcement challenges, but these labs are less likely to be able to marshal sufficient computing capabilities to be the first to reach breakthrough new levels of capability, especially if decentralised monitoring of dangerous attributes is established.

I’ve drawn attention on previous occasions to the parallel with the apparent headlong rush in the 1980s toward nuclear weapons systems that were ever more powerful and ever more dangerous. As I explained at some length in the “Geopolitics” chapter of my 2021 book Vital Foresight, it was an appreciation of the horrific risks of nuclear winter (first articulated in the 1980s) that helped to catalyse a profound change in attitude amongst the leadership camps in both the US and the USSR.

It’s the wide recognition of risk that can provide the opportunity for governments around the world to impose an effective pause in the headlong rush toward AGI. But that’s only one of five steps that I believe are needed:

  1. Awareness of catastrophic risks
  2. Awareness of bottlenecks
  3. Awareness of mechanisms for verification and control
  4. Awareness of profound benefits ahead
  5. Awareness of the utility of incremental progress

Here are more details about these five steps I envision:

  1. Clarify in an undeniable way how superintelligent AIs could pose catastrophic risks to humanity within just a few decades, or even within a few years – so that this topic receives urgent, high-priority public attention
  2. Highlight bottlenecks and other locations within the AI production pipeline where constraints can more easily be applied (for example, distribution of large GPU chip clusters, and the few companies that are providing unique services in the creation of cutting-edge chips)
  3. Establish mechanisms that go beyond “trust” to “trust and verify”, including robust independent monitors and auditors, as well as tamperproof remote shut-down capabilities
  4. Indicate how the remarkable benefits anticipated for humanity from aspects of superintelligence can be secured, more safely and more reliably, by applying the governance mechanisms of points 2 and 3 above, rather than just blindly trusting in a no-holds-barred race to be the first to create superintelligence
  5. Be prepared to start with simpler agreements, involving fewer signatories and fewer control points, and be ready to build up stronger governance processes and culture as public consensus and understanding moves forward.

Critics can assert that each of these five steps is implausible. In each case, there are some crunchy discussions to be had. What I find dangerous, however, isn’t when people disagree with my assessments on plausibility. It’s when they approach the questions with what seems to be

  • A closed mind
  • A tribal loyalty to their perceived online buddies
  • Overconfidence that they already know all relevant examples and facts in this space
  • A willingness to distract or troll, or to offer arguments not in good faith
  • A desire to protect their flow of income, rather than honestly review new ideas
  • A resignation to the conclusion that humanity is impotent.

(For analysis of a writer who displays several of these tendencies, see my recent blogpost on the book More Everything Forever by Adam Becker.)

I’m not saying any of this will be easy! It’s probably going to be the hardest task humanity has faced in its long history.

As an illustration of points worthy of further discussion, I offer this comparison, which highlights strengths and weaknesses of both the “governance” and “pause” approaches:

Core Strategy
  • Governance (continue AGI development with oversight): Implement global rules, standards, and monitoring while AGI is developed
  • Pause (moratorium on AGI development): Impose a temporary but enforceable pause on new AGI-capable systems until safety can be assured

Assumptions
  • Governance: Governance structures can keep pace with AI progress; compliance can be verified
  • Pause: Public and political will can enforce a pause; technical progress can be slowed

Benefits
  • Governance: Encourages innovation while managing risks; allows early harnessing of AGI for societal benefit; promotes global collaboration mechanisms
  • Pause: Buys time to improve safety research; reduces risk of premature, unsafe AGI; raises the chance of achieving Beneficial General Intelligence (BGI) instead of CGI

Risks
  • Governance: Governance may be too slow, fragmented, or under-enforced; race dynamics could undermine agreements; possibility of catastrophic failure despite regulation
  • Pause: Hard to achieve global compliance; incentives for “rogue” actors to defect, in the absence of compelling monitoring; risk of stagnation or loss of trust in governance processes

Implementation Challenges
  • Governance: Requires international treaties; robust verification and auditing mechanisms; balancing national interests vs. global good
  • Pause: Defining what counts as “AGI-capable” research; enforcing restrictions across borders and corporations; maintaining pause momentum without indefinite paralysis

Historical Analogies
  • Governance: Nuclear Non-Proliferation Treaty (NPT); Montreal Protocol (ozone layer); financial regulation frameworks
  • Pause: Nuclear test bans; moratoria on human cloning research; Apollo program wind-down (pause in space race intensity)

Long-Term Outcomes (if successful)
  • Governance: Controlled and safer path to AGI; possibility of Sustainable Superabundance, but with higher risk of misalignment
  • Pause: Higher probability of reaching Sustainable Superabundance safely, but risks innovation slowdown or “black market” AGI

In short, governance offers continuity and innovation but with heightened risks of misalignment, whereas a pause increases the chances of long-term safety but faces serious feasibility hurdles.

Perhaps the best way to loosen attitudes, to allow a healthier conversation on the above points and others arising, is exposure to a greater diversity of thoughtful analysis.

And that brings me back to Global Governance of the Transition to Artificial General Intelligence by Jerome Glenn.

A necessary focus

Jerome’s book bears his personal stamp throughout. His is a unique passion – that the particular risks and issues of AGI should not be swept into a side-discussion about the risks and issues of today’s AI. These latter discussions are deeply important too, but time and again they result in existential questions about AGI being kicked down the road for months or even years. That’s something Jerome regularly challenges, rightly, and with vigour and intelligence.

Jerome’s presence is felt all over the book in one other way – he has painstakingly curated and augmented the insights of scores of different contributors and reviewers, including

  • Insights from 55 AGI experts and thought leaders across six major regions – the United States, China, the United Kingdom, Canada, the European Union, and Russia
  • The online panel of 229 participants from the global community around The Millennium Project who logged into a Real Time Delphi study of potential solutions to AGI governance, and provided at least one answer
  • Chairs and co-chairs of the 70 nodes of The Millennium Project worldwide, who provided additional feedback and opinion.

The book therefore includes many contradictory suggestions, but Jerome has woven these different threads of thought into a compelling unified tapestry.

The result is a book that carries the kind of pricing normally reserved for academic textbooks (as insisted on by the publisher). My suggestion to you is that you ask your local library to obtain a copy of what is a unique collection of ideas.

Finally, about my hallucination, mentioned at the start of this review. On double-checking, I realise that Jerome’s statement is actually, “Humanity has never faced a greater intelligence than itself.” The opening paragraph of that introduction continues,

Within a few years, most people reading these words will live with such superior artificial nonhuman intelligence for the rest of their lives. This book is intended to help us shape that intelligence or, more likely, those intelligences as they emerge.

Shaping the intelligence of the AI systems that are on the point of emerging is, indeed, a vital task.

And as Ben Goertzel says in his Foreword,

These are fantastic and unprecedented times, in which the impending technological singularity is no longer the province of visionaries and outsiders but almost the standard perspective of tech industry leaders. The dawn of transformative intelligence surpassing human capability – the rise of artificial general intelligence, systems capable of reasoning, learning, and innovating across domains in ways comparable to, or beyond, human capabilities – is now broadly accepted as a reasonably likely near-term eventuality, rather than a vague long-term potential.

The moral, social, and political implications of this are at least as striking as the technological ones. The choices we make now will define not only the future of technology but also the trajectory of our species and the broader biosphere.

To which I respond: whether we make these choices well or badly will depend on which aspects of humanity we allow to dominate our global conversation. Will humanity turn out to be its own worst enemy? Or its own best friend?

Postscript: Opportunity at the United Nations

Like it or loathe it, the United Nations still represents one of the world’s best venues where serious international discussion can, sometimes, take place on major issues and risks.

From 22nd to 30th September, the UNGA (United Nations General Assembly) will be holding what it calls its “high-level week”. This includes a multi-day “General Debate”, described as follows:

At the General Debate – the annual meeting of Heads of State and Government at the beginning of the General Assembly session – world leaders make statements outlining their positions and priorities in the context of complex and interconnected global challenges.

Ahead of this General Debate, the national delegates who will be speaking on behalf of their countries have the ability to recommend to the President of the UNGA that particular topics be named in advance as topics to be covered during the session. If the advisors to these delegates are attuned to the special issues of AGI safety, they should press their representative to call for that topic to be added to the schedule.

If this happens, all other countries will then be required to do their own research into that topic. That’s because each country will be expected to state its position on this issue, and no diplomat or politician wants to look uninformed. The speakers will therefore contact the relevant experts in their own country, and, ideally, will do at least some research of their own. Some countries might call for a pause in AGI development if it appears impossible to establish national licensing systems and international governance in sufficient time.

These leaders (and their advisors) would do well to read the report recently released by the UNCPGA entitled “Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly” – a report which I wrote about three months ago.

As I said at that time, anyone who reads that report carefully, and digs further into some of the excellent references it contains, ought to be jolted out of any sense of complacency. The sooner, the better.

30 July 2025

The most dangerous book about AI ever written

Filed under: AGI, books, Singularity — David Wood @ 10:32 pm

Strawmen. Non-sequiturs. Sleight of hand. Face-palm shockers. This book is full of them.

As such, it encourages a disastrously complacent attitude toward the risks posed by forthcoming new AI systems.

The number of times I shouted at the narrator (aloud, or just in my head), appalled at what I had just heard as yet another devious distortion reached my ears, far exceeds anything I remember from any previous book.

Ordinarily, I would have set the book aside, long before finishing it, in order to make more productive use of my limited time. But in this case, I was aware that many other readers have seemingly been taken in by all the chicanery in this book: witness its average Goodreads rating of 4.29 stars out of 5, from 466 ratings, at the time I am writing this blogpost. And from sampling some of the reviews, it’s clear that the book satisfies a psychological hunger present in many of its readers – a hunger to be scornful of some of the world’s wealthiest people.

What makes the book particularly dangerous is the way that it weaves its horrendous falsehoods into a narrative with plenty of juicy content. That’s how it lures readers into accepting its most egregious conclusions. Readers get swept along on a kind of feel-good public condemnation of larger-than-life villains. Since these villains tell people that AI is going to become more and more capable, that idea gets walloped too. Let’s hold these villains in contempt – and likewise hold in contempt their self-aggrandising ideas about AI superintelligence. Yah boo!

Thus, the book highlights the shortcomings of some of the world’s most famous entrepreneurs and technology leaders; more than that, it builds a narrative that, if these people (among them, Marc Andreessen, Jeff Bezos, Elon Musk, and Sam Altman) continue to acquire more power, it will likely have very bad consequences for the bulk of humanity. That’s because:

  • These apparent titans over-estimate their own abilities, especially outside of their original domains of expertise
  • They have deeply naïve expectations about how easy it will be for humanity to set up self-supporting colonies on other planets
  • They are prone to a kind of self-righteous moral certainty which rides roughshod over the concerns of numerous critics.

That part of the narrative is correct. I give it three cheers. But where the book goes wildly wrong is in its associated narrative about not needing to be concerned with the emergence of AI which exceeds the understanding and control of its human designers. The way the book defends its wrong conclusions about AI is by setting up strawmen – weak imitations of the real concerns about AI superintelligence – and then pointing out flaws in these strawmen.

Motivations

I’ll come to these strawmen shortly. But first, I’ll express a bit more sympathy for at least part of what Adam Becker, the author of this book, is trying to do. He explains his motivation in a recent Singularity.FM interview with Nikola Danaylov:

Becker’s previous career followed a path in which I was personally also very interested at a similar stage in my life: a fascination with cosmology and theoretical physics. That evolved into a passion (which, again, I share) for clear communication about the meaning and implications of science. Becker’s first book, What is Real? The Unfinished Quest for the Meaning of Quantum Physics, addresses the same topics on which I focussed for four years in the History and Philosophy of Science department in Cambridge in the mid-1980s. I’ve not read that book (yet), but based on various reviews, I believe I would agree with its main conclusions.

Becker’s first concept for what he should write about in his second book also gets a thumbs up from me: evidence that many tech leaders in Silicon Valley have flawed views about key aspects of science – including flawed views about biology, psychology, and sociology, as well as the physics of space travel.

As Becker explained in his Singularity.FM interview, his ideas evolved further as he tried to write his second book. His scope widened to include analyses of some of the philosophical ideas which influence many of the movers and shakers in Big Tech – ideas such as longtermism, advocated by the Oxford philosopher Will MacAskill. I share with Becker a distaste for some of the conclusions of longtermism, though I’m less convinced that Becker provides adequate rebuttals to the longtermist argumentation. (Throughout the book, when analysing philosophical positions, Becker ladles on the critical whining naysaying, but he offers little as an alternative worldview, beyond very empty generalities.)

But where I absolutely part company with Becker is in my assessment of the idea of a potential forthcoming Technological Singularity, triggered by AI becoming increasingly capable. Becker roundly and freely condemns that idea as “unscientific”, “specious”, “imaginary”, and “hypothetical”.

Strawmen

Becker’s basic narrative is this: AI superintelligence will require a complete understanding of the human brain and a complete copying of what’s happening in the brain, down to a minute level. However, we’re still a long way from obtaining that understanding. Indeed, there are now reasons to believe that significant computation is taking place inside individual neurons (beyond a simple binary summation), and that various other types of braincell also contribute to human intelligence. Moreover, little progress has been made in recent years with brain scanning.

Now, this view of “understand the human brain first and copy that precisely” might have been the view of some AI researchers in the past, but since the revolutions of Deep Neural Networks (2012+) and Transformers (2018+), it’s clear that humanity could create AI with very dangerous capabilities without either of these preconditions. It’s more accurate to say that these AIs are being grown rather than being built. They are acquiring their capabilities via emergence rather than via detailed specification. To that extent, the book is stuck in the past.

These new AIs may or may not have all the same thinking processes that take place inside the human brain. They may or may not have aspects of what we call consciousness. That’s beside the point. What matters is whether the AI gains capabilities in observing, predicting, planning interventions, and learning from the results of its predictions and interventions. It is these capabilities that give AI its increasing power to intervene in the world.

This undermines another of the strawmen in Becker’s extensive collection – his claim that ideas of AI superintelligence wrongly presuppose that all intelligence can be reduced to a single parameter, ‘g’, standing for general intelligence. On the contrary, what matters is whether AI will operate outside of human understanding and human control. That’s already nearly happening. Yet Becker prefers to reassure his readers with some puffed up philosophising. (I lost track of the number of times he approvingly quoted cognitive scientists who seemingly reassured him that intelligence was too complicated a subject for there to be any worry about AI causing a real-world catastrophe.)

It’s like a prehistoric group of chimpanzees thinking to themselves that, in various ways, their individual capabilities exceed the corresponding capabilities of individual humans. Their equivalent of Adam Becker might say, “See, there’s no unified ‘h’ parameter for all the ways that humans allegedly out-perform chimpanzees. So don’t worry chaps, we chimpanzees will remain in control of our own destiny, and humans will forever remain as just weird naked apes.”

The next strawman is the assumption that the concern about out-of-control AI depends upon the maintenance of smooth exponential progress curves. Astonishingly, Becker devotes numerous pages to pointing out ways that exponential trends, such as Moore’s Law, slow down or even stop. This leads him to assert that “AI superintelligence is imaginary”. But the real question is: is further progress possible, beyond the point we have already reached? In more detail:

  • Can more efficient computational hardware be invented? (Answer: yes, including new types of chips dedicated to new kinds of AI.)
  • Can extra data be fed into AI training? (Answer: yes, including cleverly constructed synthetic data.)
  • Can new architectures, beyond transformers, be introduced? (Answer: yes, and AI researchers are pursuing numerous possibilities.)
  • Can logical reasoning, such as chain-of-thought, be combined in productive new ways with existing neural networks? (Answer: yes, this is happening daily.)
  • Is there some fundamental reason why the human brain is the ultimate apex of the skills of predicting, planning interventions, and learning? (Answer: no, unless you are a believer in six day creationism, or something equivalent.)

So, what matters isn’t the precise shape of the pace of progress. What matters is whether that progress can reach a point that enables AIs to improve the process of designing further AIs. That’s the tipping point which will introduce huge new uncertainty.

Becker tries to head off arguments that forthcoming new types of hardware, such as quantum computing, might bring AI closer. Quantum computing, as presently understood, isn’t suited to all computational tasks, he points out. But wait: the point is that it can significantly accelerate some computational tasks. AI can improve through smarter combinations of old-style hardware and new-style hardware. We don’t need to take Becker’s simplistic one-design-only approach as the end of the argument.

The slowdown in reaching new generations of traditional semiconductor chips does not mean the end of the broader attainment of wide benefits from improved hardware performance. Instead, AI progress now depends on how huge numbers of individual chips can be networked together. Moreover, with more hardware being available at lower cost, more widely distributed, this enables richer experimentation with new algorithms and new software architectures, thereby making yet more new AI breakthroughs more likely. Any idea that breakthroughs in AI have come to a brick wall is preposterous.

Next, Becker returns repeatedly to the strawman that the kinds of threats posed by AI superintelligence are just hypothetical and are far removed from our previous experience. Surely an AI that is able to self-introspect will be kinder, he argues. However, humans who are more intelligent – including having the ability to analyse their own thought processes – are by no means necessarily kinder. They may be psychopaths. Likewise, advanced AIs may be psychopaths – able to pretend concern for human wellbeing while that tactic suits them, but ready to incapacitate us all when the opportunity arises.

Indeed, the threats posed by ever more powerful AI are relatively straightforward extrapolations of dangers posed by existing AI systems (at the hands of human users who are hateful or naïve or resentful or simply out-of-their-depth). There’s no need to make any huge jump of imagination. That’s an argument I spell out in this Mindplex article.

Yet another strawman in the book is the idea that the danger-from-advanced-AI argument needs to be certain, and that it can be rejected if any uncertainty remains about it. Thus, when Becker finds AI safety advocates who are unwilling to pin down a precise probability for the likelihood of an AI-induced catastrophe, he switches from “uncertain about the chance of doom” to “unconcerned about the chance of doom”. When two different apparent experts offer opposing views on the likelihood of AI-induced doom, he always prefers the sceptic, and rushes to dismiss the other side. (Is he really so arrogant to think he has a better grasp of the possibilities of AI-induced catastrophe than the international team of experts assembled by Yoshua Bengio? Apparently, yes he is.)

One final outrageous tactic Becker uses to justify disregarding someone’s view is to point out a questionable claim that person has made in another area. Thus, Nick Bostrom has made some shocking statements about the difference in abilities between people of different races. Therefore, all Bostrom’s views about the dangers of AI superintelligence can be set aside. Elon Musk naively imagines it will be relatively easy to terraform Mars to make it suitable for human habitation. Therefore, all Musk’s views about the dangers of AI superintelligence can, again, be set aside. You get the picture.

Constructive engagement

Instead of scorning these concerns, Becker should be engaging constructively with the community of thoughtful people who are (despite adverse headwinds) painstakingly exploring ways to get the best out of AI whilst avoiding the risks of catastrophe. This includes the Singapore Consensus, the Future of Life Institute, the Council of Presidents of the United Nations General Assembly, Control AI, Pause AI, The Millennium Project, AI Safety, the Kira Center, the Machine Intelligence Research Institute, the Center for AI Safety Research, the Centre for the Governance of AI, the Center for Human Compatible AI, the Leverhulme Centre for the Future of Intelligence, my own book “The Singularity Principles”, and much more.

That kind of constructive engagement might not lead to as many juicy personal anecdotes as Becker sprinkles throughout More Everything Forever, but it would provide much better service to humanity.

Conversely, you might ask: aren’t there any lessons for me (and other AI safety activists) in the light of the shortcomings highlighted by Becker in the thoughts and actions of many people who take the idea of the Technological Singularity seriously? Shouldn’t I be grateful to Becker for pointing out various predictions made by Ray Kurzweil which haven’t come to pass, the casual attitudes seemingly displayed by some singularitarians toward present-day risks arising from abuses of existing technology (including the ongoing emissions of greenhouse gases), the blatant links between the 2023 Techno-Optimist Manifesto of Marc Andreessen and the proto-fascist 1909 Futurist Manifesto of Filippo Marinetti, and so on?

My answer: yes, but. Almost nothing in Becker’s book was new for me. I have since 2021 frequently given presentations on the subject of “The Singularity Shadow” (the concept first appeared in my book Vital Foresight) – a set of confusions and wishful thinking which surrounds the subject of the Technological Singularity:

These confusions and wishful thinking form a kind of shadow around the central concept of the Technological Singularity – a shadow which obstructs a clearer perception of the risks and opportunities that are actually the most significant.

The Singularity Shadow misleads many people who should know better. That shadow of confusion helps to explain why various university professors of artificial intelligence, along with people with job titles such as “Head of AI” in large companies, often make statements about the likely capabilities of forthcoming new AI platforms that are, frankly, full of errors or deeply misleading.

I describe that shadow as consisting of seven overlapping areas:

  1. Singularity timescale determinism
  2. Singularity outcome determinism
  3. Singularity hyping
  4. Singularity risk complacency
  5. Singularity term overloading
  6. Singularity anti-regulation fundamentalism
  7. Singularity preoccupation

To be clear, there is a dual problem with the Singularity Shadow:

  • People within the shadow – singularity over-enthusiasts – make pronouncements about the Singularity that are variously overly optimistic, overly precise, or overly vague
  • People outside the shadow – singularity over-critics – notice these instances of unwarranted optimism, precision, or vagueness, and jump to the wrong conclusion that the entire field of discussion is infected with the same flaws.

Here’s a video that reviews the seven areas in the Singularity Shadow, and the damage this Shadow causes to thoughtful discussions about both the opportunities and the threats arising from the Singularity:

And if you want to follow the conversation one more step, this video looks more deeply at the reasons why people (such as Becker) are so insistent that the Singularity is (in his words) “unscientific”, “specious”, “imaginary”, and “hypothetical”:

That’s the ‘but’ part of my “yes, but” answer. The ‘yes’ part is that, yes, I need to reflect: after so many years of trying to significantly improve the conversation about both the opportunities and risks of the Singularity, the public conversation about it is still often dominated by Becker-style distractions and confusions.

Clearly, I need to up my game. We all need to up our game.

AGW and AGI

I’ll finish with one point of consensus: Becker is highly critical, in his book, of people who use their intelligence to deny the risks of anthropogenic global warming (AGW). Becker, like me, sees these risks as deeply concerning. We are both dismayed when evidently clever people come up with deceptive arguments to avoid taking climate change seriously. The real risk here isn’t of linear climate change, but rather of the climate reaching thresholds known as tipping points, where greater heat leads to dramatic changes in the Earth’s ecosystems that result in even greater heat. Sudden changes in temperature, akin to those just described, can be observed at ancient geological transition points.

It’s the unpredictability of what happens at these tipping points – and the uncertainty over where these tipping points are located – that means humanity should be doubling down, hard, on reversing our greenhouse gas emissions. (The best book I’ve read on this topic recently, by the way, is A Climate of Truth, by Mike Berners-Lee. I unhesitatingly recommend it.)

Yet despite these risks, AGW deniers argue as follows: there is plenty of uncertainty. There are lots of different ways of measuring temperature. There are lots of different forecasts. They don’t all agree. That means we have plenty of time to work out solutions. In the meantime, inaction is fine. (Face palm!)

I’ve spelt this out, because Becker is equally guilty. He’s not an AGW denier, but an AGI denier – denying that we need to pay any serious attention to the risks of Artificial General Intelligence. There is plenty of uncertainty about AGI, he argues. Disagreement about the best way to build it. No uniform definition of ‘g’, general intelligence. No agreement on future scenarios. Therefore, we have plenty of time to work out how to deal with any hypothetical future AGI. (Face palm again!)

Actually, this is not just a matter of a face palm. It’s a matter of the utmost seriousness. The unpredictability makes things worse, not better. Becker has allowed his intelligence to be subverted to obscure one of the biggest risks facing humanity. And because he evidently has an audience that is psychologically predisposed to lap up his criticism of Silicon Valley leaders, the confusion he peddles is likely to spread significantly.

It’s all the more reason to engage sincerely and constructively with the wider community who are working to ensure that advanced AI turns out beneficial (a “BGI”) instead of catastrophic (a “CGI”).

3 April 2025

Technology and the future of geopolitics

Filed under: AGI, books, risks — David Wood @ 12:28 pm

Ahead of last night’s London Futurists in the Pub event on “Technology and the future of geopolitics”, I circulated a number of questions to all attendees:

  • Might new AI capabilities upend former geopolitical realities, or is the potential of AI overstated?
  • What about surveillance, swarms of drones, or new stealth weapons?
  • Are we witnessing a Cold War 2.0, or does a comparison to the first Cold War mislead us?
  • What role could be played by a resurgent Europe, by the growing confidence of the world’s largest democracy, or by outreach from the world’s fourth most populous country?
  • Alternatively, will technology diminish the importance of the nation state?

I also asked everyone attending to prepare for an ice-breaker question during the introductory part of the meeting:

  • What’s one possible surprise in the future of geopolitics?

As it happened, my own experience yesterday involved a number of unexpected surprises. I may say more about these another time, but it suffices for now to mention that I spent much more time than anticipated in the A&E department of a local hospital, checking that there were no complications in the healing of a wound following some recent minor surgery. By the time I was finally discharged, it was too late for me to travel to central London to take part in the event – to which I had been looking forward so eagerly. Oops.

(Happily, the doctors that I eventually spoke to reassured me that my wound would likely heal of its own accord. “We know you were told that people normally recover from this kind of operation after ten days. Well, sometimes it takes up to six weeks.” And they prescribed an antibiotic cream for me, just in case.)

I offer big thanks to Rohit Talwar and Tony Czarnecki for chairing the event in the pub in my absence.

In the days leading up to yesterday, I had prepared a number of talking points, ready to drop into the conversation at appropriate moments. Since I could not attend in person, let me share them here.

Nuclear war: A scenario

One starting point for further discussion is a number of ideas in the extraordinary recent book by Annie Jacobsen, Nuclear War: A Scenario.

Here’s a copy of the review I wrote a couple of months ago for this book on Goodreads:

Once I started listening to this, I could hardly stop. Author and narrator Annie Jacobsen amalgamates testimonies from numerous experts from multiple disciplines into a riveting slow-motion scenario that is terrifying yet all-too-believable (well, with one possible caveat).

One point that comes out loud and clear is the vital importance of thoughtful leadership in times of crisis – as opposed to what can happen when a “mad king” takes decisions.

Also worth pondering are the fierce moral contradictions that lie at the heart of the theory of nuclear deterrence. Humans find their intuitions ripped apart under these pressures. Would an artificial superintelligence fare any better? That’s by no means clear.

(I foresee scenarios when an ASI could decide to risk a pre-emptive first strike, on behalf of the military that deployed it – under the rationale that if it fails to strike first, an enemy ASI will beat it to the punch. That’s even if humans programmed it to reject such an idea.)

Returning to the book itself (rather than my extrapolations), “Nuclear War: A Scenario” exemplifies good-quality futurism: it highlights potential chains of future causes and effects, along with convergences that complicate matters, and it challenges all of us: what actions are needed to avoid these horrific outcomes?

Finally, two individual threats that seem to be important to learn more about are what the author reports as being called “the devil’s scenario” and “the doomsday scenario”. (Despite the similarity in naming, they’re two quite different ideas.)

I don’t want to give away too many spoilers about the scenario in Jacobsen’s book. I recommend that you make the time to listen to the audio version of the book. (Some reviewers have commented that the text version of the book is tedious in places, and I can understand why; but I found no such tedium in the audio version, narrated by Jacobsen herself, adding to the sense of passion and drama.)

But one key line of thinking is as follows:

  • Some nations (e.g. North Korea) may develop new technologies (e.g. cyberhacking capabilities and nuclear launch capabilities) more quickly than the rest of the world expects
  • This would be similar to how the USSR launched Sputnik in 1957, shocking the West, which had previously been convinced that Soviet engineering capabilities lagged far behind those of muscular western capitalism
  • The leaders of some nations (e.g. North Korea, again) may feel outraged and embarrassed by criticisms of their countries made by various outsiders
  • Such a country might believe they have obtained a technological advantage that could wipe out the ability of their perceived enemies to retaliate in a second strike
  • Seeing a short window of opportunity to deploy what they regard as their new wonder weapon, and being paranoid about consequences should they miss this opportunity, they may press ahead recklessly, and tip the planet fast forward into Armageddon.

Competence and incompetence

When a country is struck by an unexpected crisis – such as an attack similar to 9/11, or the “Zero Day” disaster featured in the Netflix series of that name – the leadership of the country will be challenged to demonstrate clear thinking. Decisions will need to be taken quickly, but it will still be essential for competent, calm heads to prevail.

Alas, in recent times, a number of unprecedentedly unsuitable politicians have come into positions of great power. Here, I’m not talking about the ideology or motivation of the leader. I’m talking about whether they will be able to take sensible decisions in times of national crisis. I’m talking about politicians as unhinged as

  • One recent British Prime Minister, who managed to persuade members of her political party that she might be a kind of Margaret Thatcher Mk 2, when in fact a better comparison was with a lettuce
  • The current US President, who has surrounded himself with a uniquely ill-qualified bunch of clowns, and who has intimidated into passive acquiescence many of the more sensible members of the party he has subverted.

In the former case, the power of the Prime Minister in question was far from absolute, thankfully, and adults intervened to prevent too much damage being done. In the latter case, the jury is still out.

But rather than focus on individual cases, the broader pattern deserves our attention. We’re witnessing a cultural transformation in which

  • Actual expertise is scorned, and conspiracy merchants rise in authority instead
  • Partisan divisions which were manageable in earlier generations are nowadays magnified to a horrifically hateful extent by an “outrage industrial complex” that gains its influence from AI algorithms that identify and inflame potential triggers of alienation

The real danger is if there is a convergence of the two issues I’ve listed:

  • A rogue state, or a rogue sub-state, tries to take advantage of new technology to raise their geopolitical power and influence
  • An unprecedentedly incompetent leader of a major country responds to that crisis in ways that inflame it rather than calm it down.

The ethics of superintelligence

Actually, an even bigger danger occurs if one more complication is added to the mix: the deferment of key decisions about security and defence to a system of artificial intelligence.

Some forecasters fondly imagine that the decisions taken by AIs, in the near future, will inevitably be wiser and more ethical than whatever emerges from the brains of highly pressurised human politicians. Thus, these forecasters look forward to human decision-making being superseded by the advanced rationality of an AGI (Artificial General Intelligence).

These forecasters suggest that the AGI will benefit decisively from its survey of the entirety of great human literature about ethics and morality. It will perceive patterns that transcend current human insights. It will guide human politicians away from treacherous paths into sustainable collaborations. Surely, these forecasters insist, the superintelligence will promote peace over war, justice over discrimination, truthfulness over deception, and reconciliation over antagonism.

But when I talk to forecasters of that particular persuasion, I usually find them to be naïve. They take it for granted that there is no such thing as a just war, that it’s everyone’s duty to declare themselves a pacifist, that speaking an untruth can never be morally justified, and that even to threaten a hypothetical retaliatory nuclear strike is off-the-charts unethical. Alas, although they urge appreciation of great human literature, they seem to have only a shallow acquaintance with the real-life moral quandaries explored in that literature.

Far from any conclusion that there is never an ethical justification for wars, violence, misinformation, or the maintenance of nuclear weapons, the evidence of intense human debate on all these topics is that things are more complicated. If you try to avoid war you may actually precipitate one. If you give up your own nuclear arsenal, it may embolden enemies to deploy their own weaponry. If you cry out “disarm, disarm, hallelujah”, you may prove to be a useful idiot.

Therefore, we should avoid any hopeful prediction that an advanced AI will automatically abstain from war, violence, misinformation, or nuclear weaponry. As I said, things are more complicated.

It’s especially important to recognise that, despite exceeding human rationality in many aspects, superintelligences may well make mistakes in novel situations.

My conclusion: advanced AI may well be part of solutions to better geopolitics. But not if that AI is being developed and deployed by people who are naïve, over-confident, hurried, or vainglorious. In such circumstances, any AGI that is developed is likely to prove to be a CGI (catastrophic general intelligence) rather than a BGI (beneficial general intelligence).

Aside: to continue to explore the themes of this final section of this article, take a look at this recent essay of mine, “How to build BGIs rather than CGIs”.

6 November 2024

Not democracy’s finest hour

Filed under: books, culture, philosophy, politics, sustainability — David Wood @ 10:04 am

November 5th and 6th, 2024: not democracy’s finest hour.

Whether you are a supporter of Donald Trump or a supporter of Kamala Harris, you cannot be happy at the calibre of the public discussion during the recent US presidential race.

That discussion has been dominated by distortions, by polarisation, by superficialities, and by misleading oversimplifications.

Any nation that is unable to have an honest, open, conciliatory discussion about the key issues of social wellbeing is likely to be devastated by the strong winds of disruption that are imminent. It is likely to be overwhelmed by literal floods and fires arising from climate change. And likely to become easy pickings for its enemies both at home and overseas.

To quote from the Sermon on the Mount in the gospel according to Matthew, a house built on sand cannot stand. A house built upon superficiality will fall.

For a sustainable future, we need more solid foundations. We need a fuller shared understanding of the basis of human flourishing – an understanding that respects multiple perspectives and multiple lived experiences. We need a deeper societal agreement about the things that matter most. We need bonds of mutual support, that enable each of us to become better versions of ourselves.

To call a future a sustainable future doesn’t mean there will be no change. Change is fundamental to human life, and we must anticipate that a great deal more change lies ahead – changes in technology, demographics, ideas, standards, and narratives.

Similarly, a sustainable future doesn’t mean there will be no disagreements. But it requires people not to be constantly disagreeable, or contemptuous.

To help us build a sustainable future that can thrive on change rather than collapsing, and where disagreements are constructive rather than contemptuous, whither can we turn?

As someone who spent 25 years immersed in the technology industry, my first instinct is to suggest that we should turn to technology. However, the USA is already awash with technology. The companies that are at the forefront of the global race toward AI superintelligence are headquartered in the US. That surfeit of technology has by no means translated into better democracy.

My next instinct, as someone with a strong personal interest in philosophy, is to suggest that we need to encourage more people to appreciate the insights of that field. Instead of being swept along by rip-roaring tides of “we can” and “we want to”, we need to spend more time pondering “we should” – more time considering alternative scenarios for how the future might unfold.

But even before people are willing to look at alternative possibilities, there needs to be a softening of the spirit.

So my biggest personal takeaway, overnight, is that I should stop looking with hostility or contempt at the vast numbers of people who have reached different decisions from me about (for example) which person should become the President of the USA. For that reason, I have resolved to spend some time over the next few days listening to the audio of the 2019 book by Arthur C. Brooks, Love Your Enemies: How Decent People Can Save America from the Culture of Contempt.

(The book has the word ‘America’ in its title, but I feel sure its messages apply elsewhere in the world too.)

Earlier this morning, the following sentence from the book’s opening chapter struck me hard: “We need national healing every bit as much as economic growth”.

That’s a good start!

(I prepared the remarks above to share with attendees at a private roundtable conversation this evening, arranged long before the US election took place – a conversation with the topic “How can we harness emerging technology to improve our democracies?”)

1 January 2023

Enabling rethinking

Filed under: books, change, communications, psychology, retrospection — David Wood @ 10:45 pm

At the start of a new year, it’s customary for people to reflect on their life trajectories. Are the visions, attitudes, activities, and alliances, that have brought us to the end of one year, the best set to keep following in the next year?

So, new years are known for retrospectives – reviews of our past successes and disappointments – and for new resolutions – initiatives to change parts of our lifestyles.

My view, however, is that the pace of change in society is so rapid nowadays that we can no longer keep our rethinking – our retrospectives and our new resolutions – to something that just happens occasionally – perhaps once a year.

Moreover, the wide scope of change in society demands a rethinking that is not only more frequent, but also more searching, more radical, and more effective.

Accordingly, perhaps the single most important skill today is the ability to unlearn and relearn, quickly and successfully.

That’s why I put “Learning how to learn” as the very first area (out of 24) in the Vital Syllabus project which I oversee.

It’s also why my attention was drawn to the book Think Again by organizational psychologist Adam Grant.

I was intrigued by the subtitle “The power of knowing what you don’t know”, and by the recommendation on the book cover, “Guaranteed to make you rethink your opinions and your most important decisions”.

I downloaded the audio version of the book to my phone a couple of weeks ago, and started listening to it. I finished it earlier today. It was a great choice to be my last listen of the year.

It’s narrated by the author himself. I see from his biography that he “has been Wharton’s top-rated professor for seven straight years”. After listening to his narration, I’m not at all surprised.

The chapters all involve engaging narratives with several layers of meaning which become clearer as the book progresses. Since I’m widely read myself, several of the narratives touched on material where I had some prior familiarity:

  • Learning from “superforecasters”
  • Learning from the Wright brothers (aviation pioneers)
  • Learning from expert negotiators
  • Learning from debating champions
  • Learning from the tragedies at NASA
  • Learning from the Dunning-Kruger analysis of overconfidence.

But in each case, the author drew attention to extra implications that I had not appreciated before. I’m considerably wiser as a result.

My own specific takeaways from the book are a number of new habits I want to include in my personal reflections and interactions. I see the book as enabling better rethinking:

  • Individually – when I contemplate my own knowledge, skills, and, yes, limitations and doubts
  • Interpersonally – when I bump up against other people with different beliefs, opinions, and loyalties
  • Within communities – so that an organisation is better able to avoid groupthink and more able to successfully pivot when needed
  • Transcending polarisation – by highlighting the complexities of real-world decisions, by encouraging “dancing” rather than conflicts, by expressing a wider range of emotions, and much more.

Because polarisation is such a challenging issue in today’s world, especially in politics (see my comments in this previous book review), the methods Grant highlights are particularly timely.

I’ve already found a couple of videos online that cover some of these points, and added them into various pages of the Vital Syllabus (here and here, so far). I’m sure there’s a lot more material out there, which should likewise be included.

If you have any additional suggestions, don’t hesitate to let me know!

17 December 2022

The best book I read in 2022

Filed under: books — David Wood @ 12:55 pm

I’ve checked the records I’ve created over the year in Goodreads. I see that, out of the books I read all the way through in 2022, I was inspired to give sixteen the maximum Goodreads rating of five stars out of five.

(Actually, I mainly listened to these books as audiobooks, rather than reading them.)

You can see their covers in the following image.


Each of these books gave me plenty to think about. I’m grateful in every case for the effort and inspiration of the authors.

But one of these books stands out as being even more impressive and impactful than all the others.

It’s The Revenge of Power, by Moisés Naím.

Here are some extracts from the Wikipedia page of the author:

Moisés Naím (born July 5, 1952, in Tripoli, Libya) is a Venezuelan journalist and writer. He is a Distinguished Fellow at the Carnegie Endowment for International Peace. In 2013, the British magazine Prospect listed Naím as one of the world’s leading thinkers. In 2014 and 2015, Dr. Naím was ranked among the top 100 influential global thought leaders by Gottlieb Duttweiler Institute (GDI).

He is the former Minister of Trade and Industry for Venezuela, Director of its Central bank, and Executive Director of the World Bank.

Naím studied at the Universidad Metropolitana in Caracas, Venezuela. Following his undergraduate studies, he attended the Massachusetts Institute of Technology, where he obtained both a master of science and doctorate degrees.

Naím was a professor of business strategy and industrial economics at Instituto de Estudios Superiores de Administración (IESA), Venezuela’s leading business school and research center located in Caracas. He also served as its Dean between 1979 and 1986.

Naím served as the editor-in-chief of Foreign Policy magazine for 14 years (1996-2010). Since 2012, he has directed and hosted Efecto Naím, a weekly televised news program on the economy and international affairs that airs throughout the Americas on NTN24. In 2011, he received the Ortega y Gasset Prize for his important contribution to journalism in the Spanish language.

Naím ably deploys that rich experience and expertise in the writing of his new book.

The book’s full title is “The Revenge of Power: The Global Assault on Democracy and How to Defeat It”.

Here are the reasons why the book particularly stands out for me, and why I believe you should read it too:

  • The subject has fundamental global importance. All our aspirations in other areas of life – health, education, sport, travel, technology, art – are subject to destruction if politics falls further into the hands of autocrats
  • Every single chapter was eye-opening, introducing important new material
  • The analysis covers threats from both the right and the left, and is full of captivating details about politics in numerous countries around the world
  • The book draws together its various ideas into a coherent overarching framework – the “three P’s” of populism, polarization and post-truth (you might think at first, like I did, that this sounds a bit trite; but be prepared to change your mind)
  • It clarifies what is different, today, compared to the threats posed by autocrats of previous generations
  • It also clarifies how new technological possibilities – compared to the newspapers and radio and TV of the past – pose further challenges to the maintenance of democracy
  • It vividly explains the concept of “status dissonance” that is one of several factors causing electorates to look favourably at potential autocrats
  • It provides a stirring defence of the principles of the separation of powers, and the maintenance of checks and balances.

Many parts of the book are truly frightening. This is not some abstract issue, nor some far-future concern. As the book highlights, it’s a live here-and-now issue. I confess that several episodes it covered left me hopping mad.

Finally, it has specific recommendations on what needs to be done, to ward off the threats posed to the wellbeing of politics around the world. These recommendations are “five battles we need to win” – against falsehoods, criminalized governments, foreign subversion, political cartels and narratives of illiberalism.

None of these battles will be easy. But they’re all winnable, with sufficient effort, intelligence, and collaboration.

8 June 2022

Pre-publication review: The Singularity Principles

Filed under: books, Singularity, Singularity Principles — David Wood @ 9:23 am

I’ve recently been concentrating on finalising the content of my forthcoming new book, The Singularity Principles.

The reasons why I see this book as both timely and necessary are explained in the extract below, taken from the introduction to the book.

This link provides pointers to the full text of every chapter in the book. (Or use the links in the listing below of the extended table of contents.)

Please get in touch with me if you would prefer to read the pre-publication text in PDF format, rather than on the online HTML pages linked above.

At this stage, I will greatly appreciate any feedback:

  • Aspects of the book that I should consider changing
  • Aspects of the book that you particularly like.

Feedback on any parts of the book will be welcome. It’s by no means necessary for you to read the entire text. (However, I hope you will find it sufficiently interesting that you will end up reading more than you originally planned…)

By the way, it’s a relatively short book, compared to some others I’ve written. The wordcount is a bit over 50 thousand words. That works out at around 260 pages of fairly large text on 5″x8″ paper.

I will also appreciate any commendations or endorsements, which I can include with the publicity material for the book, to encourage more people to pay attention to it.

The timescale I have in mind: I will release electronic and physical copies of the book some time early next month (July), followed up soon afterward by an audio version.

Therefore, if you’re thinking of dipping into any chapters to provide feedback and/or endorsements, the sooner the better!

Thanks in anticipation!

Preface

This book is dedicated to what may be the most important concept in human history, namely, the Singularity – what it is, what it is not, the steps by which we may reach it, and, crucially, how to make it more likely that we’ll experience a positive singularity rather than a negative singularity.

For now, here’s a simple definition. The Singularity is the emergence of Artificial General Intelligence (AGI), and the associated transformation of the human condition. Spoiler alert: that transformation will be profound. But if we’re not paying attention, it’s likely to be profoundly bad.

Despite the importance of the concept of the Singularity, the subject receives nothing like the attention it deserves. When it is discussed, it often receives scorn or ridicule. Alas, you’ll hear sniggers and see eyes rolling.

That’s because, as I’ll explain, there’s a kind of shadow around the concept – an unhelpful set of distortions that make it harder for people to fully perceive the real opportunities and the real risks that the Singularity brings.

These distortions grow out of a wider confusion – confusion about the complex interplay of forces that are leading society to the adoption of ever-more powerful technologies, including ever-more powerful AI.

It’s my task in this book to dispel the confusion, to untangle the distortions, to highlight practical steps forward, and to attract much more serious attention to the Singularity. The future of humanity is at stake.

Let’s start with the confusion.

Confusion, turbulence, and peril

The 2020s could be called the Decade of Confusion. Never before has so much information washed over everyone, leaving us, all too often, overwhelmed, intimidated, and distracted. Former certainties have dimmed. Long-established alliances have fragmented. Flurries of excitement have pivoted quickly to chaos and disappointment. These are turbulent times.

However, if we could see through the confusion, distraction, and intimidation, what we should notice is that human flourishing is, potentially, poised to soar to unprecedented levels. Fast-changing technologies are on the point of providing a string of remarkable benefits. We are near the threshold of radical improvements to health, nutrition, security, creativity, collaboration, intelligence, awareness, and enlightenment – with these improvements being available to everyone.

Alas, these same fast-changing technologies also threaten multiple sorts of disaster. These technologies are two-edged swords. Unless we wield them with great skill, they are likely to spin out of control. If we remain overwhelmed, intimidated, and distracted, our prospects are poor. Accordingly, these are perilous times.

These dual future possibilities – technology-enabled sustainable superabundance, versus technology-induced catastrophe – have featured in numerous discussions that I have chaired at London Futurists meetups going all the way back to March 2008.

As these discussions have progressed, year by year, I have gradually formulated and refined what I now call the Singularity Principles. These principles are intended:

  • To steer humanity’s relationships with fast-changing technologies,
  • To manage multiple risks of disaster,
  • To enable the attainment of remarkable benefits,
  • And, thereby, to help humanity approach a profoundly positive singularity.

In short, the Singularity Principles are intended to counter today’s widespread confusion, distraction, and intimidation, by providing clarity, credible grounds for hope, and an urgent call to action.

This time it’s different

I first introduced the Singularity Principles, under that name and with the same general format, in the final chapter, “Singularity”, of my 2021 book Vital Foresight: The Case for Active Transhumanism. That chapter is the culmination of a 642-page book. The preceding sixteen chapters of that book set out at some length the challenges and opportunities that these principles need to address.

Since the publication of Vital Foresight, it has become evident to me that the Singularity Principles require a short, focused book of their own. That’s what you now hold in your hands.

The Singularity Principles is by no means the only new book on the subject of the management of powerful disruptive technologies. The public, thankfully, are waking up to the need to understand these technologies better, and numerous authors are responding to that need. As one example, the phrase “Artificial Intelligence” forms part of the title of scores of new books.

I have personally learned many things from some of these recent books. However, to speak frankly, I find myself dissatisfied by the prescriptions these authors have advanced. These authors generally fail to appreciate the full extent of the threats and opportunities ahead. And even if they do see the true scale of these issues, the recommendations these authors propose strike me as being inadequate.

Therefore, I cannot keep silent.

Accordingly, I present in this new book the content of the Singularity Principles, brought up to date in the light of recent debates and new insights. The book also covers:

  • Why the Singularity Principles are sorely needed
  • The source and design of these principles
  • The significance of the term “Singularity”
  • Why there is so much unhelpful confusion about “the Singularity”
  • What’s different about the Singularity Principles, compared to recommendations of other analysts
  • The kinds of outcomes expected if these principles are followed
  • The kinds of outcomes expected if these principles are not followed
  • How you – dear reader – can, and should, become involved, finding your place in a growing coalition
  • How these principles are likely to evolve further
  • How these principles can be put into practice, all around the world – with the help of people like you.

The scope of the Principles

To start with, the Singularity Principles can and should be applied to the anticipation and management of the NBIC technologies that are at the heart of the current, fourth industrial revolution. NBIC – nanotech, biotech, infotech, and cognotech – is a quartet of interlinked technological disruptions which are likely to grow significantly stronger as the 2020s unfold. Each of these four technological disruptions has the potential to fundamentally transform large parts of the human experience.

However, the same set of principles can and should also be applied to the anticipation and management of the core technology that will likely give rise to a fifth industrial revolution, namely the technology of AGI (artificial general intelligence), and the rapid additional improvements in artificial superintelligence that will likely follow fast on the heels of AGI.

The emergence of AGI is known as the technological singularity – or, more briefly, as the Singularity.

In other words, the Singularity Principles apply both:

  • To the longer-term lead-up to the Singularity, from today’s fast-improving NBIC technologies,
  • And to the shorter-term lead-up to the Singularity, as AI gains more general capabilities.

In both cases, anticipation and management of possible outcomes will be of vital importance.

By the way – in case it’s not already clear – please don’t expect a clever novel piece of technology, or some brilliant technical design, to somehow solve, by itself, the challenges posed by NBIC technologies and AGI. These challenges extend far beyond what could be wrestled into submission by some dazzling mathematical wizardry, by the incorporation of an ingenious new piece of silicon at the heart of every computer, or by any other “quick fix”. Indeed, the considerable effort being invested by some organisations in a search for that kind of fix is, arguably, a distraction from a sober assessment of the bigger picture.

Better technology, better product design, better mathematics, and better hardware can all be part of the full solution. But that full solution also needs, critically, to include aspects of organisational design, economic incentives, legal frameworks, and political oversight. That’s the argument I develop in the chapters ahead.

Extended table of contents

For your convenience, here’s a listing of the main section headings for all the chapters in this book.

0. Preface

  • Confusion, turbulence, and peril
  • This time it’s different
  • The scope of the Principles
  • Collective insight
  • The short form of the Principles
  • The four areas covered by the Principles
  • What lies ahead

1. Background: Ten essential observations

  • Tech breakthroughs are unpredictable (both timing and impact)
  • Potential complex interactions make prediction even harder
  • Changes in human attributes complicate tech changes
  • Greater tech power enables more devastating results
  • Different perspectives assess “good” vs. “bad” differently
  • Competition can be hazardous as well as beneficial
  • Some tech failures would be too drastic to allow recovery
  • A history of good results is no guarantee of future success
  • It’s insufficient to rely on good intentions
  • Wishful thinking predisposes blindness to problems

2. Fast-changing technologies: risks and benefits

  • Technology risk factors
  • Prioritising benefits?
  • What about ethics?
  • The transhumanist stance

2.1 Special complications with artificial intelligence

  • Problems with training data
  • The black box nature of AI
  • Interactions between multiple algorithms
  • Self-improving AI
  • Devious AI
  • Four catastrophic error modes
  • The broader perspective

2.2 The AI Control Problem

  • The gorilla problem
  • Examples of dangers with uncontrollable AI
  • Proposed solutions (which don’t work)
  • The impossibility of full verification
  • Emotion misses the point
  • No off switch
  • The ineffectiveness of tripwires
  • Escaping from confinement
  • The ineffectiveness of restrictions
  • No automatic super ethics
  • Issues with hard-wiring ethical principles

2.3 The AI Alignment Problem

  • Asimov’s Three Laws
  • Ethical dilemmas and trade-offs
  • Problems with proxies
  • The gaming of proxies
  • Simple examples of profound problems
  • Humans disagree
  • No automatic super ethics (again)
  • Other options for answers?

2.4 No easy solutions

  • No guarantees from the free market
  • No guarantees from cosmic destiny
  • Planet B?
  • Humans merging with AI?
  • Approaching the Singularity

3. What is the Singularity?

  • Breaking down the definition
  • Four alternative definitions
  • Four possible routes to the Singularity
  • The Singularity and AI self-awareness
  • Singularity timescales
  • Positive and negative singularities
  • Tripwires and canary signals
  • Moving forward

3.1 The Singularitarian Stance

  • AGI is possible
  • AGI could happen within just a few decades
  • Winner takes all
  • The difficulty of controlling AGI
  • Superintelligence and superethics
  • Not the Terminator
  • Opposition to the Singularitarian Stance

3.2 A complication: the Singularity Shadow

  • Singularity timescale determinism
  • Singularity outcome determinism
  • Singularity hyping
  • Singularity risk complacency
  • Singularity term overloading
  • Singularity anti-regulation fundamentalism
  • Singularity preoccupation
  • Looking forward

3.3 Bad reasons to deny the Singularity

  • The denial of death
  • How special is the human mind?
  • A credible positive vision

4. The question of urgency

  • Factors causing AI to improve
  • 15 options on the table
  • The difficulty of measuring progress
  • Learning from Christopher Columbus
  • The possibility of fast take-off

5. The Singularity Principles in depth

5.1 Analysing goals and potential outcomes

  • Question desirability
  • Clarify externalities
  • Require peer reviews
  • Involve multiple perspectives
  • Analyse the whole system
  • Anticipate fat tails

5.2 Desirable characteristics of tech solutions

  • Reject opacity
  • Promote resilience
  • Promote verifiability
  • Promote auditability
  • Clarify risks to users
  • Clarify trade-offs

5.3 Ensuring development takes place responsibly

  • Insist on accountability
  • Penalise disinformation
  • Design for cooperation
  • Analyse via simulations
  • Maintain human oversight

5.4 Evolution and enforcement

  • Build consensus regarding principles
  • Provide incentives to address omissions
  • Halt development if principles are not upheld
  • Consolidate progress via legal frameworks

6. Key success factors

  • Public understanding
  • Persistent urgency
  • Reliable action against noncompliance
  • Public funding
  • International support
  • A sense of inclusion and collaboration

7. Questions arising

7.1 Measuring human flourishing

  • Some example trade-offs
  • Updating the Universal Declaration of Human Rights
  • Constructing an Index of Human and Social Flourishing

7.2 Trustable monitoring

  • Moore’s Law of Mad Scientists
  • Four projects to reduce the dangers of WMDs
  • Detecting mavericks
  • Examples of trustable monitoring
  • Watching the watchers

7.3 Uplifting politics

  • Uplifting regulators
  • The central role of politics
  • Toward superdemocracy
  • Technology improving politics
  • Transcending party politics
  • The prospects for political progress

7.4 Uplifting education

  • Top level areas of the Vital Syllabus
  • Improving the Vital Syllabus

7.5 To AGI or not AGI?

  • Global action against the creation of AGI?
  • Possible alternatives to AGI?
  • A dividing line between AI and AGI?
  • A practical proposal

7.6 Measuring progress toward AGI

  • Aggregating expert opinions
  • Metaculus predictions
  • Alternative canary signals for AGI
  • AI index reports

7.7 Growing a coalition of the willing

  • Risks and actions

Image credit

The draft book cover shown above includes a design by Pixabay member Ebenezer42.

26 May 2021

A preview of Vital Foresight

Filed under: books, Vital Foresight — David Wood @ 8:33 am

Update on 23rd June 2021: Vital Foresight has now been published as an ebook and as a paperback.

Here are the Amazon links:

The open preview mentioned in this post has now ended.

For more details about the book, including endorsements by early readers, see here.

The original blogpost follows:


Vital Foresight is almost ready.

That’s the title of the book I’ve been writing since August. It’s the most important book I’ve ever written.

The subtitle is The Case for Active Transhumanism.

Below, please find a copy of the Preface to Vital Foresight. The preface summarises the scope and intent of the book, and describes its target audience.

At this time, I am inviting people to take a look at previews of one or more of the chapters, and, if you feel inspired, to offer some feedback.

Here are examples of what I encourage you to make comments or suggestions about:

  • You particularly like some of the material
  • You dislike some of the material
  • You think contrary opinions should be considered
  • There appear to be mistakes in the spelling or grammar
  • The material is difficult to read or understand
  • The ideas could be expressed more elegantly
  • You have any other thoughts you wish to share.

Unless you indicate a preference for anonymity, reviewers will be thanked in the Acknowledgements section at the end of the book.

The chapters can be accessed as Google Doc files. Here’s the link to the starting point.

This article lists twenty key features of the book – topics it covers in unique ways.

And, for your convenience, here’s a copy of the Preface.

Preface

“Transhumanism”?

“Don’t put that word on the cover of your book!”

That’s the advice I received from a number of friends when they heard what I was writing about. They urged me to avoid “the ‘T’ word” – “transhumanism”. That word has bad vibes, they said. It’s toxic. T for toxic.

I understand where they’re coming from. Later in this book, I’ll dig into reasons why various people are uncomfortable with the whole concept. I’ll explain why I nevertheless see “transhumanism” as an apt term for a set of transformational ideas that will be key to our collective wellbeing in the 2020s and beyond. T for transformational. And, yes, T for timely.

As such, it’s a word that belongs on the cover of many more books, inspiring more conversations, more realisations, and more breakthroughs.

For now, in case you’re wondering, here’s a short definition. It’s by Oxford polymath Anders Sandberg, who expressed it like this in 1997:

Transhumanism is the philosophy that we can and should develop to higher levels, physically, mentally, and socially, using rational methods.

Sandberg’s 1997 webpage also features this summary from trailblazing Humanity+ Board member and Executive Director, Natasha Vita-More:

Transhumanism is a commitment to overcoming human limits in all their forms, including extending lifespan, augmenting intelligence, perpetually increasing knowledge, achieving complete control over our personalities and identities, and gaining the ability to leave the planet. Transhumanists seek to achieve these goals through reason, science, and technology.

In brief, transhumanism is a vision of the future: a vision of what’s possible, what’s desirable, and how it can be brought into reality.

In subsequent chapters, I’ll have lots more to say about the strengths and weaknesses of transhumanism. I’ll review the perceived threats and the remarkable opportunities that arise from it. But first, let me quickly introduce myself and how I came to be involved in the broader field of foresight (also known as futurism) within which transhumanism exists.

Smartphones and beyond

Over the twenty-five years that I held different roles within the mobile computing and smartphone industries, it was an increasingly central part of my job to think creatively and critically about future possibilities.

Back in the late 1980s and early 1990s, my work colleagues and I could see that computing technology was becoming ever more powerful. We debated long and hard, revisiting the same questions many times as forthcoming new hardware and software capabilities came to our attention. What kinds of devices should we design, to take advantage of these new capabilities? Which applications would users of these devices find most valuable? How might people feel as they interacted with different devices with small screens and compact keypads? Would the Internet ever become useful for “ordinary people”? Would our industry be dominated by powerful, self-interested corporations with monolithic visions, or would multiple streams of innovation flourish?

My initial involvement with these discussions was informal. Most of my time at work went into software engineering. But I enjoyed animated lunchtime discussions at Addison’s brasserie on Old Marylebone Road in central London, where technical arguments about, for example, optimising robust access to data structures, were intermingled with broader brainstorms about how we could collectively steer the future in a positive direction.

Over time, I set down more of my own ideas in writing, in emails and documents that circulated among teammates. I also had the good fortune to become involved in discussions with forward-thinking employees from giants of the mobile phone world – companies such as Nokia, Ericsson, Motorola, Panasonic, Sony, Samsung, Fujitsu, and LG, that were considering using our EPOC software (later renamed as “Symbian OS”) in their new handsets. I learned a great deal from these discussions.

By 2004 my job title was Executive VP for Research. It was my responsibility to pay attention to potential disruptions that could transform our business, either by destroying it, or by uplifting it. I came to appreciate that, in the words of renowned management consultant Peter Drucker, “the major questions regarding technology are not technical but human questions”. I also became increasingly persuaded that the disruptions of the smartphone market, significant though they were, were but a small preview of much larger disruptions to come.

As I’ll explain in the pages ahead, these larger disruptions could bring about a significant uplift in human character. Another possibility, however, is the destruction of much that we regard as precious.

Accordingly, the skills of foresight are more essential today than ever. We need to strengthen our collective capabilities in thinking creatively and critically about future possibilities – and in acting on the insights arising.

Indeed, accelerating technological change threatens to shatter the human condition in multiple ways. We – all of us – face profound questions over the management, not just of smartphones, but of artificial intelligence, nanoscale computers, bio-engineering, cognitive enhancements, ubiquitous robots, drone swarms, nuclear power, planet-scale geo-engineering, and much more.

What these technologies enable is, potentially, a world of extraordinary creativity, unprecedented freedom, and abundant wellbeing. That’s provided we can see clearly enough, in advance, the major disruptive opportunities we will need to seize and steer, so we can reach that destination. And provided we can step nimbly through a swath of treacherous landmines along the way.

That’s no small undertaking. It will take all our wisdom and strength. It’s going to require the very highest calibre of foresight.

That’s the reason why I’ve spent so much of my time in recent years organising and hosting hundreds of public meetings of the London Futurists community, both offline and online – events with the general headline of “serious analysis of radical scenarios for the next three to forty years”.

I acknowledge, however, that foresight is widely thought to have a poor track record. Forecasts of the future, whether foretelling doom and gloom, or envisioning technological cornucopia, seem to have been wrong at least as often as they have been right. Worse, instead of helping us to see future options more clearly, past predictions have, all too frequently, imposed mental blinkers, encouraged a stubborn fatalism, or distracted us from the truly vital risks and opportunities. It’s no wonder that the public reputation of futurism is scarcely better than that of shallow tabloid horoscopes.

To add to the challenge, our long-honed instincts about social norms and human boundaries prepare us poorly for the counterintuitive set of radical choices that emerging technology now dangles before us. We’re caught in a debilitating “future shock” of both fearful panic and awestruck wonder.

Happily, assistance is at hand. What this book will demonstrate is that vital foresight from the field I call active transhumanism can help us all:

  1. To resist unwarranted tech hype, whilst remaining aware that credible projections of today’s science and engineering could enable sweeping improvements in the human condition
  2. To distinguish future scenarios with only superficial attractions from those with lasting, sustainable benefits
  3. To move beyond the inaction of future shock, so we can coalesce around practical initiatives that advance deeply positive outcomes.

The audience for vital foresight

I’ve written this book for everyone who cares about the future:

  • Everyone trying to anticipate and influence the dramatic changes that may take place in their communities, organisations, and businesses over the next few years
  • Everyone concerned about risks of environmental disaster, the prevalence of irrationalism and conspiracy theories, growing inequality and social alienation, bioengineered pandemics, the decline of democracy, and the escalation of a Cold War 2.0
  • Everyone who has high hopes for technological solutions, but who is unsure whether key innovations can be adopted wisely enough and quickly enough
  • Everyone seeking a basic set of ethical principles suited for the increasing turbulence of the 2020s and beyond – principles that preserve the best from previous ethical frameworks, but which are open to significant updates in the wake of the god-like powers being bestowed on us by new technologies.

Although it reviews some pivotal examples from my decades of experience in business, this is not a book about the future of individual businesses or individual industries.

Nor is it a “get rich quick” book, or one that promotes “positive thinking” or better self-esteem. Look elsewhere, if that is what you seek.

Instead, it’s a book about the possibilities – indeed, the necessity – for radical transformation:

  • Transformation of human nature
  • Transformation of our social and political frameworks
  • Transformation of our relations with the environment and the larger cosmos
  • Transformation of our self-understanding – the narratives we use to guide all our activities.

Critically, this book contains practical suggestions for next steps to be taken, bearing in mind the power and pace of forces that are already remaking the world faster than was previously thought possible.

And it shows that foresight, framed well, can provide not only a stirring vision, but also the agility and resilience to cope with the many contingencies and dangers to be encountered on the journey forward.

Looking ahead

Here’s my summary of the most vital piece of foresight that I can offer.

Oncoming waves of technological change are poised to deliver either global destruction or a paradise-like sustainable superabundance, with the outcome depending on the timely elevation of transhumanist vision, transhumanist politics, and transhumanist education.

You’ll find that same 33-word paragraph roughly halfway through the book, in the chapter “Creativity”, in the midst of a dialogue about (can you guess…?) hedgehogs and foxes. I’ve copied the paragraph to the beginning of the book to help you see where my analysis will lead.

The summary is short, but the analysis will take some time. The scenarios that lie ahead for humanity – whether global destruction or sustainable superabundance – involve rich interactions of multiple streams of thought and activity. There’s a lot we’ll need to get our heads around, including disruptions in technology, health, culture, economics, politics, education, and philosophy. Cutting corners on understanding any one of these streams could yield a seriously misleading picture of our options for the future. Indeed, if we skimp on our analysis of future possibilities, we should not be surprised if humanity falls far short of our true potential.

However, I realise that each reader of this book will bring different concerns and different prior knowledge. By all means jump over various sections of the book to reach the parts that directly address the questions that are uppermost in your mind. Let the table of contents be your guide. If need be, you can turn back the pages later, to fill in any gaps in the narrative.

Better foresight springs, in part, from better hindsight. It’s particularly important to understand the differences between good foresight and bad foresight – to review past examples of each, learning from both the failures and, yes, the occasional successes of previous attempts to foresee and create the future. That’s one of our key tasks in the pages ahead.

In that quest, let’s move forward to an example from the rainbow nation of South Africa. Before we reach the hedgehogs and foxes, I invite you to spend some time with (can you guess…?) ostriches and flamingos.


1 March 2021

The imminence of artificial consciousness

Filed under: AGI, books, brain simulation, London Futurists — David Wood @ 10:26 am

I’ve changed my mind about consciousness.

I used to think that, of the two great problems about artificial minds – namely, achieving artificial general intelligence, and achieving artificial consciousness – progress toward the former would be faster than progress toward the latter.

After all, progress in understanding consciousness had seemed particularly slow, whereas enormous numbers of researchers in both academia and industry have been attaining breakthrough after breakthrough with new algorithms in artificial reasoning.

Over the decades, I’d read a number of books by Daniel Dennett and other philosophers who claimed to have shown that consciousness was basically already understood. There’s nothing spectacularly magical or esoteric about consciousness, Dennett maintained. What’s more, we must beware being misled by our own introspective understanding of our consciousness. That inner introspection is subject to distortions – perceptual illusions, akin to the visual illusions that often mislead us about what we think our eyes are seeing.

But I’d found myself at best semi-convinced by such accounts. I felt that, despite the clever analyses in such accounts, there was surely more to the story.

The most famous expression of the idea that consciousness still defies a proper understanding is the formulation by David Chalmers. This is from his watershed 1995 essay “Facing Up to the Problem of Consciousness”:

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect… There is something it is like to be a conscious organism. This subjective aspect is experience.

When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?

It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

However, as Wikipedia notes,

The existence of a “hard problem” is controversial. It has been accepted by philosophers of mind such as Joseph Levine, Colin McGinn, and Ned Block and cognitive neuroscientists such as Francisco Varela, Giulio Tononi, and Christof Koch. However, its existence is disputed by philosophers of mind such as Daniel Dennett, Massimo Pigliucci, Thomas Metzinger, Patricia Churchland, and Keith Frankish, and cognitive neuroscientists such as Stanislas Dehaene, Bernard Baars, Anil Seth and Antonio Damasio.

With so many smart people apparently unable to agree, what hope is there for a layperson to have any confidence in answering the question: is consciousness already explained in principle, or do we need some fundamentally new insights?

It’s tempting to say, therefore, that the question should be left to one side. Instead of squandering energy spinning circles of ideas with little prospect of real progress, it would be better to concentrate on numerous practical questions: vaccines for pandemics, climate change, taking the sting out of psychological malware, protecting democracy against latent totalitarianism, and so on.

That practical orientation is the one that I have tried to follow most of the time. But there are four reasons, nevertheless, to keep returning to the question of understanding consciousness. A better understanding of consciousness might:

  1. Help provide therapists and counsellors with new methods to address the growing crisis of mental ill-health
  2. Change our attitudes towards the suffering we inflict, as a society, upon farm animals, fish, and other creatures
  3. Provide confidence on whether copying of memories and other patterns of brain activity, into some kind of silicon storage, could result at some future date in the resurrection of our consciousness – or whether any such reanimation would, instead, be “only a copy” of us
  4. Guide the ways in which systems of artificial intelligence are being created.

On that last point, consider the question whether AI systems will somehow automatically become conscious, as they gain in computational ability. Most AI researchers have been sceptical on that score. Google Maps is not conscious, despite all the profoundly clever things that it can do. Neither is your smartphone. As for the Internet as a whole, opinions are a bit more mixed, but again, the general consensus is that all the electronic processing happening on the Internet is devoid of the kind of subjective inner experience described by David Chalmers.

Yes, lots of software has elements of being self-aware. Such software contains models of itself. But it’s generally thought (and I agree, for what it’s worth) that such internal modelling is far short of subjective inner experience.

One prospect this raises is the dark possibility that humans might be superseded by AIs that are considerably more intelligent than us, but that such AIs would have “no-one at home”, that is, no inner consciousness. In that case, a universe with AIs instead of humans might have much more information processing, but be devoid of conscious feelings. Mega oops.

The discussion at this point is sometimes led astray by the popular notion that any threat from superintelligent AIs to human existence is predicated on these AIs “waking up” or becoming conscious. In that popular narrative, any such waking up might give an AI an additional incentive to preserve itself. Such an AI might adopt destructive human “alpha male” combative attitudes. But as I say, that’s a faulty line of reasoning. AIs might well be motivated to preserve themselves without ever gaining any consciousness. (Look up the concept of “basic AI drives” by Steve Omohundro.) Indeed, a cruise missile that locks onto a target poses a threat to that target, not because the missile is somehow conscious, but because it has enough intelligence to navigate to its target and explode on arrival.

Indeed, AIs can pose threats to people’s employment, without these AIs gaining consciousness. They can simulate emotions without having real internal emotions. They can create artistic masterpieces, using techniques such as GANs (Generative Adversarial Networks), without having any real psychological appreciation of the beauty of these works of art.

For these reasons, I’ve generally urged people to set aside the question of machine consciousness, and to focus instead on the question of machine intelligence. (For example, I presented that argument in Chapter 9 of my book Sustainable Superabundance.) The latter is tangible and poses increasing threats (and opportunities), whereas the former is a discussion that never seems to get off the ground.

But, as I mentioned at the start, I’ve changed my mind. I now think it’s possible we could have machines with synthetic consciousness well before we have machines with general intelligence.

What’s changed my mind is the book by Professor Mark Solms, The Hidden Spring: A Journey to the Source of Consciousness.

Solms is director of neuropsychology in the Neuroscience Institute of the University of Cape Town, honorary lecturer in neurosurgery at the Royal London Hospital School of Medicine, and an honorary fellow of the American College of Psychiatrists. He has spent his entire career investigating the mysteries of consciousness. He achieved renown within his profession for identifying the brain mechanisms of dreaming and for bringing psychoanalytic insights into modern neuroscience. And now his book The Hidden Spring is bringing him renown far beyond his profession. Here’s a selection of the praise it has received:

  • A remarkably bold fusion of ideas from psychoanalysis, psychology, and the frontiers of theoretical neuroscience, that takes aim at the biggest question there is. Solms will challenge your most basic beliefs.
    Matthew Cobb, author of The Idea of the Brain: The Past and Future of Neuroscience
  • At last the emperor has found some clothes! For decades, consciousness has been perceived as an epiphenomenon, little more than an illusion that can’t really make things happen. Solms takes a thrilling new approach to the problem, grounded in modern neurobiology but finding meaning in older ideas going back to Freud. This is an exciting book.
    Nick Lane, author of The Vital Question
  • To say this work is encyclopaedic is to diminish its poetic, psychological and theoretical achievement. This is required reading.
    Susie Orbach, author of In Therapy
  • Intriguing…There is plenty to provoke and fascinate along the way.
    Anil Seth, Times Higher Education
  • Solms’s efforts… have been truly pioneering. This unification is clearly the direction for the future.
    Eric Kandel, Nobel laureate for Physiology and Medicine
  • This treatment of consciousness and artificial sentience should be taken very seriously.
    Karl Friston, scientific director, Wellcome Trust Centre for Neuroimaging
  • Solms’s vital work has never ignored the lived, felt experience of human beings. His ideas look a lot like the future to me.
    Siri Hustvedt, author of The Blazing World
  • Nobody bewitched by these mysteries [of consciousness] can afford to ignore the solution proposed by Mark Solms… Fascinating, wide-ranging and heartfelt.
    Oliver Burkeman, Guardian
  • This is truly a remarkable book. It changes everything.
    Brian Eno

At times, I had to concentrate hard while listening to this book, rewinding the playback multiple times. That’s because the ideas kept sparking new lines of thought in my mind, which ran off in different directions as the narration continued. And although Solms explains his ideas in an engaging manner, I wanted to think through the deeper connections with the various fields that form part of the discussion – including psychoanalysis (Freud features heavily), thermodynamics (Helmholtz, Gibbs, and Friston), evolution, animal instincts, dreams, Bayesian statistics, perceptual illusions, and the philosophy of science.

Alongside the theoretical sections, the book contains plenty of case studies – from Solms’ own patients, and from other clinicians over the decades (actually centuries) – that illuminate the points being made. These studies involve people – or animals – with damage to parts of their brains. The unusual ways in which these subjects behave – and the unusual ways in which they express themselves – provide insight on how consciousness operates. Particularly remarkable are the children born with hydranencephaly – that is, without a cerebral cortex – but who nevertheless appear to experience feelings.

Having spent two weeks making my way through the first three quarters of the book, I took the time yesterday (Sunday) to listen to the final quarter, where several climaxes followed one after another – addressing at length the “Hard Problem” ideas of David Chalmers, and the possibility of artificial consciousness.

It’s challenging to summarise such a rich set of ideas in just a few paragraphs, but here are some components:

  • To understand consciousness, the subcortical brain stem (an ancient part of our anatomy) is at least as important as the cognitive architecture of the cortex
  • To understand consciousness, we need to pay attention to feelings as much as to memories and thought processing
  • Likewise, the chemistry of long-range neuromodulators is at least as important as the chemistry of short-range neurotransmitters
  • Consciousness arises from particular kinds of homeostatic systems which are separated from their environment by a partially permeable boundary: a structure known as a “Markov blanket”
  • These systems need to take actions to preserve their own existence, including creating an internal model of their external environment, monitoring differences between incoming sensory signals and what their model predicted these signals would be, and making adjustments so as to prevent these differences from escalating
  • Whereas a great deal of internal processing and decision-making can happen automatically, without conscious thought, some challenges transcend previous programming, and demand greater attention.

In short, consciousness arises from particular forms of information processing. (Solms provides good reasons to reject the idea that there is a basic consciousness latent in all information, or, indeed, in all matter.) Whilst more work remains to be done to pin down the exact circumstances in which consciousness arises, this project is looking much more promising now than it did just a few years ago.

This is no idle metaphysics. The ideas can in principle be tested by creating artificial systems that involve particular kinds of Markov blankets, uncertain environments that pose existential threats to the system, diverse categorical needs (akin to the multiple different needs of biologically conscious organisms), and layered feedback loops. Solms sets out a three-stage process whereby such systems could be built and evolved, in a relatively short number of years.
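To make the flavour of these ideas slightly more concrete, here is a deliberately minimal sketch in Python – my own toy illustration, not code taken from Solms’ book or from Friston’s formal free-energy framework – of the kind of self-preserving loop described above. The agent holds an internal prediction of what it will sense across its boundary, revises that prediction when sensory evidence disagrees with it, and acts on its environment to keep the sensed value close to the state it needs to maintain. The class name, setpoint, and learning rate are all invented for the purposes of illustration.

```python
import random

# A toy "homeostatic agent": it predicts its own sensory input, updates that
# prediction when evidence disagrees (perception), and acts on the world to
# keep the sensed variable near a viability setpoint (action). This is only
# a sketch of the general idea, not an implementation of Solms' proposal.

class HomeostaticAgent:
    def __init__(self, setpoint: float = 37.0, learning_rate: float = 0.1):
        self.setpoint = setpoint        # state the agent must maintain to persist (e.g. temperature)
        self.belief = setpoint          # internal model: what the agent expects to sense next
        self.learning_rate = learning_rate

    def step(self, sensed: float) -> float:
        error = sensed - self.belief                  # prediction error ("surprise")
        self.belief += self.learning_rate * error     # perception: revise the internal model
        return -0.5 * (sensed - self.setpoint)        # action: push the world back toward the setpoint

# Toy environment: the agent's "body temperature" starts too high, drifts with
# noise, and is nudged by the agent's action at each timestep.
temperature = 39.0
agent = HomeostaticAgent()
for _ in range(20):
    reading = temperature + random.gauss(0, 0.1)      # noisy signal crossing the boundary
    temperature += agent.step(reading) + random.gauss(0, 0.05)

print(f"final temperature: {temperature:.2f}")        # settles close to the 37.0 setpoint
```

A genuine attempt of the kind Solms outlines would of course involve multiple competing categorical needs, a genuinely uncertain environment, and layered feedback loops, rather than a single thermostat-like variable – but even this toy loop illustrates how “staying alive” can be cast as keeping prediction errors about one’s own vital states within bounds.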

But wait. All kinds of questions arise. Perhaps the most pressing one is this: If such systems can be built, should we build them?

That “should we” question gets a lot of attention in the closing sections of the book. We might end up with AIs that are conscious slaves, in ways that we don’t have to worry about for our existing AIs. We might create AIs that feel pain beyond anything that any previous conscious being has ever experienced. Equally, we might create AIs that behave very differently from those without consciousness – AIs that are more unpredictable, more adaptable, more resourceful, more creative – and more dangerous.

Solms is doubtful about any global moratorium on such experiments. Now that the ideas are out of the bag, so to speak, there will be many people – in both academia and industry – who are motivated to do additional research in this field.

What next? That’s a question that I’ll be exploring this Saturday, 6th March, when Mark Solms will be speaking to London Futurists. The title of his presentation will be “Towards an artificial consciousness”.

For more details of what I expect will be a fascinating conversation – and to register to take part in the live question and answer portion of the event – follow the links here.

29 December 2020

The best book on the science of aging in the last ten years

Filed under: aging, books, rejuveneering, science, The Abolition of Aging — David Wood @ 10:44 am

Science points to many possibilities for aging to be reversed. Within a few decades, medical therapies based on these possibilities could become widespread and affordable, allowing all of us, if we wish, to remain in a youthful state for much longer than is currently the norm – perhaps even indefinitely. Instead of healthcare systems continuing to consume huge financial resources in order to treat people with the extended chronic diseases that become increasingly common as patients’ bodies age, much smaller expenditure would keep all of us much healthier for the vast majority of the time.

Nevertheless, far too many people fail to take these possibilities seriously. They believe that aging is basically inevitable, and that people who say otherwise are deluded and/or irresponsible.

Public opinion matters. Investments made by governments and by businesses alike are heavily influenced by perceived public reaction. Without active public support for smart investments in support of the science and medicine that could systematically reverse aging, that outcome will be pushed backwards in time – perhaps even indefinitely.

What can change this public opinion? An important part of the answer is to take the time to explain the science of aging in an accessible, engaging way – including the many recent experimental breakthroughs that, collectively, show such promise.

That’s exactly what Dr Andrew Steele accomplishes in his excellent book Ageless: The new science of getting older without getting old.

The audio version of this book became available on Christmas Eve, narrated by Andrew himself. It has been a delight to listen to it over the intervening days.

Over the last few years, I’ve learned a great deal from a number of books that address the science of aging, and I’ve been happy to recommend these books to wider audiences. These include:

But I hope that these esteemed authors won’t mind if I nominate Andrew Steele’s book as a better starting point into the whole subject. Here’s what’s special about it:

  • It provides a systematic treatment of the science, showing clear relationships between the many different angles to what is undeniably a complex subject
  • The way it explains the science seems just right for the general reader with a good basic education – neither over-simplified nor over-dense
  • There’s good material all the way through the book, to keep readers turning the pages
  • The author is clearly passionate about his research, seeing it as important, but he avoids any in-your-face evangelism
  • The book avoids excessive claims or hyperbole: the claims it makes are, in my view, always well based
  • Where research results have been disappointing, there’s no attempt to hide these or gloss over them
  • The book includes many interesting anecdotes, but the point of these stories is always the science, rather than the personalities or psychologies of the researchers involved, or clashing business interests, or whatever
  • The information it contains is right up to date, as of late 2020.

Compared with other treatments of the subject, Ageless offers a slightly different decomposition of what are known as the hallmarks of aging, listing ten in total:

  1. DNA damage and mutations
  2. Trimmed telomeres
  3. Protein problems: autophagy, amyloids and adducts
  4. Epigenetic alterations
  5. Accumulation of senescent cells
  6. Malfunctioning mitochondria
  7. Signal failure
  8. Changes in the microbiome
  9. Cellular exhaustion
  10. Malfunction of the immune system

As the book points out, there are three criteria for something to be a useful “hallmark of aging”:

  1. It needs to increase with age
  2. Accelerating a hallmark’s progress should accelerate aging
  3. Reducing the hallmark should decrease aging

The core of the book is a fascinating survey of interventions that could reduce each of these hallmarks and thereby decrease aging – that is, decrease the probability of dying in the next year. These interventions are grouped into four categories:

  1. Remove
  2. Replace
  3. Repair
  4. Reprogram

Each category of intervention is in turn split into several subgroups. Yes, the treatment of aging is likely to be complicated. However, there are plenty of examples in which single interventions turned out to have multiple positive effects on different hallmarks of aging.
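
Picking up the definition above (decreasing aging means decreasing the probability of dying in the next year), here is a small Python sketch of the scale involved. The numbers are illustrative round figures, not quotations from the book: an annual mortality risk of roughly 1 in 1,000 at age 30, doubling approximately every eight years in the familiar Gompertz pattern.

```python
# Illustrative Gompertz-style mortality curve: the probability of dying within
# the next year roughly doubles every eight years of adult age. The baseline
# figure (about 1 in 1,000 at age 30) is an assumed round number for this sketch.

BASELINE_AGE = 30
BASELINE_ANNUAL_MORTALITY = 0.001   # assumed: ~1 in 1,000 at age 30
DOUBLING_TIME_YEARS = 8             # approximate Gompertz doubling time

def annual_mortality(age: float) -> float:
    """Approximate probability of dying in the next year at a given adult age."""
    return BASELINE_ANNUAL_MORTALITY * 2 ** ((age - BASELINE_AGE) / DOUBLING_TIME_YEARS)

if __name__ == "__main__":
    for age in (30, 50, 70, 90):
        print(f"age {age}: ~{annual_mortality(age):.1%} chance of dying within a year")
    # 'Decreasing aging', in this picture, means flattening the curve, so that
    # the risk faced at 90 looks more like the risk faced at 30.
```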

There are a couple of points where some readers might quibble with the content, for example regarding dietary supplements, or whether the concept of group selection can ever be useful within evolutionary theory.

However, my own presentations on the subject of the abolition of aging will almost certainly evolve in the light of the framework and examples in Ageless. I’m much the wiser from reading it.

Here’s my advice to anyone who, like me, believes the subject of reversing aging is important, and who wishes to accelerate progress in this field:

  • Read Ageless with some care, all the way through
  • Digest its contents and explore the implications, for example via discussion in online groups
  • Encourage others to read it too.

Ideally, a sizeable proportion of the book’s readers will alter their own research or other activity, in order to assist the projects covered in Ageless.

Finally, a brief comparison between Ageless and the remarkable grandfather book of this whole field: Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime, authored by Aubrey de Grey and Michael Rae. Ending Aging was published in 2007 and remains highly relevant, even though numerous experimental findings and new ideas have emerged since its publication. There’s a deep overlap in the basic approach advocated in the two books. Both books are written by polymaths who are evidently very bright – people who, incidentally, did their first research in fields outside biology, and who brought valuable external perspectives to the field.

So I see Ageless as a worthy successor to Ending Aging. Indeed, it’s probably a better starting point for people less familiar with this field, in view of its coverage of important developments since 2007, and some readers may find Andrew’s writing style more accessible.
