Here’s the latest in my thinking about how humanity can most reliably obtain wonderful benefits from advanced AI – a situation I describe as sustainable superabundance for all – rather than the horrific outcomes of a negative technological singularity – a situation I describe as Catastrophic General Intelligence (CGI).
These thoughts have sharpened in my mind following conversations at the recent SingularityNET BGI 2025 summit in Istanbul, Türkiye.
My conclusion is that, in order to increase the likelihood of the profoundly positive fork on the road ahead, it is necessary but not sufficient to highlight the real and credible dangers of the truly awful negative fork on that same road.
Yes, it is essential to highlight how a very plausible extension of our current reckless trajectory, past accelerating tipping points, will plunge humanity into a situation that is wildly unstable, dangerously opaque, and impossible to rein back. Clarifying these seismic risks is necessary, not to induce a state of panic (which would be counterproductive) or doom (which would be psychologically enfeebling), but to cause minds to focus with great seriousness. Without a sufficient sense of urgency, any actions taken will be inadequate: “too little, too late”.
However, unless that stark warning is accompanied by an uplifting positive message, the likely result is misery, avoidance, distraction, self-deception, and disinformation.
If the only message heard is “pause” or “sacrifice”, our brains are likely to rebel.
If people already appreciate that advanced AI has the potential to solve aging, climate change, and more, that prospect is not one they will give up easily.
If such people see no credible alternative to the AI systems currently being produced by big tech companies (notwithstanding the opaque and inexplicable nature of these systems), they are likely to object to efforts to alter that trajectory, complaining that “Any attempt to steer the development of advanced AI risks people dying from aging!”
The way out of this impasse is to establish that new forms of advanced AI can be prioritised, which lack dangerous features such as autonomy, volition, and inscrutability – new forms of AI that will still be able to deliver, quickly, the kinds of solution (including all-round rejuvenation) that people wish to obtain from AGI.
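One way to make that lack of autonomy and volition concrete is at the level of system architecture: the AI component only ever proposes, and every consequential action has to pass through an explicit human approval gate that leaves an auditable trail. Here is a deliberately minimal Python sketch of that pattern; the names (Proposal, propose_answer, ApprovalGate) are hypothetical stand-ins for illustration, not any real product or API.

```python
# Illustrative sketch only: a "tool-mode" interaction pattern in which the
# model proposes and a human-controlled gate decides. All names are invented.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Proposal:
    question: str
    suggested_action: str
    reasoning: str  # kept visible, so the proposal can be audited

def propose_answer(question: str) -> Proposal:
    # Stand-in for any query-answering model: it returns a proposal object
    # and has no ability to execute anything itself.
    return Proposal(
        question=question,
        suggested_action="draft a literature summary on cellular senescence",
        reasoning="a summary is reversible, low-impact, and easy to check",
    )

class ApprovalGate:
    """Every side effect passes through an explicit human decision."""
    def __init__(self, approve: Callable[[Proposal], bool]):
        self.approve = approve
        self.audit_log = []  # transparent, monitorable record of decisions

    def run(self, proposal: Proposal) -> str:
        ok = self.approve(proposal)
        self.audit_log.append((proposal, ok))
        return "approved for execution by humans or vetted tools" if ok else "declined"

# Usage: the human-supplied policy, not the model, decides what happens next.
gate = ApprovalGate(approve=lambda p: "irreversible" not in p.suggested_action)
print(gate.run(propose_answer("How might we slow cellular aging?")))
```

The essential property is structural rather than behavioural: nowhere in this design is there a loop in which the system sets its own goals or carries out actions unobserved.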
Examples of these new forms of advanced AI include “Scientist AI” (to use a term favoured by Yoshua Bengio) and “Tool AI” (the term favoured by Anthony Aguirre). These new forms potentially also include AI delivered on the ASI:Chain being created by F1r3fly and SingularityNET (as featured in talks at BGI 2025), and AI using neural networks trained by predictive coding (as described by Faezeh Habibi at that same summit).
These new forms of AI have architectures designed for transparency, controllability, and epistemic humility, rather than self-optimising autonomy.
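To give a flavour of what training without end-to-end backpropagation can look like, here is a tiny predictive-coding sketch in NumPy, following one common formulation in which each layer predicts the activity of the layer above it and all learning is driven by purely local prediction errors. It is an illustration of the general technique only, not the specific approach presented at BGI 2025, and every layer size, learning rate, and toy target in it is invented for the example.

```python
# Toy predictive-coding network: local error minimisation instead of backprop.
import numpy as np

rng = np.random.default_rng(0)

def f(x):              # activation function
    return np.tanh(x)

def df(x):             # derivative of the activation
    return 1.0 - np.tanh(x) ** 2

# Layer sizes for a small regression toy: input -> one hidden layer -> output
sizes = [4, 8, 2]
W = [rng.normal(0, 0.3, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]

def train_step(x_in, y_target, lr_x=0.1, lr_w=0.01, relax_steps=20):
    # Initialise value nodes with a forward pass, then clamp input and output.
    x = [x_in]
    for l in range(len(W) - 1):
        x.append(W[l] @ f(x[-1]))
    x.append(y_target)

    for _ in range(relax_steps):
        # Prediction errors: each layer is predicted from the layer below it.
        eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
        # Relax the hidden value nodes to reduce the total prediction error.
        for l in range(1, len(sizes) - 1):
            dx = -eps[l - 1] + df(x[l]) * (W[l].T @ eps[l])
            x[l] = x[l] + lr_x * dx

    # Local, Hebbian-like weight updates: the error at a layer times the
    # (transformed) activity of the layer below it. No global backward pass.
    eps = [x[l + 1] - W[l] @ f(x[l]) for l in range(len(W))]
    for l in range(len(W)):
        W[l] += lr_w * np.outer(eps[l], f(x[l]))
    return sum(float(e @ e) for e in eps)

# Train on a made-up target mapping, then predict with a plain forward pass.
for _ in range(2000):
    x_in = rng.normal(size=4)
    y = np.array([x_in[0] - x_in[1], 0.5 * x_in[2]])
    train_step(x_in, y)

x_test = rng.normal(size=4)
prediction = W[1] @ f(W[0] @ f(x_test))
print(prediction)  # compare with the true target for this input
print(np.array([x_test[0] - x_test[1], 0.5 * x_test[2]]))
```

The local character of the updates, where each weight change depends only on quantities available at that layer, is one reason such schemes are discussed as candidates for more transparent and monitorable training.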
It’s when the remarkable potential of these new, safer forms of AI becomes clearer that more people can be expected to snap out of their head-in-the-sand opposition to steering and controlling AGI development.
Once I returned home from Istanbul, I wrote up my reflections on what I called “five of the best” talks at BGI 2025. These reflections ended with a rather audacious analogy, which I repeat here:
The challenge facing us regarding runaway development of AI beyond our understanding and beyond our control can be compared to a major controversy within the field of preventing runaway climate change. The argument in that field runs as follows:
- Existing patterns of energy use, which rely heavily on fuels that emit greenhouse gases, risk the climate reaching dangerous tipping points and transitioning beyond a “climate singularity” into an utterly unpredictable, chaotic, cataclysmically dangerous situation
- However, most consumers of energy prefer dirty sources to clean (“green”) sources, because the former have lower cost and appear to be more reliable (in the short term at least)
- Accordingly, without an autocratic world government (“yuk!”), there is almost no possibility of people switching away in sufficient numbers from dirty energy to clean energy
- Some observers might therefore be tempted to hope that theories of accelerating climate change are mistaken, and that there is no dangerous “climate singularity” in the near future
- In turn, that drives people to look for faults in parts of the climate change argumentation – cherry picking various potential anomalies in order to salve their conscience
- BUT this miserable flow of thought can be disrupted once it is seen how clean energy can be lower cost than dirty energy
- From this new perspective, there will be no need to plead with energy users to make sacrifices for the larger good; instead, these users will happily transition to abundant cleaner energy sources, for their short-term economic benefit as well as the longer-term environmental benefits.
You can likely see how a similar argument applies to the safer development of trustworthy, beneficial advanced AI:
- Existing AGI development processes, which rely heavily on poorly understood neural networks trained by backpropagation, risk AI development reaching dangerous tipping points (when AIs repeatedly self-improve) and transitioning beyond a “technological singularity” into an utterly unpredictable, chaotic, cataclysmically dangerous situation
- However, most AI developers prefer opaque AI creation processes to transparent, explainable ones, because the former appear to produce more exciting results (in the short term at least)
- Accordingly, without an autocratic world government (“yuk!”), there is almost no possibility of developers switching away from their current reckless “suicide race” to build AGI first
- Some observers might therefore be tempted to hope that theories of AGI being “Unexplainable, Unpredictable, Uncontrollable” (as advanced for example by Roman Yampolskiy) are mistaken, and that there is no dangerous “technological singularity” in the future
- In turn, that drives people to look for faults in the work of Yampolskiy, Yoshua Bengio, Eliezer Yudkowsky, and others, cherry picking various potential anomalies in order to salve their conscience
- BUT this miserable flow of thought can be disrupted once it is seen how alternative forms of advanced AI can deliver the anticipated benefits of AGI without the terrible risks of currently dominant development methods
- From this new perspective, there will be no need to plead with AGI developers to pause their research for the greater good; instead, these developers will happily transition to safer forms of AI development.
To be clear, this analogy makes things appear somewhat simpler than they are. In both cases, the complication is that formidable inertial forces will need to be overcome – deeply entrenched power structures that, for various pathological reasons, are hell-bent on preserving the status quo.
For that reason, the battle for truly beneficial advanced AI is going to require great fortitude as well as great skill – skill not only in technological architectures but also in human social and political dynamics.
And also to be clear, it’s a tough challenge to identify and describe the dividing line between safe advanced AI and dangerous advanced AI (AI with its own volition, autonomy, and desire to preserve itself – as well as AI that is inscrutable and unmonitorable). Indeed, transparency and non-autonomy are not silver bullets. But that’s a challenge which it is vital for us to accept and make progress on.
Footnote: I offer additional practical advice on anticipating and managing cataclysmically disruptive technologies in my book The Singularity Principles.
