Strawmen. Non-sequiturs. Sleight of hand. Face-palm shockers. This book is full of them.
As such, it encourages a disastrously complacent attitude toward the risks posed by forthcoming new AI systems.
The number of times I shouted at the narrator (aloud, or just in my head), appalled as yet another devious distortion reached my ears, far exceeds anything I remember from any previous book.
Ordinarily, I would have set the book aside, long before finishing it, in order to make more productive use of my limited time. But in this case, I was aware that many other readers have seemingly been taken in by all the chicanery in this book: witness its average Goodreads rating of 4.29 stars out of 5, from 466 ratings, at the time I am writing this blogpost. And from sampling some of the reviews, it’s clear that the book satisfies a psychological hunger present in many of its readers – a hunger to be scornful of some of the world’s wealthiest people.
What makes the book particularly dangerous is the way that it weaves its horrendous falsehoods into a narrative with plenty of juicy content. That’s how it lures readers into accepting its most egregious conclusions. Readers get swept along on a kind of feel-good public condemnation of larger-than-life villains. Since these villains tell people that AI is going to become more and more capable, that idea gets walloped too. Let’s hold these villains in contempt – and likewise hold in contempt their self-aggrandising ideas about AI superintelligence. Yah boo!
Thus, the book highlights the shortcomings of some of the world’s most famous entrepreneurs and technology leaders; more than that, it builds a narrative that, if these people (among them, Marc Andreessen, Jeff Bezos, Elon Musk, and Sam Altman) continue to acquire more power, it will likely have very bad consequences for the bulk of humanity. That’s because:
- These apparent titans over-estimate their own abilities, especially outside of their original domains of expertise
- They have deeply naïve expectations about how easy it will be for humanity to set up self-supporting colonies on other planets
- They are prone to a kind of self-righteous moral certainty which rides roughshod over the concerns of numerous critics.
That part of the narrative is correct. I give it three cheers. But where the book goes wildly wrong is in its associated narrative about not needing to be concerned with the emergence of AI which exceeds the understanding and control of its human designers. The way the book defends its wrong conclusions about AI is by setting up strawmen – weak imitations of the real concerns about AI superintelligence – and then pointing out flaws in these strawmen.
Motivations
I’ll come to these strawmen shortly. But first, I’ll express a bit more sympathy for at least part of what Adam Becker, the author of this book, is trying to do. He explains his motivation in a recent Singularity.FM interview with Nikola Danaylov.
Becker’s previous career followed a path in which I was personally also very interested at a similar stage in my life: a fascination with cosmology and theoretical physics. That evolved into a passion (which, again, I share) for clear communication about the meaning and implications of science. Becker’s first book, What is Real? The Unfinished Quest for the Meaning of Quantum Physics, addresses the same topics on which I focussed for four years in the History and Philosophy of Science department in Cambridge in the mid-1980s. I’ve not read that book (yet), but based on various reviews, I believe I would agree with its main conclusions.
Becker’s first concept for what he should write about in his second book also gets a thumbs up from me: evidence that many tech leaders in Silicon Valley have flawed views about key aspects of science – including flawed views about biology, psychology, and sociology, as well as the physics of space travel.
As Becker explained in his Singularity.FM interview, his ideas evolved further as he tried to write his second book. His scope widened to include analyses of some of the philosophical ideas which influence many of the movers and shakers in Big Tech – ideas such as longtermism, advocated by the Oxford philosopher Will MacAskill. I share with Becker a distaste for some of the conclusions of longtermism, though I’m less convinced that Becker provides adequate rebuttals to the longtermist argumentation. (Throughout the book, when analysing philosophical positions, Becker ladles on the criticism and naysaying, but he offers little as an alternative worldview, beyond very empty generalities.)
But where I absolutely part company with Becker is in my assessment of the idea of a potential forthcoming Technological Singularity, triggered by AI becoming increasingly capable. Becker roundly and freely condemns that idea as “unscientific”, “specious”, “imaginary”, and “hypothetical”.
Strawmen
Becker’s basic narrative is this: AI superintelligence will require a complete understanding of the human brain and a complete copying of what’s happening in the brain, down to a minute level. However, we’re still a long way from obtaining that understanding. Indeed, there are now reasons to believe that significant computation is taking place inside individual neurons (beyond a simple binary summation), and that various other types of brain cell also contribute to human intelligence. Moreover, little progress has been made in recent years with brain scanning.
Now, this view of “understand the human brain first and copy that precisely” might have been the view of some AI researchers in the past, but since the revolutions of Deep Neural Networks (2012+) and Transformers (2017+), it’s clear that humanity could create AI with very dangerous capabilities without either of these preconditions. It’s more accurate to say that these AIs are being grown rather than being built. They are acquiring their capabilities via emergence rather than via detailed specification. To that extent, the book is stuck in the past.
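To make the “grown rather than built” point concrete, here is a minimal sketch – entirely my own illustration, in plain numpy, nothing like a production AI system. Nobody writes the XOR rule into the code below; the tiny network acquires that behaviour from data, via repeated gradient-descent updates:

```python
# A minimal illustration of "grown, not built" (my sketch, not any production system).
# Nobody codes the XOR rule below; the network acquires it from data.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

# A tiny 2-8-1 network, starting from random weights
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass (squared-error loss), then a gradient-descent step:
    # this update, repeated, is the "growing"
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);  b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]; the rule was never coded
```

If even this toy’s behaviour emerges from training rather than from explicit design, it should be no surprise that systems with billions of parameters develop capabilities that nobody specified in advance.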
These new AIs may or may not have all the same thinking processes that take place inside the human brain. They may or may not have aspects of what we call consciousness. That’s beside the point. What matters is whether the AI gains capabilities in observing, predicting, planning interventions, and learning from the results of its predictions and interventions. It is these capabilities that give AI its increasing power to intervene in the world.
This undermines another of the strawmen in Becker’s extensive collection – his claim that ideas of AI superintelligence wrongly presuppose that all intelligence can be reduced to a single parameter, ‘g’, standing for general intelligence. On the contrary, what matters is whether AI will operate outside of human understanding and human control. That’s already beginning to happen. Yet Becker prefers to reassure his readers with some puffed-up philosophising. (I lost track of the number of times he approvingly quoted cognitive scientists who seemingly reassured him that intelligence was too complicated a subject for there to be any worry about AI causing a real-world catastrophe.)
It’s like a prehistoric group of chimpanzees thinking to themselves that, in various ways, their individual capabilities exceed the corresponding capabilities of individual humans. Their equivalent of Adam Becker might say, “See, there’s no unified ‘h’ parameter for all the ways that humans allegedly out-perform chimpanzees. So don’t worry, chaps, we chimpanzees will remain in control of our own destiny, and humans will forever remain as just weird naked apes.”
The next strawman is the assumption that the concern about out-of-control AI depends upon the maintenance of smooth exponential progress curves. Astonishingly, Becker devotes numerous pages to pointing out ways that exponential trends, such as Moore’s Law, slow down or even stop. This leads him to assert that “AI superintelligence is imaginary”. But the real question is whether further progress is possible beyond the point we have already reached. In more detail:
- Can more efficient computational hardware be invented? (Answer: yes, including new types of chips dedicated to new kinds of AI.)
- Can extra data be fed into AI training? (Answer: yes, including cleverly constructed synthetic data.)
- Can new architectures, beyond transformers, be introduced? (Answer: yes, and AI researchers are pursuing numerous possibilities.)
- Can logical reasoning, such as chain-of-thought, be combined in productive new ways with existing neural networks? (Answer: yes, this is happening daily.)
- Is there some fundamental reason why the human brain is the ultimate apex of the skills of predicting, planning interventions, and learning? (Answer: no, unless you are a believer in six-day creationism, or something equivalent.)
So, what matters isn’t the precise shape of the progress curve. What matters is whether that progress can reach a point that enables AIs to improve the process of designing further AIs. That’s the tipping point which will introduce huge new uncertainty.
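To illustrate why that feedback loop matters more than the smoothness of any curve, here is a toy model – entirely my own construction, purely illustrative, and not a forecast by Becker or anyone else. A steady “human-driven” growth rate yields a familiar gentle exponential; adding a term in which capability feeds back into the rate of improvement yields a qualitatively different trajectory:

```python
# A toy model (my construction, purely illustrative - not anyone's forecast):
# compare steady human-driven progress against progress where capability
# feeds back into the rate of further AI improvement.
def trajectory(feedback, steps=60, c=1.0, human_rate=0.03):
    for step in range(steps):
        # human contribution: a steady 3% per step;
        # AI-designs-AI contribution: grows with capability c itself
        c *= 1 + human_rate + feedback * (c / 100)
        if c > 1e6:  # the trajectory has left the human-trend regime
            return f"runaway after {step + 1} steps"
    return f"{c:,.1f} after {steps} steps"

for fb in (0.0, 0.2, 1.0):  # zero, small, and modest feedback strengths
    print(f"feedback={fb}: capability = {trajectory(fb)}")
```

The particular numbers are arbitrary; the point is that once the feedback term is non-zero, the outcome is dominated by the feedback, not by the shape of the original trend line.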
Becker tries to head off arguments that forthcoming new types of hardware, such as quantum computing, might bring AI closer. Quantum computing, as presently understood, isn’t suited to all computational tasks, he points out. But wait: the point is that it can significantly accelerate some computational tasks. (Grover’s algorithm, for example, searches an unstructured collection of N items in roughly √N steps rather than N.) AI can improve through smarter combinations of old-style hardware and new-style hardware. We don’t need to accept Becker’s simplistic one-design-only framing as the end of the argument.
The slowdown in reaching new generations of traditional semiconductor chips does not mean the end of broader gains from improved hardware performance. Instead, AI progress now depends on how huge numbers of individual chips can be networked together. Moreover, the availability of more hardware, at lower cost and more widely distributed, enables richer experimentation with new algorithms and new software architectures, making yet more AI breakthroughs more likely. Any idea that breakthroughs in AI have hit a brick wall is preposterous.
Next, Becker returns repeatedly to the strawman that the kinds of threats posed by AI superintelligence are just hypothetical and are far removed from our previous experience. Surely an AI that is able to self-introspect will be kinder, he argues. However, humans who are more intelligent – including having the ability to analyse their own thought processes – are by no means necessarily kinder. They may be psychopaths. Likewise, advanced AIs may be psychopaths – able to pretend concern for human wellbeing while that tactic suits them, but ready to incapacitate us all when the opportunity arises.
Indeed, the threats posed by ever more powerful AI are relatively straightforward extrapolations of dangers posed by existing AI systems (at the hands of human users who are hateful or naïve or resentful or simply out-of-their-depth). There’s no need to make any huge jump of imagination. That’s an argument I spell out in this Mindplex article.
Yet another strawman in the book is the idea that the danger-from-advanced-AI argument needs to be certain, and that it can be rejected if any uncertainty remains about it. Thus, when Becker finds AI safety advocates who are unwilling to pin down a precise probability for the likelihood of an AI-induced catastrophe, he recasts “uncertain about the chance of doom” as “unconcerned about the chance of doom”. When two different apparent experts offer opposing views on the likelihood of AI-induced doom, he always prefers the sceptic, and rushes to dismiss the other side. (Is he really so arrogant as to think he has a better grasp of the possibilities of AI-induced catastrophe than the international team of experts assembled by Yoshua Bengio? Apparently, yes he is.)
One final outrageous tactic Becker uses to justify disregarding someone’s view is to point out a questionable claim that person has made in another area. Thus, Nick Bostrom has made some shocking statements about the difference in abilities between people of different races. Therefore, all Bostrom’s views about the dangers of AI superintelligence can be set aside. Elon Musk naively imagines it will be relatively easy to terraform Mars to make it suitable for human habitation. Therefore, all Musk’s views about the dangers of AI superintelligence can, again, be set aside. You get the picture.
Constructive engagement
Instead of scorning these concerns, Becker should be engaging constructively with the community of thoughtful people who are (despite adverse headwinds) painstakingly exploring ways to get the best out of AI whilst avoiding the risks of catastrophe. This includes the Singapore Consensus, the Future of Life Institute, the Council of Presidents of the United Nations General Assembly, Control AI, Pause AI, The Millennium Project, AI Safety, the Kira Center, the Machine Intelligence Research Institute, the Center for AI Safety Research, the Centre for the Governance of AI, the Center for Human Compatible AI, the Leverhulme Centre for the Future of Intelligence, my own book “The Singularity Principles”, and much more.
That kind of constructive engagement might not lead to as many juicy personal anecdotes as Becker sprinkles throughout More Everything Forever, but it would provide much better service to humanity.
Conversely, you might ask: aren’t there any lessons for me (and other AI safety activists) in the light of the shortcomings highlighted by Becker in the thoughts and actions of many people who take the idea of the Technological Singularity seriously? Shouldn’t I be grateful to Becker for pointing out various predictions made by Ray Kurzweil which haven’t come to pass, the casual attitudes seemingly displayed by some singularitarians toward present-day risks arising from abuses of existing technology (including the ongoing emissions of greenhouse gases), the blatant links between the 2023 Techno-Optimist Manifesto of Marc Andreessen and the proto-fascist 1909 Futurist Manifesto of Filippo Marinetti, and so on?
My answer: yes, but. Almost nothing in Becker’s book was new for me. I have since 2021 frequently given presentations on the subject of “The Singularity Shadow” (the concept first appeared in my book Vital Foresight) – a set of confusions and wishful thinking which surrounds the subject of the Technological Singularity:
These confusions and wishful thinking form a kind of shadow around the central concept of the Technological Singularity – a shadow which obstructs a clearer perception of the risks and opportunities that are actually the most significant.
The Singularity Shadow misleads many people who should know better. That shadow of confusion helps to explain why various university professors of artificial intelligence, along with people with job titles such as “Head of AI” in large companies, often make statements about the likely capabilities of forthcoming new AI platforms that are, frankly, full of errors or deeply misleading.
I describe that shadow as consisting of seven overlapping areas:
- Singularity timescale determinism
- Singularity outcome determinism
- Singularity hyping
- Singularity risk complacency
- Singularity term overloading
- Singularity anti-regulation fundamentalism
- Singularity preoccupation
To be clear, there is a dual problem with the Singularity Shadow:
- People within the shadow – singularity over-enthusiasts – make pronouncements about the Singularity that are variously overly optimistic, overly precise, or overly vague
- People outside the shadow – singularity over-critics – notice these instances of unwarranted optimism, precision, or vagueness, and jump to the wrong conclusion that the entire field of discussion is infected with the same flaws.
Here’s a video that reviews the seven areas in the Singularity Shadow, and the damage this Shadow causes to thoughtful discussions about both the opportunities and the threats arising from the Singularity:
And if you want to follow the conversation one more step, this video looks more deeply at the reasons why people such as Becker are so insistent that the Singularity is (in Becker’s words) “unscientific”, “specious”, “imaginary”, and “hypothetical”:
That’s the ‘but’ part of my “yes, but” answer. The ‘yes’ part is that, yes, I need to reflect: after so many years of trying to improve the conversation about both the opportunities and the risks of the Singularity, I see that the public discussion is still often dominated by Becker-style distractions and confusions.
Clearly, I need to up my game. We all need to up our game.
AGW and AGI
I’ll finish with one point of consensus: Becker is highly critical, in his book, of people who use their intelligence to deny the risks of anthropogenic global warming (AGW). Becker, like me, sees these risks as deeply concerning. We are both dismayed when evidently clever people come up with deceptive arguments to avoid taking climate change seriously. The real risk here isn’t linear climate change, but rather the climate crossing thresholds known as tipping points, where greater heat triggers dramatic changes in the earth’s ecosystems that result in even greater heat. Sudden temperature changes of this kind can be observed at ancient geological transition points.
It’s the unpredictability of what happens at these tipping points – and the uncertainty over where these tipping points are located – that means humanity should be doubling down, hard, on reversing our greenhouse gas emissions. (The best book I’ve read on this topic recently, by the way, is A Climate of Truth, by Mike Berners-Lee. I unhesitatingly recommend it.)
Yet despite these risks, AGW deniers argue as follows: there is plenty of uncertainty. There are lots of different ways of measuring temperature. There are lots of different forecasts. They don’t all agree. That means we have plenty of time to work out solutions. In the meantime, inaction is fine. (Face palm!)
I’ve spelt this out, because Becker is equally guilty. He’s not an AGW denier, but an AGI denier – denying that we need to pay any serious attention to the risks of Artificial General Intelligence. There is plenty of uncertainty about AGI, he argues. Disagreement about the best way to build it. No uniform definition of ‘g’, general intelligence. No agreement on future scenarios. Therefore, we have plenty of time to work out how to deal with any hypothetical future AGI. (Face palm again!)
Actually, this is not just a matter of a face palm. It’s a matter of the utmost seriousness. The unpredictability makes things worse, not better. Becker has allowed his intelligence to be subverted to obscure one of the biggest risks facing humanity. And because he evidently has an audience that is psychologically predisposed to lap up his criticism of Silicon Valley leaders, the confusion he peddles is likely to spread significantly.
It’s all the more reason to engage sincerely and constructively with the wider community who are working to ensure that advanced AI turns out beneficial (a “BGI”) instead of catastrophic (a “CGI”).
