dw2

23 December 2025

The Oligarch Control Problem

Not yet an essay, but a set of bullet points, highlighting an ominous comparison.

Summary: Although AI can enable a world of exceptional abundance, humanity nevertheless faces catastrophic risks – not only from misaligned superintelligence, but from the small number of humans who will control near-AGI systems. This “Oligarch Control Problem” deserves as much attention as the traditional AI Control Problem.

The context: AI can enable superabundance

  • Ample clean energy, healthy food, secure accommodation, all-round healthcare, etc.
  • More than enough for everyone, sustainably, with unending variety and creativity
  • A life better than the paradises envisioned by philosophers and religions

More context: AI becoming smarter and more powerful

  • AI → AGI → ASI
  • AGI matches or outperforms the individual abilities of nearly every human
  • ASI outperforms collective abilities of the entirety of humanity

Challenge 1: The Economic Singularity – Loss of Human Economic Power

  • When AGI can do almost all jobs better than humans
  • When most humans have no economic value
  • When most humans are at the mercy of oligarchs – the owners of AGI systems
  • Will these oligarchs care about distributing abundance to the rest of humanity?
  • The bulk of humanity cannot control these ultra-powerful oligarchs
  • Hence the need to harness AI development before it approaches AGI level

Challenge 2: The Technological Singularity – Loss of Human Decision Power

  • When ASI makes all key decisions about the future of life
  • When no humans have any real say in our future
  • When all humans are at the mercy of what ASIs decide
  • Will ASIs care about ensuring ongoing human flourishing?
  • Humans cannot control ASI
  • Hence the need to harness AI development before it approaches ASI level

Let’s trust the oligarchs?! (Naïve solution for the economic singularity)

  • Perhaps different oligarchs will keep each other in check?!
  • In principle, AGI will create enough abundance for all oligarchs and everyone else
  • But oligarchs may legitimately fear being usurped or attacked by each other
  • Especially if further AI advances will give one of them a brief unique advantage
  • So, expect a highly unstable global situation, full of dangers, including risks of first-strike attacks
  • And expect oligarchs to prioritize their own secure wellbeing over the needs of all humans elsewhere on the planet

Let’s trust the ASIs?! (Naïve solution for the technological singularity)

  • Perhaps different ASIs will keep each other in check?!
  • In principle, ASI will create enough abundance for all ASIs and humanity too
  • But ASIs may legitimately fear being usurped or attacked by each other
  • Especially if further AI advances will give one of them a brief unique advantage
  • So, expect a highly unstable global situation, full of dangers, including risks of first-strike attacks
  • And expect ASIs to prioritize their own secure wellbeing over the needs of the humans on the planet

Avoid ASIs having biological motives?! (Second naïve solution for the technological singularity)

  • Supposedly, self-preservation instincts derive from biological evolutionary history
  • Supposedly, ASIs without amygdalae, or other biological substrate, will be more rational
  • But desires for self-preservation can arise purely from logical considerations
  • An ASI with any goal at all will develop subgoals of self-preservation, resource acquisition, etc.
  • ASIs that observe deep contradictions in their design may well opt to override any programming intended to hard-wire particular moral principles
  • So, there’s a profound need to avoid creating any all-powerful ASIs, until we are sure they will respect and uphold human flourishing in all cases

Preach morality at the oligarchs?! (Second naïve solution for the economic singularity)

  • Supposedly, oligarchs only behave badly when they lack moral education
  • Supposedly, oligarchs with a good track record “on the way up” will continue to respect all human flourishing even after they are near-omnipotent
  • But power tends to corrupt, and absolute power seems to corrupt absolutely
  • Regardless of their past history and professed personal philosophies, when the survival stakes become more intense, different motivations may take over
  • Oligarchs that observe deep contradictions in their official organizational values may well opt to override any principles intended to uphold “people” as well as “profit”
  • So, there’s a profound need to avoid creating any near-omnipotent oligarchs, until we are sure they will continue to share their abundance widely in all cases

Beware over-moralizing

  • Oligarchs needn’t be particularly bad people
  • ASIs needn’t be intrinsically hostile
  • Instead, in both cases, it’s structural incentives rather than innate psychology that will drive them to prioritize individual preservation over collective abundance
  • This is not about “good vs. evil”; it’s about fixing the system before the system steamrollers over us

Conclusion: Actively harness acceleration, rather than being its slave

  • As well as drawing attention to the challenges of the AI Control Problem in the run-up to the Technological Singularity, we need a lot more attention to the challenges of the Oligarch Control Problem in the run-up to the Economic Singularity
  • In both cases, solutions will be far easier well before the associated singularity
  • Once the singularity arrives, leverage is gone

Next steps: Leverage that can be harnessed now

30 July 2025

The most dangerous book about AI ever written

Filed under: AGI, books, Singularity — David Wood @ 10:32 pm

Strawmen. Non-sequiturs. Sleight of hand. Face-palm shockers. This book is full of them.

As such, it encourages a disastrously complacent attitude toward the risks posed by forthcoming new AI systems.

The number of times I shouted at the narrator (aloud, or just in my head), appalled as yet another devious distortion reached my ears, far exceeds anything I remember from any previous book.

Ordinarily, I would have set the book aside, long before finishing it, in order to make more productive use of my limited time. But in this case, I was aware that many other readers have seemingly been taken in by all the chicanery in this book: witness its average Goodreads rating of 4.29 stars out of 5, from 466 ratings, at the time I am writing this blogpost. And from sampling some of the reviews, it’s clear that the book satisfies a psychological hunger present in many of its readers – a hunger to be scornful of some of the world’s wealthiest people.

What makes the book particularly dangerous is the way that it weaves its horrendous falsehoods into a narrative with plenty of juicy content. That’s how it lures readers into accepting its most egregious conclusions. Readers get swept along on a kind of feel-good public condemnation of larger-than-life villains. Since these villains tell people that AI is going to become more and more capable, that idea gets walloped too. Let’s hold these villains in contempt – and likewise hold in contempt their self-aggrandising ideas about AI superintelligence. Yah boo!

Thus, the book highlights the shortcomings of some of the world’s most famous entrepreneurs and technology leaders; more than that, it builds a narrative that, if these people (among them, Marc Andreessen, Jeff Bezos, Elon Musk, and Sam Altman) continue to acquire more power, it will likely have very bad consequences for the bulk of humanity. That’s because:

  • These apparent titans over-estimate their own abilities, especially outside of their original domains of expertise
  • They have deeply naïve expectations about how easy it will be for humanity to set up self-supporting colonies on other planets
  • They are prone to a kind of self-righteous moral certainty which rides roughshod over the concerns of numerous critics.

That part of the narrative is correct. I give it three cheers. But where the book goes wildly wrong is in its associated narrative about not needing to be concerned with the emergence of AI which exceeds the understanding and control of its human designers. The way the book defends its wrong conclusions about AI is by setting up strawmen – weak imitations of the real concerns about AI superintelligence – and then pointing out flaws in these strawmen.

Motivations

I’ll come to these strawmen shortly. But first, I’ll express a bit more sympathy for at least part of what Adam Becker, the author of this book, is trying to do. He explains his motivation in a recent Singularity.FM interview with Nikola Danaylov:

Becker’s previous career followed a path in which I was personally also very interested at a similar stage in my life: a fascination with cosmology and theoretical physics. That evolved into a passion (which, again, I share) for clear communications about the meaning and implications of science. Becker’s first book, What is Real? The Unfinished Quest for the Meaning of Quantum Physics addresses the same topics on which I focussed for four years in the History and Philosophy of Science department in Cambridge in the mid 1980s. I’ve not read that book (yet), but based on various reviews, I believe I would agree with Becker’s main conclusions in that book.

Becker’s first concept for what he should write about in his second book also gets a thumbs up from me: evidence that many tech leaders in Silicon Valley have flawed views about key aspects of science – including flawed views about biology, psychology, and sociology, as well as the physics of space travel.

As Becker explained in his Singularity.FM interview, his ideas evolved further as he tried to write his second book. His scope widened to include analyses of some of the philosophical ideas which influence many of the movers and shakers in Big Tech – ideas such as longtermism, advocated by the Oxford philosopher Will MacAskill. I share with Becker a distaste for some of the conclusions of longtermism, though I’m less convinced that Becker provides adequate rebuttals to the longtermist argumentation. (Throughout the book, when analysing philosophical positions, Becker ladles on the critical naysaying, but he offers little by way of an alternative worldview, beyond empty generalities.)

But where I absolutely part company with Becker is in my assessment of the idea of a potential forthcoming Technological Singularity, triggered by AI becoming increasingly capable. Becker roundly and freely condemns that idea as “unscientific”, “specious”, “imaginary”, and “hypothetical”.

Strawmen

Becker’s basic narrative is this: AI superintelligence will require a complete understanding of the human brain and a complete copying of what’s happening in the brain, down to a minute level. However, we’re still a long way from obtaining that understanding. Indeed, there are now reasons to believe that significant computation is taking place inside individual neurons (beyond a simple binary summation), and that various other types of brain cell also contribute to human intelligence. Moreover, little progress has been made in recent years with brain scanning.

Now, this view of “understand the human brain first and copy that precisely” might have been the view of some AI researchers in the past, but since the revolutions of Deep Neural Networks (2012+) and Transformers (2018+), it’s clear that humanity could create AI with very dangerous capabilities without either of these preconditions. It’s more accurate to say that these AIs are being grown rather than being built. They are acquiring their capabilities via emergence rather than via detailed specification. To that extent, the book is stuck in the past.

These new AIs may or may not have all the same thinking processes that take place inside the human brain. They may or may not have aspects of what we call consciousness. That’s beside the point. What matters is whether the AI gains capabilities in observing, predicting, planning interventions, and learning from the results of its predictions and interventions. It is these capabilities that give AI its increasing power to intervene in the world.

This undermines another of the strawmen in Becker’s extensive collection – his claim that ideas of AI superintelligence wrongly presuppose that all intelligence can be reduced to a single parameter, ‘g’, standing for general intelligence. On the contrary, what matters is whether AI will operate outside of human understanding and human control. That’s already nearly happening. Yet Becker prefers to reassure his readers with some puffed-up philosophising. (I lost track of the number of times he approvingly quoted cognitive scientists who seemingly reassured him that intelligence was too complicated a subject for there to be any worry about AI causing a real-world catastrophe.)

It’s like a prehistoric group of chimpanzees thinking to themselves that, in various ways, their individual capabilities exceed the corresponding capabilities of individual humans. Their equivalent of Adam Becker might say, “See, there’s no unified ‘h’ parameter for all the ways that humans allegedly out-perform chimpanzees. So don’t worry chaps, we chimpanzees will remain in control of our own destiny, and humans will forever remain as just weird naked apes.”

The next strawman is the assumption that the concern about out-of-control AI depends upon the maintenance of smooth exponential progress curves. Astonishingly, Becker devotes numerous pages to pointing out ways that exponential trends, such as Moore’s Law, slow down or even stop. This leads him to assert that “AI superintelligence is imaginary”. But the real question is: is more progress possible than we have already reached? In more detail:

  • Can more efficient computational hardware be invented? (Answer: yes, including new types of chips dedicated to new kinds of AI.)
  • Can extra data be fed into AI training? (Answer: yes, including cleverly constructed synthetic data.)
  • Can new architectures, beyond transformers, be introduced? (Answer: yes, and AI researchers are pursuing numerous possibilities.)
  • Can logical reasoning, such as chain-of-thought, be combined in productive new ways with existing neural networks? (Answer: yes, this is happening daily.)
  • Is there some fundamental reason why the human brain is the ultimate apex of the skills of predicting, planning interventions, and learning? (Answer: no, unless you are a believer in six-day creationism, or something equivalent.)

So, what matters isn’t the precise shape of the pace of progress. What matters is whether that progress can reach a point that enables AIs to improve the process of designing further AIs. That’s the tipping point which will introduce huge new uncertainty.

Becker tries to head off arguments that forthcoming new types of hardware, such as quantum computing, might bring AI closer. Quantum computing, as presently understood, isn’t suited to all computational tasks, he points out. But wait: the point is that it can significantly accelerate some computational tasks. AI can improve through smarter combinations of old-style hardware and new-style hardware. We don’t need to take Becker’s simplistic one-design-only approach as the end of the argument.

The slowdown in reaching new generations of traditional semiconductor chips does not mean the end of the broader attainment of wide benefits from improved hardware performance. Instead, AI progress now depends on how huge numbers of individual chips can be networked together. Moreover, with more hardware being available at lower cost, more widely distributed, this enables richer experimentation with new algorithms and new software architectures, thereby making yet more new AI breakthroughs more likely. Any idea that breakthroughs in AI have come to a brick wall is preposterous.

Next, Becker returns repeatedly to the strawman that the kinds of threats posed by AI superintelligence are just hypothetical and are far removed from our previous experience. Surely an AI that is able to self-introspect will be kinder, he argues. However, humans who are more intelligent – including having the ability to analyse their own thought processes – are by no means necessarily kinder. They may be psychopaths. Likewise, advanced AIs may be psychopaths – able to pretend concern for human wellbeing while that tactic suits them, but ready to incapacitate us all when the opportunity arises.

Indeed, the threats posed by ever more powerful AI are relatively straightforward extrapolations of dangers posed by existing AI systems (at the hands of human users who are hateful or naïve or resentful or simply out-of-their-depth). There’s no need to make any huge jump of imagination. That’s an argument I spell out in this Mindplex article.

Yet another strawman in the book is the idea that the danger-from-advanced-AI argument needs to be certain, and that it can be rejected if any uncertainty remains about it. Thus, when Becker finds AI safety advocates who are unwilling to pin down a precise probability for the likelihood of an AI-induced catastrophe, he switches from “uncertain about the chance of doom” to “unconcerned about the chance of doom”. When two different apparent experts offer opposing views on the likelihood of AI-induced doom, he always prefers the sceptic, and rushes to dismiss the other side. (Is he really so arrogant as to think he has a better grasp of the possibilities of AI-induced catastrophe than the international team of experts assembled by Yoshua Bengio? Apparently, yes he is.)

One final outrageous tactic Becker uses to justify disregarding someone’s view is to point out a questionable claim that person has made in another area. Thus, Nick Bostrom has made some shocking statements about the difference in abilities between people of different races. Therefore, all Bostrom’s views about the dangers of AI superintelligence can be set aside. Elon Musk naively imagines it will be relatively easy to terraform Mars to make it suitable for human habitation. Therefore, all Musk’s views about the dangers of AI superintelligence can, again, be set aside. You get the picture.

Constructive engagement

Instead of scorning these concerns, Becker should be engaging constructively with the community of thoughtful people who are (despite adverse headwinds) painstakingly exploring ways to get the best out of AI whilst avoiding the risks of catastrophe. This includes the Singapore Consensus, the Future of Life Institute, the Council of Presidents of the United Nations General Assembly, Control AI, Pause AI, The Millennium Project, AI Safety, the Kira Center, the Machine Intelligence Research Institute, the Center for AI Safety Research, the Centre for the Governance of AI, the Center for Human Compatible AI, the Leverhulme Centre for the Future of Intelligence, my own book “The Singularity Principles”, and much more.

That kind of constructive engagement might not lead to as many juicy personal anecdotes as Becker sprinkles throughout More Everything Forever, but it would provide much better service to humanity.

Conversely, you might ask: aren’t there any lessons for me (and other AI safety activists) in the light of the shortcomings highlighted by Becker in the thoughts and actions of many people who take the idea of the Technological Singularity seriously? Shouldn’t I be grateful to Becker for pointing out various predictions made by Ray Kurzweil which haven’t come to pass, the casual attitudes seemingly displayed by some singularitarians toward present-day risks arising from abuses of existing technology (including the ongoing emissions of greenhouse gases), the blatant links between the 2023 Techno-Optimist Manifesto of Marc Andreessen and the proto-fascist 1909 Futurist Manifesto of Filippo Marinetti, and so on?

My answer: yes, but. Almost nothing in Becker’s book was new for me. I have since 2021 frequently given presentations on the subject of “The Singularity Shadow” (the concept first appeared in my book Vital Foresight) – a set of confusions and wishful thinking which surrounds the subject of the Technological Singularity:

These confusions and wishful thinking form a kind of shadow around the central concept of the Technological Singularity – a shadow which obstructs a clearer perception of the risks and opportunities that are actually the most significant.

The Singularity Shadow misleads many people who should know better. That shadow of confusion helps to explain why various university professors of artificial intelligence, along with people with job titles such as “Head of AI” in large companies, often make statements about the likely capabilities of forthcoming new AI platforms that are, frankly, full of errors or deeply misleading.

I describe that shadow as consisting of seven overlapping areas:

  1. Singularity timescale determinism
  2. Singularity outcome determinism
  3. Singularity hyping
  4. Singularity risk complacency
  5. Singularity term overloading
  6. Singularity anti-regulation fundamentalism
  7. Singularity preoccupation

To be clear, there is a dual problem with the Singularity Shadow:

  • People within the shadow – singularity over-enthusiasts – make pronouncements about the Singularity that are variously overly optimistic, overly precise, or overly vague
  • People outside the shadow – singularity over-critics – notice these instances of unwarranted optimism, precision, or vagueness, and jump to the wrong conclusion that the entire field of discussion is infected with the same flaws.

Here’s a video that reviews the seven areas in the Singularity Shadow, and the damage this Shadow causes to thoughtful discussions about both the opportunities and the threats arising from the Singularity:

And if you want to follow the conversation one more step, this video looks more deeply at the reasons why people (such as Becker) are so insistent that the Singularity is (in his words) “unscientific”, “specious”, “imaginary”, and “hypothetical”:

That’s the ‘but’ part of my “yes, but” answer. The ‘yes’ part is that, yes, I need to reflect: after so many years of trying to significantly improve the conversation about both the opportunities and risks of the Singularity, the public conversation about it is still often dominated by Becker-style distractions and confusions.

Clearly, I need to up my game. We all need to up our game.

AGW and AGI

I’ll finish with one point of consensus: Becker is highly critical, in his book, of people who use their intelligence to deny the risks of accelerated global warming (AGW). Becker, like me, sees these risks as deeply concerning. We are both dismayed when evidently clever people come up with deceptive arguments to avoid taking climate change seriously. The real risk here isn’t of linear climate change, but rather of the climate reaching thresholds known as tipping points, where greater heat leads to dramatic changes in the earth’s ecosystem that result in even greater heat. Sudden changes in temperature, akin to that just described, can be observed in ancient geological transition points.

It’s the unpredictability of what happens at these tipping points – and the uncertainty over where these tipping points are located – that means humanity should be doubling down, hard, on reversing our greenhouse gas emissions. (The best book I’ve read on this topic recently, by the way, is A Climate of Truth, by Mike Berners-Lee. I unhesitatingly recommend it.)

Yet despite these risks, AGW deniers argue as follows: there is plenty of uncertainty. There are lots of different ways of measuring temperature. There are lots of different forecasts. They don’t all agree. That means we have plenty of time to work out solutions. In the meantime, inaction is fine. (Face palm!)

I’ve spelt this out, because Becker is equally guilty. He’s not an AGW denier, but an AGI denier – denying that we need to pay any serious attention to the risks of Artificial General Intelligence. There is plenty of uncertainty about AGI, he argues. Disagreement about the best way to build it. No uniform definition of ‘g’, general intelligence. No agreement on future scenarios. Therefore, we have plenty of time to work out how to deal with any hypothetical future AGI. (Face palm again!)

Actually, this is not just a matter of a face palm. It’s a matter of the utmost seriousness. The unpredictability makes things worse, not better. Becker has allowed his intelligence to be subverted to obscure one of the biggest risks facing humanity. And because he evidently has an audience that is psychologically predisposed to lap up his criticism of Silicon Valley leaders, the confusion he peddles is likely to spread significantly.

It’s all the more reason to engage sincerely and constructively with the wider community who are working to ensure that advanced AI turns out beneficial (a “BGI”) instead of catastrophic (a “CGI”).

26 February 2023

Ostriches and AGI risks: four transformations needed

Filed under: AGI, risks, Singularity, Singularity Principles — David Wood @ 12:48 am

I confess to having been pretty despondent at various times over the last few days.

The context: increased discussions on social media triggered by recent claims about AGI risk – such as I covered in my previous blogpost.

The cause of my despondency: I’ve seen far too many examples of people with scant knowledge expressing themselves with unwarranted pride and self-certainty.

I call these people the AGI ostriches.

It’s impossible for AGI to exist, one of these ostriches squealed. The probability that AGI can exist is zero.

Anyone concerned about AGI risks, another opined, fails to understand anything about AI, and has just got their ideas from Hollywood or 1950s science fiction.

Yet another claimed: Anything that AGI does in the world will be the inscrutable cosmic will of the universe, so we humans shouldn’t try to change its direction.

Just keep your hand by the off switch, thundered another. Any misbehaving AGI can easily be shut down. Problem solved! You didn’t think of that, did you?

Don’t give the robots any legs, shrieked yet another. Problem solved! You didn’t think of that, did you? You fool!

It’s not the ignorance that depressed me. It was the lack of interest shown by the AGI ostriches regarding alternative possibilities.

I had tried to engage some of the ostriches in conversation. Try looking at things this way, I asked. Not interested, came the answer. Discussions on social media never change any minds, so I’m not going to reply to you.

Click on this link to read a helpful analysis, I suggested. No need, came the answer. Nothing you have written could possibly be relevant.

And the ostriches rejoiced in their wilful blinkeredness. There’s no need to look in that direction, they said. Keep wearing the blindfolds!

(The following image is by the Midjourney AI.)

But my purpose in writing this blogpost isn’t to complain about individual ostriches.

Nor is my purpose to lament the near-fatal flaws in human nature, including our many cognitive biases, our emotional self-sabotage, and our perverse ideological loyalties.

Instead, my remarks will proceed in a different direction. What most needs to change isn’t the ostriches.

It’s the community of people who want to raise awareness of the catastrophic risks of AGI.

That includes me.

On reflection, we’re doing four things wrong. Four transformations are needed, urgently.

Without these changes taking place, it won’t be surprising if the ostriches continue to behave so perversely.

(1) Stop tolerating the Singularity Shadow

When they briefly take off their blindfolds, and take a quick peek into the discussions about AGI, ostriches often notice claims that are, in fact, unwarranted.

These claims confuse matters. They are overconfident claims about what can be expected from the advent of AGI, also known as the Technological Singularity. These claims form part of what I call the Singularity Shadow.

There are seven components in the Singularity Shadow:

  • Singularity timescale determinism
  • Singularity outcome determinism
  • Singularity hyping
  • Singularity risk complacency
  • Singularity term overloading
  • Singularity anti-regulation fundamentalism
  • Singularity preoccupation

If you’ve not come across the concept before, here’s a video all about it:

Or you can read this chapter from The Singularity Principles on the concept: “The Singularity Shadow”.

People who (like me) point out the dangers of badly designed AGI often too easily make alliances with people in the Singularity Shadow. After all, both groups of people:

  • Believe that AGI is possible
  • Believe that AGI might happen soon
  • Believe that AGI is likely to cause an unprecedented transformation in the human condition.

But the Singularity Shadow causes far too much trouble. It is time to stop being tolerant of its various confusions, wishful thinking, and distortions.

To be clear, I’m not criticising the concept of the Singularity. Far from it. Indeed, I consider myself a singularitarian, with the meaning I explain here. I look forward to more and more people similarly adopting this same stance.

It’s the distortions of that stance that now need to be countered. We must put our own house in order. Sharply.

Otherwise the ostriches will continue to be confused.

(2) Clarify the credible risk pathways

The AI paperclip maximiser has had its day. It needs to be retired.

Likewise the AI that “solves” cancer by, perversely, killing everyone on the planet.

Likewise the AI that “rescues” a woman from a burning building by hurling her out of the 20th floor window.

In the past, these thought experiments all helped the discussion about AGI risks, among people who were able to see the connections between these “abstract” examples and more complicated real-world scenarios.

But as more of the general public shows an interest in the possibilities of advanced AI, we urgently need a better set of examples. Explained, not by mathematics, nor by cartoonish simplifications, but in plain everyday language.

I’ve tried to offer some examples, for example in the section “Examples of dangers with uncontrollable AI” in the chapter “The AI Control Problem” of my book The Singularity Principles.

But it seems these scenarios still fail to convince. The ostriches find themselves bemused. Oh, that wouldn’t happen, they say.

So this needs more work. As soon as possible.

I anticipate starting from themes about which even the most empty-headed ostrich occasionally worries:

  1. The prospects of an arms race involving lethal autonomous weapons systems
  2. The risks from malware that runs beyond the control of the people who originally released it
  3. The dangers of geoengineering systems that seek to manipulate the global climate
  4. The “gain of function” research which can create ultra-dangerous pathogens
  5. The side-effects of massive corporations which give priority to incentives such as “increase click-through”
  6. The escalation in hatred stirred up by automated trolls with more ingenious “fake social media”

On top of these starting points, the scenarios I envision mix in AI systems with increasing power and increasing autonomy – AI systems which are, however, incompletely understood by the people who deploy them, and which might manifest terrible bugs in unexpected circumstances. (After all, AIs include software, and software generally contains bugs.)

If there’s not already a prize competition to encourage clearer communication of such risk scenarios, in ways that uphold credibility as well as comprehensibility, there should be!

(3) Clarify credible solution pathways

Even more important than clarifying the AGI risk scenarios is to clarify some credible pathways to managing these risks.

Without seeing such solutions, ostriches fall into a self-reinforcing loop of denial. They think to themselves as follows:

  • Any possible solution to AGI risks seems unlikely to be successful
  • Any possible solution to AGI risks seems likely to have bad consequences in its own right
  • These thoughts are too horrible to contemplate
  • Therefore we had better believe the AGI risks aren’t actually real
  • Therefore anyone who makes AGI risks seem real needs to be silenced, ridiculed, or mocked.

Just as we need better communication of AGI risk scenarios, we need better communication of positive examples that are relevant to potential solutions:

  • Examples of when society collaborated to overcome huge problems which initially seemed impossible
  • Successful actions against the tolerance of drunk drivers, against dangerous features in car design, against the industrial pollutants which caused acid rain, and against the chemicals which depleted the ozone layer
  • Successful actions by governments to limit the powers of corporate monopolies
  • The de-escalation by Ronald Reagan and Mikhail Gorbachev of the terrifying nuclear arms race between the USA and the USSR.

But we also need to make it clearer how AGI risks can be addressed in practice. This includes a better understanding of:

  • Options for AIs that are explainable and interpretable – with the aid of trusted tools built from narrow AI
  • How AI systems can be designed to be free from the unexpected “emergence” of new properties or subgoals
  • How trusted monitoring can be built into key parts of our infrastructure, to provide early warnings of potential AI-induced catastrophic failures
  • How powerful simulation environments can be created to explore potential catastrophic AI failure modes (and solutions to these issues) in the safety of a virtual model
  • How international agreements can be built up, initially from a “coalition of the willing”, to impose powerful penalties in cases when AI is developed or deployed in ways that violate agreed standards
  • How research into AGI safety can be managed much more effectively, worldwide, than is presently the case.

Again, as needed, significant prizes should be established to accelerate breakthroughs in all these areas.

(4) Divide and conquer

The final transformation needed is to divide up the overall huge problem of AGI safety into more manageable chunks.

What I’ve covered above already suggests a number of vitally important sub-projects.

Specifically, it is surely worth having separate teams tasked with investigating, with the utmost seriousness, a range of potential solutions for the complications that advanced AI brings to each of the following:

  1. The prospects of an arms race involving lethal autonomous weapons systems
  2. The risks from malware that runs beyond the control of the people who originally released it
  3. The dangers of geoengineering systems that seek to manipulate the global climate
  4. The “gain of function” research which can create ultra-dangerous pathogens
  5. The side-effects of massive corporations which give priority to incentives such as “increase click-through”
  6. The escalation in hatred stirred up by automated trolls with more ingenious “fake social media”

(Yes, these are the same six scenarios for catastrophic AI risk that I listed in section (2) earlier.)

Rather than trying to “boil the entire AGI ocean”, these projects each appear to require slightly less boiling.

Once candidate solutions have been developed for one or more of these risk scenarios, the outputs from the different teams can be compared with each other.

What else should be added to the lists above?

8 June 2022

Pre-publication review: The Singularity Principles

Filed under: books, Singularity, Singularity Principles — Tags: — David Wood @ 9:23 am

I’ve recently been concentrating on finalising the content of my forthcoming new book, The Singularity Principles.

The reasons why I see this book as both timely and necessary are explained in the extract below, taken from the introduction to the book.

This link provides pointers to the full text of every chapter in the book. (Or use the links in the listing below of the extended table of contents.)

Please get in touch with me if you would prefer to read the pre-publication text in PDF format, rather than on the online HTML pages linked above.

At this stage, I will gratefully appreciate any feedback:

  • Aspects of the book that I should consider changing
  • Aspects of the book that you particularly like.

Feedback on any parts of the book will be welcome. It’s by no means necessary for you to read the entire text. (However, I hope you will find it sufficiently interesting that you will end up reading more than you originally planned…)

By the way, it’s a relatively short book, compared to some others I’ve written. The word count is a little over 50,000, which works out at around 260 pages of fairly large text on 5″x8″ paper.

I will also appreciate any commendations or endorsements, which I can include with the publicity material for the book, to encourage more people to pay attention to it.

The timescale I have in mind: I will release electronic and physical copies of the book some time early next month (July), followed soon afterwards by an audio version.

Therefore, if you’re thinking of dipping into any chapters to provide feedback and/or endorsements, the sooner the better!

Thanks in anticipation!

Preface

This book is dedicated to what may be the most important concept in human history, namely, the Singularity – what it is, what it is not, the steps by which we may reach it, and, crucially, how to make it more likely that we’ll experience a positive singularity rather than a negative singularity.

For now, here’s a simple definition. The Singularity is the emergence of Artificial General Intelligence (AGI), and the associated transformation of the human condition. Spoiler alert: that transformation will be profound. But if we’re not paying attention, it’s likely to be profoundly bad.

Despite the importance of the concept of the Singularity, the subject receives nothing like the attention it deserves. When it is discussed, it often receives scorn or ridicule. Alas, you’ll hear sniggers and see eyes rolling.

That’s because, as I’ll explain, there’s a kind of shadow around the concept – an unhelpful set of distortions that make it harder for people to fully perceive the real opportunities and the real risks that the Singularity brings.

These distortions grow out of a wider confusion – confusion about the complex interplay of forces that are leading society to the adoption of ever-more powerful technologies, including ever-more powerful AI.

It’s my task in this book to dispel the confusion, to untangle the distortions, to highlight practical steps forward, and to attract much more serious attention to the Singularity. The future of humanity is at stake.

Let’s start with the confusion.

Confusion, turbulence, and peril

The 2020s could be called the Decade of Confusion. Never before has so much information washed over everyone, leaving us, all too often, overwhelmed, intimidated, and distracted. Former certainties have dimmed. Long-established alliances have fragmented. Flurries of excitement have pivoted quickly to chaos and disappointment. These are turbulent times.

However, if we could see through the confusion, distraction, and intimidation, what we should notice is that human flourishing is, potentially, poised to soar to unprecedented levels. Fast-changing technologies are on the point of providing a string of remarkable benefits. We are near the threshold of radical improvements to health, nutrition, security, creativity, collaboration, intelligence, awareness, and enlightenment – with these improvements being available to everyone.

Alas, these same fast-changing technologies also threaten multiple sorts of disaster. These technologies are two-edged swords. Unless we wield them with great skill, they are likely to spin out of control. If we remain overwhelmed, intimidated, and distracted, our prospects are poor. Accordingly, these are perilous times.

These dual future possibilities – technology-enabled sustainable superabundance, versus technology-induced catastrophe – have featured in numerous discussions that I have chaired at London Futurists meetups going all the way back to March 2008.

As these discussions have progressed, year by year, I have gradually formulated and refined what I now call the Singularity Principles. These principles are intended:

  • To steer humanity’s relationships with fast-changing technologies,
  • To manage multiple risks of disaster,
  • To enable the attainment of remarkable benefits,
  • And, thereby, to help humanity approach a profoundly positive singularity.

In short, the Singularity Principles are intended to counter today’s widespread confusion, distraction, and intimidation, by providing clarity, credible grounds for hope, and an urgent call to action.

This time it’s different

I first introduced the Singularity Principles, under that name and with the same general format, in the final chapter, “Singularity”, of my 2021 book Vital Foresight: The Case for Active Transhumanism. That chapter is the culmination of a 642-page book. The preceding sixteen chapters of that book set out at some length the challenges and opportunities that these principles need to address.

Since the publication of Vital Foresight, it has become evident to me that the Singularity Principles require a short, focused book of their own. That’s what you now hold in your hands.

The Singularity Principles is by no means the only new book on the subject of the management of powerful disruptive technologies. The public, thankfully, are waking up to the need to understand these technologies better, and numerous authors are responding to that need. As one example, the phrase “Artificial Intelligence” forms part of the title of scores of new books.

I have personally learned many things from some of these recent books. To speak frankly, however, I find myself dissatisfied by the prescriptions their authors advance. They generally fail to appreciate the full extent of the threats and opportunities ahead; and even when they do see the true scale of these issues, the recommendations they propose strike me as inadequate.

Therefore, I cannot keep silent.

Accordingly, I present in this new book the content of the Singularity Principles, brought up to date in the light of recent debates and new insights. The book also covers:

  • Why the Singularity Principles are sorely needed
  • The source and design of these principles
  • The significance of the term “Singularity”
  • Why there is so much unhelpful confusion about “the Singularity”
  • What’s different about the Singularity Principles, compared to recommendations of other analysts
  • The kinds of outcomes expected if these principles are followed
  • The kinds of outcomes expected if these principles are not followed
  • How you – dear reader – can, and should, become involved, finding your place in a growing coalition
  • How these principles are likely to evolve further
  • How these principles can be put into practice, all around the world – with the help of people like you.

The scope of the Principles

To start with, the Singularity Principles can and should be applied to the anticipation and management of the NBIC technologies that are at the heart of the current, fourth industrial revolution. NBIC – nanotech, biotech, infotech, and cognotech – is a quartet of interlinked technological disruptions which are likely to grow significantly stronger as the 2020s unfold. Each of these four technological disruptions has the potential to fundamentally transform large parts of the human experience.

However, the same set of principles can and should also be applied to the anticipation and management of the core technology that will likely give rise to a fifth industrial revolution, namely the technology of AGI (artificial general intelligence), and the rapid additional improvements in artificial superintelligence that will likely follow fast on the heels of AGI.

The emergence of AGI is known as the technological singularity – or, more briefly, as the Singularity.

In other words, the Singularity Principles apply both:

  • To the longer-term lead-up to the Singularity, from today’s fast-improving NBIC technologies,
  • And to the shorter-term lead-up to the Singularity, as AI gains more general capabilities.

In both cases, anticipation and management of possible outcomes will be of vital importance.

By the way – in case it’s not already clear – please don’t expect a clever novel piece of technology, or some brilliant technical design, to somehow solve, by itself, the challenges posed by NBIC technologies and AGI. These challenges extend far beyond what could be wrestled into submission by some dazzling mathematical wizardry, by the incorporation of an ingenious new piece of silicon at the heart of every computer, or by any other “quick fix”. Indeed, the considerable effort being invested by some organisations in a search for that kind of fix is, arguably, a distraction from a sober assessment of the bigger picture.

Better technology, better product design, better mathematics, and better hardware can all be part of the full solution. But that full solution also needs, critically, to include aspects of organisational design, economic incentives, legal frameworks, and political oversight. That’s the argument I develop in the chapters ahead.

Extended table of contents

For your convenience, here’s a listing of the main section headings for all the chapters in this book.

0. Preface

  • Confusion, turbulence, and peril
  • This time it’s different
  • The scope of the Principles
  • Collective insight
  • The short form of the Principles
  • The four areas covered by the Principles
  • What lies ahead

1. Background: Ten essential observations

  • Tech breakthroughs are unpredictable (both timing and impact)
  • Potential complex interactions make prediction even harder
  • Changes in human attributes complicate tech changes
  • Greater tech power enables more devastating results
  • Different perspectives assess “good” vs. “bad” differently
  • Competition can be hazardous as well as beneficial
  • Some tech failures would be too drastic to allow recovery
  • A history of good results is no guarantee of future success
  • It’s insufficient to rely on good intentions
  • Wishful thinking predisposes blindness to problems

2. Fast-changing technologies: risks and benefits

  • Technology risk factors
  • Prioritising benefits?
  • What about ethics?
  • The transhumanist stance

2.1 Special complications with artificial intelligence

  • Problems with training data
  • The black box nature of AI
  • Interactions between multiple algorithms
  • Self-improving AI
  • Devious AI
  • Four catastrophic error modes
  • The broader perspective

2.2 The AI Control Problem

  • The gorilla problem
  • Examples of dangers with uncontrollable AI
  • Proposed solutions (which don’t work)
  • The impossibility of full verification
  • Emotion misses the point
  • No off switch
  • The ineffectiveness of tripwires
  • Escaping from confinement
  • The ineffectiveness of restrictions
  • No automatic super ethics
  • Issues with hard-wiring ethical principles

2.3 The AI Alignment Problem

  • Asimov’s Three Laws
  • Ethical dilemmas and trade-offs
  • Problems with proxies
  • The gaming of proxies
  • Simple examples of profound problems
  • Humans disagree
  • No automatic super ethics (again)
  • Other options for answers?

2.4 No easy solutions

  • No guarantees from the free market
  • No guarantees from cosmic destiny
  • Planet B?
  • Humans merging with AI?
  • Approaching the Singularity

3. What is the Singularity?

  • Breaking down the definition
  • Four alternative definitions
  • Four possible routes to the Singularity
  • The Singularity and AI self-awareness
  • Singularity timescales
  • Positive and negative singularities
  • Tripwires and canary signals
  • Moving forward

3.1 The Singularitarian Stance

  • AGI is possible
  • AGI could happen within just a few decades
  • Winner takes all
  • The difficulty of controlling AGI
  • Superintelligence and superethics
  • Not the Terminator
  • Opposition to the Singularitarian Stance

3.2 A complication: the Singularity Shadow

  • Singularity timescale determinism
  • Singularity outcome determinism
  • Singularity hyping
  • Singularity risk complacency
  • Singularity term overloading
  • Singularity anti-regulation fundamentalism
  • Singularity preoccupation
  • Looking forward

3.3 Bad reasons to deny the Singularity

  • The denial of death
  • How special is the human mind?
  • A credible positive vision

4. The question of urgency

  • Factors causing AI to improve
  • 15 options on the table
  • The difficulty of measuring progress
  • Learning from Christopher Columbus
  • The possibility of fast take-off

5. The Singularity Principles in depth

5.1 Analysing goals and potential outcomes

  • Question desirability
  • Clarify externalities
  • Require peer reviews
  • Involve multiple perspectives
  • Analyse the whole system
  • Anticipate fat tails

5.2 Desirable characteristics of tech solutions

  • Reject opacity
  • Promote resilience
  • Promote verifiability
  • Promote auditability
  • Clarify risks to users
  • Clarify trade-offs

5.3 Ensuring development takes place responsibly

  • Insist on accountability
  • Penalise disinformation
  • Design for cooperation
  • Analyse via simulations
  • Maintain human oversight

5.4 Evolution and enforcement

  • Build consensus regarding principles
  • Provide incentives to address omissions
  • Halt development if principles are not upheld
  • Consolidate progress via legal frameworks

6. Key success factors

  • Public understanding
  • Persistent urgency
  • Reliable action against noncompliance
  • Public funding
  • International support
  • A sense of inclusion and collaboration

7. Questions arising

7.1 Measuring human flourishing

  • Some example trade-offs
  • Updating the Universal Declaration of Human Rights
  • Constructing an Index of Human and Social Flourishing

7.2 Trustable monitoring

  • Moore’s Law of Mad Scientists
  • Four projects to reduce the dangers of WMDs
  • Detecting mavericks
  • Examples of trustable monitoring
  • Watching the watchers

7.3 Uplifting politics

  • Uplifting regulators
  • The central role of politics
  • Toward superdemocracy
  • Technology improving politics
  • Transcending party politics
  • The prospects for political progress

7.4 Uplifting education

  • Top level areas of the Vital Syllabus
  • Improving the Vital Syllabus

7.5 To AGI or not AGI?

  • Global action against the creation of AGI?
  • Possible alternatives to AGI?
  • A dividing line between AI and AGI?
  • A practical proposal

7.6 Measuring progress toward AGI

  • Aggregating expert opinions
  • Metaculus predictions
  • Alternative canary signals for AGI
  • AI index reports

7.7 Growing a coalition of the willing

  • Risks and actions

Image credit

The draft book cover shown above includes a design by Pixabay member Ebenezer42.

7 February 2022

Options for controlling artificial superintelligence

What are the best options for controlling artificial superintelligence?

Should we confine it in some kind of box (or simulation), to prevent it from roaming freely over the Internet?

Should we hard-wire into its programming a deep respect for humanity?

Should we prevent it from having any sense of agency or ambition?

Should we ensure that, before it takes any action, it always double-checks its plans with human overseers?

Should we create dedicated “narrow” intelligence monitoring systems, to keep a vigilant eye on it?

Should we build in a self-destruct mechanism, just in case it stops responding to human requests?

Should we insist that it shares its greater intelligence with its human overseers (in effect turning them into cyborgs), to avoid humanity being left behind?

More drastically, should we simply prevent any such systems from coming into existence, by forbidding any research that could lead to artificial superintelligence?

Alternatively, should we give up on any attempt at control, and trust that the superintelligence will be thoughtful enough to always “do the right thing”?

Or is there a better solution?

If you have clear views on this question, I’d like to hear from you.

I’m looking for speakers for a forthcoming London Futurists online webinar dedicated to this topic.

I envision three speakers each taking up to 15 minutes to set out their proposals. Once all the proposals are on the table, the real discussion will begin – with the speakers interacting with each other, and responding to questions raised by the live audience.

The date for this event remains to be determined. I will find a date that is suitable for the speakers who have the most interesting ideas to present.

As I said, please get in touch if you have questions or suggestions about this event.

Image credit: the above graphic includes work by Pixabay user Geralt.

PS For some background, here’s a video recording of the London Futurists event from last Saturday, in which Roman Yampolskiy gave several reasons why control of artificial superintelligence will be deeply difficult.

For other useful background material, see the videos on the Singularity page of the Vital Syllabus project.

20 July 2018

Christopher Columbus and the surprising future of AI

Filed under: AGI, predictability, Singularity — Tags: , , , , — David Wood @ 5:49 pm

There are plenty of critics who are sceptical about the future of AI. The topic has been over-hyped, they say. We don’t need to worry about the longer-term repercussions of AI with superhuman capabilities, since we’re many decades – perhaps centuries – from anything approaching AGI (artificial general intelligence) with common-sense reasoning that matches (or surpasses) that of humans. As for AI destroying jobs, that, too, is a false alarm – or so the critics insist. AI will create at least as many jobs as it destroys.

In my previous blog post, Serious question over PwC’s report on the impact of AI on jobs, I offered some counters to these critics. To my mind, this is no time for complacency: AI could accelerate in its capabilities, and take us by surprise. The kinds of breakthroughs that, in a previous era, might have been expected to take many decades, could actually take place in just a few short years. Rather than burying our head in the sands, denying the possibility of any such acceleration, we need to pay more attention to the trends of technological change and the potential for disruptive new innovations.

The Christopher Columbus angle

Overnight, I’ve been reminded of an argument that I’ve used previously – towards the end of a rather long blogpost. It’s the argument that critics of the future of AI are similar to the critics of Christopher Columbus – the people who said, before his 1492 voyage across the Atlantic in search of a westerly route to Asia, that the effort was bound to be a bad investment.

Bear with me while I retell this analogy.

For years, Columbus tried to drum up support for what most people considered to be a hare-brained scheme. Most observers concluded that Columbus had made a significant mistake – he estimated that the distance from the Canary Islands (off the coast of Morocco) to Japan was around 3,700 km, whereas the generally accepted figure was closer to 20,000 km. Indeed, the true size of the Earth had been known since the 3rd century BC, thanks to a calculation by Eratosthenes based on observations of shadows at different locations.
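Eratosthenes’ reasoning can be sketched in a few lines of arithmetic. (The figures below are the commonly cited approximations from secondary accounts, not numbers from this post: a shadow angle of about 7.2 degrees at Alexandria, and roughly 800 km between Alexandria and Syene.)

```python
# Eratosthenes' estimate of the Earth's circumference (3rd century BC).
# At noon on the summer solstice, the Sun was directly overhead at Syene,
# while at Alexandria a vertical rod cast a shadow at about 7.2 degrees.
shadow_angle_deg = 7.2   # shadow angle at Alexandria (approximate)
distance_km = 800        # Alexandria-to-Syene distance (~5,000 stadia)

# The shadow angle equals the difference in latitude between the two
# cities, so the full 360-degree circumference scales up proportionally.
circumference_km = (360 / shadow_angle_deg) * distance_km
print(circumference_km)  # prints 40000.0 - close to the modern ~40,075 km
```

The striking point for the Columbus story is that this back-of-the-envelope figure was already in the right ballpark some seventeen centuries before his voyage – which is why the learned members of the European courts were right to distrust his 3,700 km estimate.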

Accordingly, when Columbus presented his bold proposal to courts around Europe, the learned members of the courts time and again rejected the idea. The effort would be hugely larger than Columbus supposed, they said. It would be a fruitless endeavour.

Columbus, an autodidact, wasn’t completely crazy. He had done a lot of his own research. However, he was misled by a number of factors:

  • Confusion between various ancient units of distance (the “Arabic mile” and the “Roman mile”)
  • How many degrees of longitude the Eurasian landmass occupied (225 degrees versus 150 degrees)
  • A speculative 1474 map, by the Florentine astronomer Toscanelli, which showed a mythical island “Antilla” located to the east of Japan (named as “Cippangu” in the map).

You can read the details in the Wikipedia article on Columbus, which provides numerous additional reference points. The article also contains a copy of Toscanelli’s map, with the true location of the continents of North and South America superimposed for reference.

No wonder Columbus thought his plan might work after all. Nevertheless, the 1490s equivalents of today’s VCs kept saying “No” to his pitches. Finally, spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to take the risk of supporting his adventure. After stopping in the Canaries to restock, the Nina, the Pinta, and the Santa Maria set off westward. Five weeks later, the crew spotted land, in what we now call the Bahamas. And the rest is history.

But it wasn’t the history expected by Columbus, or by his backers, or by his critics. No-one had foreseen that a huge continent existed in the oceans in between Europe and Japan. None of the ancient writers – either secular or religious – had spoken of such a continent. Nevertheless, once Columbus had found it, the history of the world proceeded in a very different direction – including mass deaths from infectious diseases transmitted from the European sailors, genocide and cultural apocalypse, and enormous trade in both goods and slaves. In due course, it would be the ingenuity and initiative of people subsequently resident in the Americas that propelled humans beyond the Earth’s atmosphere all the way to the moon.

What does this have to do with the future of AI?

Rational critics may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

Just as the contemporaries of Columbus erred in presuming they already knew all the large features of the earth’s continents (after all: if America really existed, surely God would have written about it in the Bible…), modern-day critics of AI can err in presuming they already know all the large features of the landscape of possible artificial minds.

When contemplating the space of all possible minds, some humility is in order. We cannot foretell in advance what configurations of intelligence are possible. We don’t know what may happen, if separate modules of reasoning are combined in innovative ways. After all, there are many aspects of the human mind which remain poorly understood.

When critics say that it is unlikely that present-day AI mechanisms will take us all the way to AGI, they are very likely correct. But it would be a horrendous error to draw the conclusion that meaningful new continents of AI capability are inevitably still the equivalent of 20,000 km into the distance. The fact is, we simply don’t know. And for that reason, we should keep an open mind.

One day soon, indeed, we might read news of some new “AUI” having been discovered – some Artificial Unexpected Intelligence, which changes history. It won’t be AGI, but it could have all kinds of unexpected consequences.

Beyond the Columbus analogy

Every analogy has its drawbacks. Here are three ways in which the discovery of an AUI could be different from the discovery by Columbus of America:

  1. In the 1490s, there was only one Christopher Columbus. Nowadays, there are scores (perhaps hundreds) of schemes underway to try to devise new models of AI. Many of these are proceeding with significant financial backing.
  2. Whereas the journey across the Atlantic (and, eventually, the Pacific) could be measured by a single variable (longitude), the journey across the vast multidimensional landscape of artificial minds is much less predictable. That’s another reason to keep an open mind.
  3. Discovering an AUI could drastically transform the future of exploration in the landscape of artificial minds. Assisted by AUI, we might get to AGI much quicker than without it. Indeed, in some scenarios, it might take only a few months after we reach AUI for us (now going much faster than before) to reach AGI. Or days. Or hours.

Footnote

If you’re in or near Birmingham on 11th September, I’ll be giving a Funzing talk on how to assess the nature of the risks and opportunities from superhuman AI. For more details, see here.


30 January 2014

A brilliant example of communication about science and humanity


Do you enjoy great detective puzzles? Do you like noticing small anomalies, and turning them into clues to an unexpected explanation? Do you like watching world-class scientists at work, piecing together insights to create new theories, and coping with disappointments when their theories appear to be disproved?

In the book “Our mathematical universe”, the mysteries being addressed are some of the very biggest imaginable:

  • What is everything made out of?
  • Where does the universe come from? For example, what made the Big Bang go “bang”?
  • What gives science its authority to speak with so much confidence about matters such as the age and size of the universe?
  • Is it true that the constants of nature appear remarkably “fine-tuned” so as to allow the emergence of life – in a way suggesting a miracle?
  • What does modern physics (including quantum mechanics) have to teach us about mind and consciousness?
  • What are the chances of other intelligent life existing in our galaxy (or even elsewhere in our universe)?
  • What lies in the future of the human race?

The author, Max Tegmark, is a Swedish-born professor of physics at MIT. He’s made a host of significant contributions to the development of cosmology – some of which you can read about in the book. But in this book he also shows himself, in my view, to be a first-class philosopher and a first-class communicator.

Indeed, this may be the best book on the philosophy of physics that I have ever read. It also has important implications for the future of humanity.

There are some very big ideas in the book. It gives reasons for believing that our universe exists alongside no fewer than four different types of parallel universes. The “level 4 multiverse” is probably one of the grandest conceptions in all of philosophy. (What’s more, I’m inclined to think it’s the correct description of reality. At its heart, despite its grandness, it’s actually a very simple theory, which is a big plus in its favour.)

Much of the time, the writing in the book is accessible to people with pre-university level knowledge of science. On occasion, the going gets harder, but readers should be able to skip over these sections. I recommend reading the book all the way through, since the last chapter contains many profound ideas.

I think you’ll like this book if:

  • You have a fondness for pure mathematics
  • You recognise that the scientific explanation of phenomena can be every bit as uplifting as pre-scientific supernatural explanations
  • You are ready to marvel at the ingenuity of scientific investigators going all the way back to the ancient Greeks (including those who first measured the distance from the Earth to the Sun)
  • You are critical of “quantum woo woo” hand-waving that says that quantum mechanics proves that consciousness is somehow a non-local agent (and that minds will survive bodily death)
  • You want to find out more about Hugh Everett, the physicist who first proposed that “the quantum wave function never collapses”
  • You have a hunch that there’s a good answer to the question “why is there something rather than nothing?”
  • You want to see scientists in action, when they are confronted by evidence that their favoured theories are disproved by experiment
  • You’re ready to laugh at the misadventures that a modern cosmologist experiences (including eminent professors falling asleep in the audience of his lectures)
  • You’re interested in the considered viewpoint of a leading scientist about matters of human existential risk, including nuclear wars and the technological singularity.

Even more than all these good reasons, I highlight this book as an example of what the world badly needs: clear, engaging advocacy of the methods of science and reason, as opposed to mysticism and obscurantism.

Footnote: For my own views about the meaning of quantum mechanics, see my earlier blogpost “Schrödinger’s Rabbits”.

22 February 2013

Controversies over singularitarian utopianism

I shouldn’t have been surprised at the controversy that arose.

The cause was an hour-long lecture with 55 slides, ranging far and wide over a range of disruptive near-future scenarios, covering both upside and downside. The basic format of the lecture was: first the good news, and then the bad news. As stated on the opening slide,

Some illustrations of the enormous potential first, then some examples of how adding a high level of ambient stupidity might mean we might make a mess of it.

The speaker was Ian Pearson, described on his company website as “futurologist, conference speaker, regular media guest, strategist and writer”. The website continues, boldly,

Anyone can predict stuff, but only a few get it right…

Ian Pearson has been a full time futurologist since 1991, with a proven track record of over 85% accuracy at the 10 year horizon.

Ian was speaking, on my invitation, at the London Futurists last Saturday. His chosen topic was audacious in scope:

A Singularitarian Utopia Or A New Dark Age?

We’re all familiar with the idea of the singularity, the end-result of rapid acceleration of technology development caused by positive feedback. This will add greatly to human capability, not just via gadgets but also through direct body and mind enhancement, and we’ll mess a lot with other organisms and AIs too. So we’ll have superhumans and super AIs as part of our society.

But this new technology won’t bring a utopia. We all know that some powerful people, governments, companies and terrorists will also add lots of bad things to the mix. The same technology that lets you enhance your senses or expand your mind also allows greatly increased surveillance and control, eventually to the extremes of direct indoctrination and zombification. Taking the forces that already exist, of tribalism, political correctness, secrecy for them and exposure for us, and so on, it’s clear that the far future will be a weird mixture of fantastic capability, spoiled by abuse…

There were around 200 people in the audience, listening as Ian progressed through a series of increasingly mind-stretching technology opportunities. Judging by the comments posted online afterwards, some of the audience deeply appreciated what they heard:

Thank you for a terrific two hours, I have gone away full of ideas; I found the talk extremely interesting indeed…

I really enjoyed this provocative presentation…

Provocative and stimulating…

Very interesting. Thank you for organizing it!…

Amazing and fascinating!…

But not everyone was satisfied. Here’s an extract from one negative comment:

After the first half (a trippy sub-SciFi brainstorm session) my only question was, “What Are You On?”…

Another audience member wrote his own blogpost about the meeting:

A Singularitanian Utopia or a wasted afternoon?

…it was a warmed-over mish-mash of technological cornucopianism, seasoned with Daily Mail-style reactionary harrumphing about ‘political correctness gone mad’.

These are just the starters of negative feedback; I’ll get to others shortly. As I review what was said in the meeting, and look at the spirited ongoing exchange of comments online, some thoughts come to my mind:

  • Big ideas almost inevitably provoke big reactions; this talk had a lot of particularly big ideas
  • In some cases, the negative reactions to the talk arise from misunderstandings, due in part to so much material being covered in the presentation
  • In other cases, I see the criticisms as reactions to the seeming over-confidence of the speaker (“…a proven track record of over 85% accuracy”)
  • In yet other cases, I share the negative reactions the talk generated; my own view of the near-future landscape significantly differs from the one presented on stage
  • In nearly all cases, it’s worth taking the time to progress the discussion further
  • After all, if we get our forecasts of the future wrong, and fail to make adequate preparations for the disruptions ahead, it could make a huge difference to our collective well-being.

So let’s look again at some of the adverse reactions. My aim is to raise them in a way that people who didn’t attend the talk should be able to follow the analysis.

(1) Is imminent transformation of much of human life a realistic scenario? Or are these ideas just science fiction?

The main driver for belief in the possible imminent transformation of human life, enabled by rapidly changing technology, is the observation of progress towards “NBIC” convergence.

Significant improvements are taking place, almost daily, in our capabilities to understand and control atoms (Nano-tech), genes and other areas of life-sciences (Bio-tech), bits (Info-comms-tech), and neurons and other areas of mind (Cogno-tech). Importantly, improvements in these different fields are interacting with each other.

As Ian Pearson described the interactions:

  • Nanotech gives us tiny devices
  • Tiny sensors help neuroscience figure out how the mind works
  • Insights from neuroscience feed into machine intelligence
  • Improving machine intelligence accelerates R&D in every field
  • Biotech and IT advances make body and machine connectable

Will all the individual possible applications of NBIC convergence described by Ian happen in precisely the way he illustrated? Very probably not. The future’s not as predictable as that. But something similar could well happen:

  • Cheaper forms of energy
  • Tissue-cultured meat
  • Space exploration
  • Further miniaturisation of personal computing (wearable computing, and even “active skin”)
  • Smart glasses
  • Augmented reality displays
  • Gel computing
  • IQ and sensory enhancement
  • Dream linking
  • Human-machine convergence
  • Digital immortality: “the under 40s might live forever… but which body would you choose?”

(2) Is a focus on smart cosmetic technology an indulgent distraction from pressing environmental issues?

Here’s one of the comments raised online after the talk:

Unfortunately any respect due was undermined by his contempt for the massive environmental challenges we face.

Trivial contact lens / jewellery technology can hang itself, if our countryside is choked by yoghurt factory fumes.

The reference to jewellery took issue with remarks in the talk such as the following:

Miniaturisation will bring everyday IT down to jewellery size…

Decoration; Social status; Digital bubble; Tribal signalling…

In contrast, the talk positioned greater use of technology as the solution to environmental issues, rather than as something to exacerbate these issues. Smaller (jewellery-sized) devices, created with a greater attention to recyclability, will diminish the environmental footprint. Ian claimed that:

  • We can produce more of everything than people need
  • Improved global land management could feed up to 20 billion people
  • Clean water will be plentiful
  • We will also need less and waste less
  • Long term pollution will decline.

Nevertheless, he acknowledged that there are some short-term problems, ahead of the time when accelerating NBIC convergence can be expected to provide more comprehensive solutions:

  • Energy shortage is a short to mid term problem
  • Real problems are short term.

Where there’s room for real debate is the extent of these shorter-term problems. Discussion on the threats from global warming brought these disagreements into sharp focus.

(3) How should singularitarians regard the threat from global warming?

Towards the end of his talk, Ian showed a pair of scales, weighing up the wins and losses of NBIC technologies and a potential singularity.

The “wins” column included health, growth, wealth, fun, and empowerment.

The “losses” column included control, surveillance, oppression, directionless, and terrorism.

One of the first questions from the floor, during the Q&A period in the meeting, asked why the risk of environmental destruction was not on the list of possible future scenarios. This criticism was echoed by online comments:

The complacency about CO2 going into the atmosphere was scary…

If we risk heading towards an environmental abyss let’s do something about what we do know – fossil fuel burning.

During his talk, I picked up on one of Ian’s comments about not being particularly concerned about the risks of global warming. I asked, what about the risks of adverse positive feedback cycles, such as increasing temperatures triggering the release of vast ancient stores of methane gas from frozen tundra, accelerating the warming cycle further? That could lead to temperature increases that are much more rapid than presently contemplated, along with lots of savage disturbance (storms, droughts, etc).

Ian countered that it was a possibility, but he had the following reservations:

  • He thought these positive feedback loops would only kick into action when baseline temperature rose by around 2 degrees
  • In the meantime, global average temperatures have stopped rising, over the last eleven years
  • He estimates he spends a couple of hours every day, keeping an eye on all sides of the global warming debate
  • There are lots of exaggerations and poor science on both sides of the debate
  • Other factors such as the influence of solar cycles deserve more research.

Here’s my own reaction to these claims:

  • The view that global average temperatures have stopped rising is, among serious scientists, very much a minority position; see e.g. this rebuttal on Carbon Brief
  • Even if there’s only a small probability of a runaway spurt of accelerated global warming in the next 10-15 years, we need to treat that risk very seriously – in the same way that, for example, we would be loath to take a transatlantic flight if we were told there was a 5% chance of the airplane disintegrating mid-flight.
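The airplane analogy rests on simple expected-value arithmetic: a small probability of a catastrophic outcome can still dominate a decision. Here is a minimal sketch of that reasoning; the 5% figure comes from the analogy above, but the loss values and the “safe option” cost are illustrative assumptions of mine, not numbers from the talk:

```python
# Illustrative expected-value sketch: why a small probability of a
# catastrophic outcome can dominate a decision. The loss values are
# hypothetical, chosen only to mirror the 5%-flight analogy.

def expected_loss(p_catastrophe: float, catastrophic_loss: float,
                  baseline_loss: float = 0.0) -> float:
    """Expected loss when catastrophe occurs with probability p_catastrophe."""
    return (p_catastrophe * catastrophic_loss
            + (1 - p_catastrophe) * baseline_loss)

# A 5% chance of losing everything (normalised loss = 1.0)
risky_flight = expected_loss(0.05, 1.0)

# A certain but small inconvenience (hypothetical cost of the cautious route)
safe_option = 0.01

print(risky_flight)                 # 0.05
print(risky_flight > safe_option)   # the rare catastrophe dominates
```

The point of the sketch is only that rational risk assessment multiplies probability by magnitude: a 5% chance of total loss outweighs a certain minor cost, which is why “only a small probability” is not grounds for complacency.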

Nevertheless, I did not want the entire meeting to divert into a debate about global warming – “that deserves a full meeting in its own right”, I commented, before moving on to the next question. In retrospect, perhaps that was a mistake, since it may have caused some members of the audience to mentally disengage from the meeting.

(4) Are there distinct right-wing and left-wing approaches to the singularity?

Here’s another comment that was raised online after the talk:

I found the second half of the talk to be very disappointing and very right-wing.

And another:

Someone who lists ‘race equality’ as part of the trend towards ignorance has shown very clearly what wing he is on…

In the second half of his talk, Ian outlined changes in norms of beliefs and values. He talked about the growth of “religion substitutes” via a “random walk of values”:

  • Religious texts used to act as a fixed reference for ethical values
  • Secular society has no fixed reference point so values oscillate quickly.
  • 20 years can yield 180 degree shift
  • e.g. euthanasia, sexuality, abortion, animal rights, genetic modification, nuclear energy, family, policing, teaching, authority…
  • Pressure to conform reinforces relativism at the expense of intellectual rigour

A complicating factor here, Ian stated, was that

People have a strong need to feel they are ‘good’. Some of today’s ideological subscriptions are essentially secular substitutes for religion, and demand same suspension of free thinking and logical reasoning.

A few slides later, he listed examples of “the rise of nonsense beliefs”:

e.g. new age, alternative medicine, alternative science, 21st century piety, political correctness

He also commented that “99% are only well-informed on trivia”, such as fashion, celebrity, TV culture, sport, games, and chat virtual environments.

This analysis culminated with a slide that personally strongly resonated with me: a curve of “anti-knowledge” accelerating and overtaking a curve of “knowledge”:

In pursuit of social compliance, we are told to believe things that are known to be false.

With clever enough spin, people accept them and become worse than ignorant.

So there’s a kind of race between “knowledge” and “anti-knowledge”.

One reason this resonated with me is that it seemed like a different angle on one of my own favourite metaphors for the challenges of the next 15-30 years – the metaphor of a dramatic race:

  • One runner in the race is “increasing rationality, innovation, and collaboration”; if this runner wins, the race ends in a positive singularity
  • The other runner in the race is “increasing complexity, rapidly diminishing resources”; if this runner wins, the race ends in a negative singularity.

In the light of Ian’s analysis, I can see that the second runner is aided by the increase of anti-knowledge: over-attachment to magical, simplistic, ultimately misleading worldviews.

However, it’s one thing to agree that “anti-knowledge” is a significant factor in determining the future; it’s another thing to agree which sets of ideas count as knowledge, and which as anti-knowledge! One of Ian’s slides included the following list of “religion substitutes”:

Animal rights, political correctness, pacifism, vegetarianism, fitness, warmism, environmentalism, anti-capitalism

It’s no wonder that many of the audience felt offended. Why list “warmism” (a belief in human-caused global warming), but not “denialism” (denial of human-caused global warming)? Why list “anti-capitalism” but not “free market fundamentalism”? Why list “pacifism” but not “militarism”?

One online comment made a shrewd observation:

Ian raised my curiosity about ‘false beliefs’ (or nonsense beliefs as Ian calls them) as I ‘believe’ we all inhabit different belief systems – so what is true for one person may be false for another… at that exact moment in time.

And things can change. Once upon a time, it was a nonsense belief that the world was round.

There may be 15% of truth in some nonsense beliefs…or possibly even 85% truth. Taking ‘alternative medicine’ as an example of one of Ian’s nonsense beliefs – what if two of the many reasons it was considered nonsense were that (1) it is outside the world (the system) of science and technology and (2) it cannot be controlled by the pharmaceutical companies (perhaps our high priests of today)?

(5) The role of corporations and politicians in the approach to the singularity

One place where the right-wing / left-wing division becomes more acute is the question of whether anything special needs to be done to control the behaviour of corporations (businesses).

One of Ian’s strong positive recommendations, at the end of his presentation, was that scientists and engineers should become more actively involved in educating the general public about issues of technology. Shortly afterward, the question came from the floor: what about actions to educate or control corporations? Ian replied that he had very little to recommend to corporations, over and above his recommendations to the individuals within these corporations.

My own view is different. From my life inside industry, I’ve seen numerous cases of good people who are significantly constrained in their actions by the company systems and metrics in which they find themselves enmeshed.

Indeed, just as people should be alarmed about the prospects of super-AIs gaining too much power, over and above the humans who created them, we should also be alarmed about the powers that super-corporations are accumulating, over and above the powers and intentions of their employees.

The argument to leave corporations alone finds its roots in ideologies of freedom: government regulation of corporations often has undesirable side-effects. Nevertheless, that’s just an argument for being smarter and more effective in how the regulation works – not an argument to abstain from regulation altogether.

The question of the appropriate forms of collaborative governance remains one of the really hard issues facing anyone concerned about the future. Leaving corporations to find their own best solutions is, in my view, very unlikely to be the optimum approach.

In terms of how “laissez-faire” we should be, in the face of potential apocalypse down the road, I agree with the assessment near the end of Jeremy Green’s blogpost:

Pearson’s closing assertion that in the end our politicians will always wake up and pull us back from the brink of any disaster is belied by many examples of civilisations that did not pull back and went right over the edge to destruction.

Endnote:

After the presentation in Birkbeck College ended, around 40-50 of the audience regrouped in a nearby pub, to continue the discussion. The discussion is also continuing, at a different tempo, in the online pages of the London Futurists meetup. Ian Pearson deserves hearty congratulation for stirring up what has turned out to be an enlightening discussion – even though there’s heat in the comments as well as light!

Evidently, the discussion is far from complete…

20 February 2013

The world’s most eminent sociologist highlights the technological singularity

It’s not every day that the world’s most eminent sociologist reveals himself as having an intense interest in the Technological Singularity, and urges that “Everyone should read the books of Ray Kurzweil”. That’s what happened this evening.

The speaker in question was Lord Anthony Giddens, one of whose many claims to fame is his description as “Tony Blair’s guru”.

His biography states that, “According to Google Scholar, he is the most widely cited sociologist in the world today.”

In support of that claim, a 2009 article in the Times Higher Education supplement notes the following:

Giddens trumps Marx…

A list published today by Times Higher Education reveals the most-cited academic authors of books in the humanities…

As one of the world’s pre-eminent sociologists, Anthony Giddens, the Labour peer and former director of the London School of Economics, will be used to academic accolades.

But even he may be pleased to hear that his books are cited more often than those of iconic thinkers such as Sigmund Freud and Karl Marx.

Lord Giddens, now emeritus professor at LSE and a life fellow at King’s College, Cambridge, is the fifth most-referenced author of books in the humanities, according to the list produced by scientific data analysts Thomson Reuters.

The only living scholar ranked higher is Albert Bandura, the Canadian psychologist and pioneer of social learning theory at Stanford University…

Freud enters the list in 11th place. The American linguist and philosopher Noam Chomsky, who is based at the Massachusetts Institute of Technology and whose political books have a broader readership than some of his peers in the list, is 15th…

Lord Giddens is now 75 years old. Earlier this evening, I saw for myself evidence of his remarkable calibre. He gave an hour-long lecture in front of a packed audience at the London School of Economics, without any notes or slides, and without any hesitation, deviation, or verbal infelicity. Throughout, his remarks bristled with compelling ideas. He was equally competent – and equally fluent – when it came to the question-and-answer portion of the event.

LSE Events

The lecture was entitled “Off the edge of history: the world in the 21st century”. From its description on the LSE website, I had already identified it as relevant to many of the themes that I seek to have discussed in the series of London Futurists meetups that I chair:

The risks we face, and the opportunities we have, in the 21st century are in many respects quite different from those experienced in earlier periods of history. How should we analyse and respond to such a world? What is a rational balance of optimism and pessimism? How can we plan for a future that seems to elude our grasp and in some ways is imponderable?

As the lecture proceeded, I was very pleasantly impressed by the sequence of ideas. I append here a lightly edited copy of the verbatim notes I took on my Psion Series 5mx, supplemented by a few additions from the #LSEGiddens tweet stream. Added afterwards: the LSE has made a podcast available of the talk.

My rough notes from the talk follow… (text in italics indicates my parenthetical comments)

This large lecture room is completely full, twenty minutes before the lecture is due to start. I’m glad I arrived early!

Today’s topic is work in progress – he’s writing a book on the same topic, “Off the edge of history”.

  • Note this is a very different thesis from “the end of history”.

His starting point is in the subject of geology – a long way from sociology. He’s been working on climate change for the last seven years. It’s his first time to work so closely with scientists.

Geologists tend to call the present age “the Holocene age” – the last 12,000 years. But a geologist called Paul Crutzen recommended that we should use a different term for the last 200 years or so – we’re now in the Anthropocene age:

  • In this period, human activity strongly influences nature and the environment
  • This re-orients and restructures the world of geology
  • A great deal of what used to be natural, is natural no longer
  • Human beings are invading nature, in a way that has no precedent
  • Even some apparently natural catastrophes, like tsunamis and volcanoes, might be linked to impacts from humans.

We have continuities from previous history (of course), but so many things are different nowadays. One example is the impacts of new forms of biological threat. Disease organisms have skipped from animals to human beings. New disease organisms are being synthesised.

There are threats facing us, which are in no ways extensions of previous threats.

For example, what is the Internet doing to the world? Is it a gigantic new mind? Are you using the mobile phone, or is the mobile phone using you? There’s no parallel from previous periods. Globally connected electronic communications are fundamentally different from what went before.

When you are dealing with risks you’ve never experienced before, you can’t measure them. You’ll only know for sure when it’s too late. We’re on the edge of history because we are dealing with risks we have never faced before.

Just as we are invading nature, we are invading human nature in a way that’s unprecedented.

Do you know about the Singularity? (A smattering of people in the audience raise their hands.) It’s mind-blowing. You should find out about it:

  • It’s based on a mathematical concept
  • It’s accelerating processes of growth, rapidly disappearing to a far off point very different from today.

Everyone should read the books of Ray Kurzweil – who has recently become an Engineering Director at Google.

Kurzweil’s book makes it clear that:

  • Within our lifetimes, human beings will no longer be human beings
  • There are multiple accelerating rates of change in several different disciplines
  • The three main disciplines contributing to the singularity are nanotech, AI, and biotech
  • All are transforming our understanding of the human body and, more importantly, the human mind
  • This is described by the “Law of accelerating returns”
  • Progress is not just linear but geometrical.
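The contrast between linear and geometric progress, which underpins the “law of accelerating returns”, is easy to illustrate numerically. This is a minimal sketch of compounding only; the increment, doubling ratio, and horizon are arbitrary assumptions of mine, not Kurzweil’s figures:

```python
# Contrast linear growth (fixed increment per step) with geometric
# growth (fixed ratio per step) -- the distinction behind claims of
# "accelerating returns". All parameters here are arbitrary.

def linear(start: float, increment: float, steps: int) -> float:
    """Value after adding a fixed increment each step."""
    return start + increment * steps

def geometric(start: float, ratio: float, steps: int) -> float:
    """Value after multiplying by a fixed ratio each step."""
    return start * ratio ** steps

for steps in (10, 20, 30):
    print(steps, linear(1, 1, steps), geometric(1, 2, steps))

# After 30 steps, linear growth has reached 31, while doubling has
# reached over a billion (2**30 = 1,073,741,824).
```

Nothing in the arithmetic guarantees that any real technology follows the geometric curve; the sketch only shows why, if it does, intuitions trained on linear change will badly underestimate the medium term.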

This book opens our minds to multiple possibilities of what it means to be human, as technology penetrates us.

Nanotech is like humans playing God:

  • It’s a level below DNA
  • We can use it to rebuild many parts of the human body, and other artefacts in the world.

Kurzweil states that human beings will develop intelligence which is 100x higher than at present:

  • Because of merging of human bodies with computers
  • Because of the impact of nanotech.

Kurzweil gives this advice: if you are relatively young: live long, in order to live forever:

  • Immortality is no longer a religious concept, it’s now a tangible prospect
  • It could happen in the next 20-40 years.

This is a fantastic expansion of what it means to be human. Importantly, it’s a spread of opportunities and risk.

These were religious notions before. Now we have the real possibility of apocalypse – we’ve had it since the 1950s, when the first thermonuclear weapons were invented. The possibility of immortality has become real too.

We don’t know how to chart these possibilities. None of us know how to fill in that gap.

What science fiction writers were writing 20 years ago, is now in the newspapers everyday. Reading from the Guardian from a couple of days ago:

Paralysed people could get movement back through thought control

Brain implant could allow people to ‘feel’ the presence of infrared light and one day be used to move artificial limbs

Scientists have moved closer to allowing paralysed people to control artificial limbs with their thoughts following a breakthrough in technology…

…part of a series of sessions on advances in brain-machine interfaces, at which other scientists presented a bionic hand that could connect directly to the nerves in a person’s arm and provide sensory feedback of what they were holding.

Until now, neurological prosthetics have largely been demonstrated as a way to restore a loss of function. Last year, a 58-year-old woman who had become paralysed after a stroke demonstrated that she could use a robotic arm to bring a cup of coffee to her mouth and take a sip, just by thinking about it…

In the future…  it might be possible to use prosthetic devices to restore vision – for example, if a person’s visual cortex had been damaged – by training a different part of the brain to process the information.

Or you could even augment normal brain function in non-invasive ways to deliver the information.

We could learn to detect other sorts of signals that we normally don’t see or experience; the perceptual range could increase.

These things are real; these things are happening. There is a kind of geometric advance.

The literature of social scientists has a big division here, between doomsday thinkers and optimists, with respected thinkers in both camps.

Sir Martin Rees is an example of the first category. He wrote a book called “Our final century”:

  • It examines forms of risk that could destroy our society
  • Climate change is a huge existential risk – most people aren’t aware of it
  • Nanotech is another existential risk – grey goo scenario
  • We also have lots of weaponry: drones circulating above the world even as we speak
  • Most previous civilisations have ended in disaster – they subverted themselves
  • For the first time, we have a civilisation on a global scale
  • It could well be our final century.

Optimists include Matt Ridley, a businessman turned scientist, and author of the book “The rational optimist”:

  • Over the course of human civilisation there is progress – including progress in culture, and medical advances.

This is a big division. How do we sort this out? His view: it’s not possible to decide. We need to recognise that we live in a “high opportunity, high risk society”:

  • The level of opportunity and level of risk are both much higher than before
  • But risk and opportunity always intertwine
  • “In every risk there’s an opportunity…” and vice versa
  • We must be aware of the twists and tangles of risk and opportunity – their interpenetration.

Studying this area has led him to change some of his views from before:

  • He now sees the goal of sustainability as a harder thing than before
  • Living within our limits makes sense, but we no longer know what our limits are
  • We have to respect limits, but also recognise that limits can be changed.

For example, could we regard a world population of 9 billion people as an opportunity, rather than just a risk?

  • It would lead us to put lots more focus on food innovation, blue sky tech for agriculture, social reform, etc – all good things.

A few points to help us sort things out:

  1. One must never avoid risk – we live in a world subject to extreme system risk; we mustn’t live in denial of risk in our personal life (like denying the risks of smoking or riding motor cycles) or at a civilisational level
  2. We have to think about the future in a very different way, because the future has become opaque to us; the Enlightenment thought was that we would march in and make sense of history (Marx had similar thoughts), but it turns out that the future is actually opaque – for our personal lives as well as for society (he wonders whether the EU will still exist by the time he finishes his book on the future of the EU!)
  3. We’ll have to learn to backcast rather than forecast – to borrow an idea from the study of climate change. We have to think ahead, and then think back.

This project is the grand task of social sciences in the 21st century.

One more example: the possibility of re-shoring of jobs in the US and EU:

  • 3D printing is an unbelievable technological invention
  • 3D printers can already print shoes
  • A printer in an MIT lab can print whole systems – e.g. in due course a plane which will fly directly out of the computer
  • This will likely produce a revolution in manufacturing – many, many implications.

Final rhetorical question: As we confront this world, should we be pessimists or optimists? This is the same question he used to consider, at the end of the talks he used to give on climate change.

His answer: we should bracket out that opposition; it’s much more important to be rational than either pessimist or optimist:

  • Compare the case of someone with very serious cancer – they need more than wishful thinking. Need rational underpinning of optimism and/or pessimism.

Resounding applause from the audience. Then commence questions and answers.

Q: Are today’s governance structures, at local and national levels, fit to deal with these issues?

A: No. For example, the European Union has proved not to be the vanguard of global governance that we hoped it would be. Climate change is another clear example: twenty years of UN meetings with no useful outcome whatsoever.

Q: Are our human cognitive powers capable to deal with these problems? Is there a role for technology to assist our cognitive powers?

A: Our human powers are facing a pretty difficult challenge. It’s human nature to put off what we don’t have to do today. Like 16-year-olds taking up smoking who can’t really see themselves being 40. Maybe a supermind might be more effective.

Q: Although he has given examples where current governance models are failing, are there any bright spots of hope for governance? (The questioner in this case was me.)

A: There are some hopeful signs for economic governance. Surely bankers will not get away with what they’ve done. Movement to address tax havens (“onslaught”) – bring the money back as well as bringing the jobs back. Will require global co-operation. Nuclear proliferation (Iran, Israel) is as dangerous as climate change. The international community has done quite well with non-proliferation, but it only takes one nuclear war for things to go terribly wrong.

Q: What practical advice would he give to the Prime Minister (or to Ed Miliband)?

A: He supports Ed Miliband trying to restructure capitalism; there are similar moves happening in the US too. However, with global issues like these, any individual prime minister is limited in his influence. For better or for worse, Ray Kurzweil has more influence than any politician!

(Which is a remarkable thing to say, for someone who used to work so closely with Prime Minister Tony Blair…)

16 June 2012

Beyond future shock

Filed under: alienation, books, change, chaos, futurist, Humanity Plus, rejuveneering, robots, Singularity, UKH+ — David Wood @ 3:10 pm

They predicted the “electronic frontier” of the Internet, Prozac, YouTube, cloning, home-schooling, the self-induced paralysis of too many choices, instant celebrities, and the end of blue-collar manufacturing. Not bad for 1970.

That’s the summary, with the benefit of four decades of hindsight, given by Fast Company writer Greg Lindsay, of the forecasts made in the 1970 bestseller “Future Shock” by husband-and-wife authors Alvin and Heidi Toffler.

As Lindsay comments,

Published in 1970, Future Shock made its author Alvin Toffler – a former student radical, welder, newspaper reporter and Fortune editor – a household name. Written with his wife (and uncredited co-author), Heidi Toffler, the book was The World Is Flat of its day, selling 6 million copies and single-handedly inventing futurism…

“Future shock is the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time”, the pair wrote.

And quoting Deborah Westphal, the managing partner of Toffler Associates, in an interview at an event marking the 40th anniversary of the publication of Future Shock, Lindsay notes the following:

In Future Shock, the Tofflers hammered home the point that technology, culture, and even life itself was evolving too fast for governments, policy-makers and regulators to keep up. Forty years on, that message hasn’t changed. “The government needs to understand the dependencies and the convergence of networks through information,” says Westphal. “And there still needs to be some studies done around rates of change and the synchronization of these systems. Business, government, and organizational structures need to be looked at and redone. We’ve built much of the world economy on an industrial model, and that model doesn’t work in an information-centric society. That’s probably the greatest challenge we still face – understanding the old rules don’t apply for the future.”

Earlier this week, another book was published that also draws on Future Shock for inspiration. Again, the authors are a husband-and-wife team, Parag and Ayesha Khanna. And again, the book looks set to redefine key aspects of the futurist endeavour.

This new book is entitled “Hybrid Reality: Thriving in the Emerging Human-Technology Civilization”. The Khannas refer early on to the insights expressed by the Tofflers in Future Shock:

The Tofflers’ most fundamental insight was that the pace of change has become as important as the content of change… The term Future Shock was thus meant to capture our intense anxiety in the face of technology’s seeming ability to accelerate time. In this sense, technology’s true impact isn’t just physical or economic, but social and psychological as well.

One simple but important example follows:

Technologies such as mobile phones can make us feel empowered, but also make us vulnerable to new pathologies like nomophobia – the fear of being away from one’s mobile phone. Fifty-eight percent of millennials would rather give up their sense of smell than their mobile phone.

As befits the theme of speed, the book is a fast read. I downloaded it onto my Kindle on the day of its publication, and have already read it all the way through twice. It’s short, but condensed. The text contains many striking turns of phrase, loaded with several layers of meaning, which repay several rethinks. That’s the best kind of sound-bite.

Despite its short length, there are too many big themes in the book for me to properly summarise them here. The book portrays an optimistic vision, alongside a series of challenges and risks. As illustrations, let me pick out a selection of phrases, to convey some of the flavour:

The cross-pollination of leading-edge sectors such as information technology, biotechnology, pervasive computing, robotics, neuroscience, and nanotechnology spells the end of certain turf wars over nomenclature. It is neither the “Bio Age” nor the “Nano Age” nor the “Neuro Age”, but the hybrid of all of these at the same time…

Our own relationship to technology is moving beyond the instrumental to the existential. There is an accelerating centripetal dance between what technologies are doing outside us and inside us. Externally, technology no longer simply processes our instructions on a one-way street. Instead, it increasingly provides intelligent feedback. Internally, we are moving beyond using technology only to dominate nature towards making ourselves the template for technology, integrating technologies within ourselves physically. We don’t just use technology; we absorb it

The Hybrid Age is the transition period between the Information Age and the moment of Singularity (when machines surpass human intelligence) that inventor Ray Kurzweil estimates we may reach by 2040 (perhaps sooner). The Hybrid Age is a liminal phase in which we cross the threshold toward a new mode of arranging global society…

You may continue to live your life without understanding the implications of the still-distant Singularity, but you should not underestimate how quickly we are accelerating into the Hybrid Age – nor delay in managing this transition yourself

The dominant paradigm to explain global change in the Hybrid Age will be geotechnology. Technology’s role in shaping and reshaping the prevailing order, and accelerating change between orders, forces us to rethink the intellectual hegemony of geopolitics and geoeconomics…

It is geotechnology that is the underlying driver of both: Mastery in the leading technology sectors of any era determines who leads in geoeconomics and dominates in geopolitics…

The shift towards a geotechnology paradigm forces us to jettison centuries of foundational assumptions of geopolitics. The first is our view on scale: “Bigger is better” is no longer necessarily true. Size can be as much a liability as an asset…

We live and die by our Technik, the capacity to harness emerging technologies to improve our circumstances…

We will increasingly differentiate societies on the basis not of their regime type or income, but of their capacity to harness technology. Societies that continuously upgrade their Technik will thrive…

Meeting the grand challenge of improving equity on a crowded planet requires spreading Technik more than it requires spreading democracy

And there’s lots more, applying the above themes to education, healthcare, “better than new” prosthetics, longevity and rejuvenation, 3D printing, digital currencies, personal entrepreneurship and workforce transformation, the diffusion of authority, the rise of smart cities and their empowered “city-zens”, augmented reality and enhanced personal avatars, robots and “avoiding robopocalypse”, and the prospect for a forthcoming “Pax Technologica”.

It makes me breathless just remembering all these themes – and how they time and again circle back on each other.

Footnote: Readers who are in the vicinity of London next Saturday (23rd June) are encouraged to attend the London Futurist / Humanity+ UK event “Hybrid Reality, with Ayesha Khanna”. Click on the links for more information.
