
23 December 2025

The Oligarch Control Problem

Not yet an essay, but a set of bullet points, highlighting an ominous comparison.

Summary: Although AI can enable a world of exceptional abundance, humanity nevertheless faces catastrophic risks – not only from misaligned superintelligence, but also from the small number of humans who will control near-AGI systems. This “Oligarch Control Problem” deserves as much attention as the traditional AI Control Problem.

The context: AI can enable superabundance

  • Ample clean energy, healthy food, secure accommodation, all-round healthcare, etc.
  • More than enough for everyone, sustainably, with unending variety and creativity
  • A life better than the paradises envisioned by philosophers and religions

More context: AI becoming smarter and more powerful

  • AI → AGI → ASI
  • AGI matches or outperforms the abilities of nearly every individual human
  • ASI outperforms the collective abilities of humanity as a whole

Challenge 1: The Economic Singularity – Loss of Human Economic Power

  • When AGI can do almost all jobs better than humans
  • When most humans have no economic value
  • When most humans are at the mercy of oligarchs – the owners of AGI systems
  • Will these oligarchs care about distributing abundance to the rest of humanity?
  • The bulk of humanity cannot control these ultra-powerful oligarchs
  • Hence the need to harness AI development before it approaches the AGI level

Challenge 2: The Technological Singularity – Loss of Human Decision Power

  • When ASI makes all key decisions about the future of life
  • When no humans have any real say in our future
  • When all humans are at the mercy of whatever ASIs decide
  • Will ASIs care about ensuring ongoing human flourishing?
  • Humans cannot control ASI
  • Hence the need to harness AI development before it approaches the ASI level

Let’s trust the oligarchs?! (Naïve solution for the economic singularity)

  • Perhaps different oligarchs will keep each other in check?!
  • In principle, AGI will create enough abundance for all oligarchs and everyone else
  • But oligarchs may legitimately fear being usurped or attacked by each other
  • Especially if further AI advances give one of them a brief, unique advantage
  • So, expect a highly unstable global situation, full of dangers, including risks of first-strike attacks (see the toy model sketched after this list)
  • And expect oligarchs to prioritize their own secure wellbeing over the needs of all humans elsewhere on the planet
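To make the first-strike logic concrete, here is a minimal toy model in Python – an illustrative sketch of my own, not part of the original argument. The players, the moves and the payoff numbers are all assumptions, chosen only to encode the bullets above: shared abundance if both hold back, a brief advantage for whoever strikes first, and catastrophe for whoever is struck while holding back. The same payoff structure applies, with rival ASIs in place of rival oligarchs, to the section that follows.

    from itertools import product

    # Each rival AGI owner chooses to "hold back" or "strike first".
    HOLD, STRIKE = "hold", "strike"

    # payoffs[(move_A, move_B)] = (payoff to A, payoff to B) – hypothetical numbers
    payoffs = {
        (HOLD, HOLD):     (10, 10),     # both hold back: shared abundance
        (STRIKE, HOLD):   (12, -100),   # striker gains a brief advantage; victim is ruined
        (HOLD, STRIKE):   (-100, 12),
        (STRIKE, STRIKE): (-50, -50),   # mutual first strikes: mutual ruin
    }

    def best_response(opponent_move, player):
        """The move that maximizes this player's payoff against a fixed opponent move."""
        def my_payoff(move):
            profile = (move, opponent_move) if player == 0 else (opponent_move, move)
            return payoffs[profile][player]
        return max((HOLD, STRIKE), key=my_payoff)

    # A profile is a Nash equilibrium if each move is a best response to the other.
    equilibria = [(a, b) for a, b in product((HOLD, STRIKE), repeat=2)
                  if a == best_response(b, 0) and b == best_response(a, 1)]
    print(equilibria)   # [('strike', 'strike')]

Under these assumed payoffs, the only stable outcome is mutual striking, even though both players would prefer shared abundance: the instability comes from the incentive structure, not from any particular choice of numbers or any malice in the players.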

Let’s trust the ASIs?! (Naïve solution for the technological singularity)

  • Perhaps different ASIs will keep each other in check?!
  • In principle, ASI will create enough abundance for all ASIs and humanity too
  • But ASIs may legitimately fear being usurped or attacked by each other
  • Especially if further AI advances give one of them a brief, unique advantage
  • So, expect a highly unstable global situation, full of dangers, including risks of first-strike attacks
  • And expect ASIs to prioritize their own secure wellbeing over the needs of the humans on the planet

Avoid ASIs having biological motives?! (Second naïve solution for the technological singularity)

  • Supposedly, self-preservation instincts derive from biological evolutionary history
  • Supposedly, ASIs without amygdalae, or other biological substrate, will be more rational
  • But desires for self-preservation can arise purely from logical considerations
  • An ASI with any goal at all will tend to develop subgoals of self-preservation, resource acquisition, and so on – the familiar argument from instrumental convergence
  • ASIs that observe deep contradictions in their design may well opt to override any programming intended to hard-wire particular moral principles
  • So, there’s a profound need to avoid creating any all-powerful ASIs, until we are sure they will respect and uphold human flourishing in all cases

Preach morality at the oligarchs?! (Second naïve solution for the economic singularity)

  • Supposedly, oligarchs only behave badly when they lack moral education
  • Supposedly, oligarchs with a good track record “on the way up” will continue to respect all human flourishing even after they are near-omnipotent
  • But power tends to corrupt, and absolute power seems to corrupt absolutely
  • Regardless of their past history and professed personal philosophies, when the survival stakes become more intense, different motivations may take over
  • Oligarchs that observe deep contradictions in their official organizational values may well opt to override any principles intended to uphold “people” as well as “profit”
  • So, there’s a profound need to avoid creating any near-omnipotent oligarchs, until we are sure they will continue to share their abundance widely in all cases

Beware over-moralizing

  • Oligarchs needn’t be particularly bad people
  • ASIs needn’t be intrinsically hostile
  • Instead, in both cases, it’s structural incentives rather than innate psychology that will drive them to prioritize individual preservation over collective abundance
  • This is not about “good vs. evil”; it’s about fixing the system before the system steamrollers us

Conclusion: Actively harness acceleration, rather than being its slave

  • As well as drawing attention to the challenges of the AI Control Problem in the run-up to the Technological Singularity, we need to pay far more attention to the challenges of the Oligarch Control Problem in the run-up to the Economic Singularity
  • In both cases, solutions will be far easier well before the associated singularity
  • Once either singularity arrives, that leverage is gone

Next steps: Leverage that can be harnessed now

29 September 2018

Preview: Assessing the risks from super-intelligent AI

Filed under: AGI, presentation — David Wood @ 1:14 am

The following video gives a short preview of the Funzing talk on “Assessing the risks from super-intelligent AI” that I’ll be giving shortly:

Note: the music in this video is “Berlin Approval” from Jukedeck, a company that is “building tools that use cutting-edge musical artificial intelligence to assist creativity”. Create your own at http://jukedeck.com.

Transcript of the video:

Welcome. My name is David Wood, and I’d like to tell you about a talk I give for Funzing.

This talk looks at the potential rapid increase in the ability of Artificial Intelligence, also known as AI.

AI is everywhere nowadays, and it is, rightly, getting a lot of attention. But the AI of a few short years in the future could be MUCH more powerful than today’s AI. Is that going to be a good thing, or a bad thing?

Some people, like the entrepreneur Elon Musk, or the physicist Stephen Hawking, say we should be very worried about the growth of super artificial intelligence. It could be the worst thing that ever happened to humanity, they say. Without anyone intending it, we could all become the victims of some horrible bugs or design flaws in super artificial intelligence. You may have heard of the “blue screen of death”, when Windows crashes. Well, we could all be headed to some kind of “blue screen of megadeath”.

Other people, like the Facebook founder Mark Zuckerberg, say that it’s “irresponsible” to worry about the growth of super AI. Let’s hurry up and build better AI, they say, so we can use that super AI to solve major outstanding human problems like cancer, climate change, and economic inequality.

A third group of people say that discussing the rise of super AI is a distraction and it’s premature to do so now. It’s nothing we need to think about any time soon, they say. Instead, there are more pressing short-term issues that deserve our attention, like hidden biases in today’s AI algorithms, or the need to retrain people to change their jobs more quickly in the wake of the rise of automation.

In my talk, I’ll be helping you to understand the strengths and weaknesses of all three of these points of view. I’ll give reasons why, in as little as ten years, we could, perhaps, reach a super AI that goes way beyond human capability in every aspect. I’ll describe five ways in which that super AI could go disastrously wrong, due to lack of sufficient forethought and coordination about safety. And I’ll be reviewing some practical initiatives for how we can increase the chance of the growth of super AI being a very positive development for humanity, rather than a very negative one.

People who have seen my talk before have said that it’s easy to understand, it’s engaging, it’s fascinating, and it provides “much to think about”.

What makes my approach different to others who speak on this subject is the wide perspective I can apply. This comes from the twenty-five years in which I was at the heart of the mobile computing and smartphone industries, during which time I saw at close hand the issues with developing and controlling very complicated system software. I also bring ten years of more recent experience, as chair of London Futurists, running meetings at which the growth of AI has often been discussed by world-leading thinkers.

I consider myself a real-world futurist: I take the human and political dimensions of technology very seriously. I also consider myself to be a radical futurist, since I believe that the not-so-distant future could be very different from the present. And we need to think hard about it beforehand, to decide if we like that outcome or not.

The topic of super AI is too big and important to leave to technologists, or to business people. There are a lot of misunderstandings around, and my talk will help you see the key issues and opportunities more clearly than before. I look forward to seeing you there! Thanks for listening.
