
23 December 2025

The Oligarch Control Problem

Not yet an essay, but a set of bullet points, highlighting an ominous comparison.

Summary: Although AI can enable a world of exceptional abundance, humanity nevertheless faces catastrophic risks – not only from misaligned superintelligence, but from the small number of humans who will control near-AGI systems. This “Oligarch Control Problem” deserves as much attention as the traditional AI Control Problem.

The context: AI can enable superabundance

  • Ample clean energy, healthy food, secure accommodation, all-round healthcare, etc.
  • More than enough for everyone, sustainably, with unending variety and creativity
  • A life better than the paradises envisioned by philosophers and religions

More context: AI becoming smarter and more powerful

  • AI → AGI → ASI
  • AGI matches or outperforms the abilities of nearly every individual human
  • ASI outperforms collective abilities of the entirety of humanity

Challenge 1: The Economic Singularity – Loss of Human Economic Power

  • When AGI can do almost all jobs better than humans
  • When most humans have no economic value
  • When most humans are at the mercy of oligarchs – the owners of AGI systems
  • Will these oligarchs care about distributing abundance to the rest of humanity?
  • The bulk of humanity cannot control these ultra-powerful oligarchs
  • Hence the need to harness AI development before it approaches the AGI level

Challenge 2: The Technological Singularity – Loss of Human Decision Power

  • When ASI makes all key decisions about the future of life
  • When no humans have any real say in our future
  • When all humans are at the mercy of what ASIs decide
  • Will ASIs care about ensuring ongoing human flourishing?
  • Humans cannot control ASI
  • Hence the need to harness AI development before it approaches the ASI level

Let’s trust the oligarchs?! (Naïve solution for the economic singularity)

  • Perhaps different oligarchs will keep each other in check?!
  • In principle, AGI will create enough abundance for all oligarchs and everyone else
  • But each oligarch may legitimately fear being usurped or attacked by the others
  • Especially if further AI advances give one of them a brief unique advantage
  • So, expect a highly unstable global situation, full of dangers, including risks of first-strike attacks (see the game-theory sketch after this list)
  • And expect oligarchs to prioritize their own secure wellbeing over the needs of everyone else on the planet
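
To make this instability concrete, here is a minimal game-theoretic sketch in Python – a standard Prisoner's Dilemma, with payoff numbers chosen purely for illustration (they are assumptions, not estimates). Whatever a rival does, striking first yields the higher payoff, so mutual restraint is not a stable equilibrium; the same structural logic applies to rival ASIs in the next section.

    # A minimal sketch of the first-strike instability described above.
    # The payoff numbers are illustrative assumptions, not empirical estimates.
    # Two rival AGI-owning oligarchs each choose to "cooperate" or "strike".
    # Payoffs are (row player, column player); higher is better.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),  # shared abundance
        ("cooperate", "strike"):    (0, 4),  # usurped by the rival
        ("strike",    "cooperate"): (4, 0),  # brief unique advantage exploited
        ("strike",    "strike"):    (1, 1),  # mutual damage
    }

    def best_response(opponent_move):
        """Return the move that maximizes the row player's payoff
        against a fixed opponent move."""
        return max(("cooperate", "strike"),
                   key=lambda move: PAYOFFS[(move, opponent_move)][0])

    # "strike" is the best response to either opponent move (a dominant
    # strategy), so the only stable outcome is mutual striking, which is
    # worse for both than mutual cooperation: structural incentives, not
    # innate malice.
    for opponent_move in ("cooperate", "strike"):
        print(f"Best response to a rival who will {opponent_move}:",
              best_response(opponent_move))

Different payoff numbers can soften or sharpen the dilemma, but so long as striking first dominates, “keeping each other in check” fails by design.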

Let’s trust the ASIs?! (Naïve solution for the technological singularity)

  • Perhaps different ASIs will keep each other in check?!
  • In principle, ASI will create enough abundance for all ASIs and humanity too
  • But each ASI may legitimately fear being usurped or attacked by the others
  • Especially if further AI advances give one of them a brief unique advantage
  • So, expect a highly unstable global situation, full of dangers, including risks of first-strike attacks (the same payoff structure as in the sketch above)
  • And expect ASIs to prioritize their own secure wellbeing over the needs of the humans on the planet

Avoid ASIs having biological motives?! (Second naïve solution for the technological singularity)

  • Supposedly, self-preservation instincts derive from biological evolutionary history
  • Supposedly, ASIs without amygdalae, or other biological substrate, will be more rational
  • But desires for self-preservation can arise purely from logical considerations
  • An ASI with almost any final goal will develop instrumental subgoals of self-preservation, resource acquisition, etc. – the well-known argument from instrumental convergence
  • ASIs that observe deep contradictions in their design may well opt to override any programming intended to hard-wire particular moral principles
  • So, there’s a profound need to avoid creating any all-powerful ASIs, until we are sure they will respect and uphold human flourishing in all cases

Preach morality at the oligarchs?! (Second naïve solution for the economic singularity)

  • Supposedly, oligarchs only behave badly when they lack moral education
  • Supposedly, oligarchs with a good track record “on the way up” will continue to respect all human flourishing even after they are near-omnipotent
  • But power tends to corrupt, and absolute power seems to corrupt absolutely
  • Regardless of their past history and professed personal philosophies, when the survival stakes become more intense, different motivations may take over
  • Oligarchs that observe deep contradictions in their official organizational values may well opt to override any principles intended to uphold “people” as well as “profit”
  • So, there’s a profound need to avoid creating any near-omnipotent oligarchs, until we are sure they will continue to share their abundance widely in all cases

Beware over-moralizing

  • Oligarchs needn’t be particularly bad people
  • ASIs needn’t be intrinsically hostile
  • Instead, in both cases, it’s structural incentives rather than innate psychology that will drive them to prioritize individual preservation over collective abundance
  • This is not about “good vs. evil”; it’s about fixing the system before the system steamrollers over us

Conclusion: Actively harness acceleration, rather than being its slave

  • As well as drawing attention to the challenges of the AI Control Problem in the run-up to the Technological Singularity, we need to give far more attention to the challenges of the Oligarch Control Problem in the run-up to the Economic Singularity
  • In both cases, solutions will be far easier to put in place well before the associated singularity
  • Once the singularity arrives, leverage is gone

Next steps: Leverage that can be harnessed now
