dw2

29 May 2025

Governance of the transition to AGI: Time to act

As reported yesterday by The Millennium Project, a high-level expert panel convened by the UN Council of Presidents of the General Assembly (UNCPGA) has released its final report on Artificial General Intelligence (AGI). The report is titled “Governance of the Transition to Artificial General Intelligence (AGI): Urgent Considerations for the UN General Assembly”. It’s well worth reading!

About the UNCPGA

What’s the UNCPGA, you may ask.

Founded in 1992, this Council consists of all former Presidents of the UN General Assembly. I think of it as akin to the House of Lords in the UK, where former members of the House of Commons often display more wisdom and objectivity than when they were embedded in the yah-boo tribal politics of day-to-day government and opposition. These former Presidents hold annual meetings to determine how they can best advance the goals of the UN and support the Office of the current President of the UNGA.

At their 2024 meeting in Seoul, the UNCPGA decided that a global panel of experts on AGI should be convened. Here’s an extract from the agreement reached at that meeting:

The Seoul Declaration 2024 of the UNCPGA calls for a panel of artificial general intelligence (AGI) experts to provide a framework and guidelines for the UN General Assembly to consider in addressing the urgent issues of the transition to artificial general intelligence (AGI).

This work should build on and avoid duplicating the extensive efforts on AI values and principles by UNESCO, OECD, G20, G7, Global Partnership on AI, and Bletchley Declaration, and the recommendations of the UN Secretary-General’s High-Level Advisory Body on AI, UN Global Digital Compact, the International Network of AI Safety Institutes, European Council’s Framework Convention on AI and the two UN General Assembly Resolutions on AI. These have focused more on narrower forms of AI. There is currently a lack of similar attention to AGI.

AI is well known to the world today and often used but AGI is not and does not exist yet. Many AGI experts believe it could be achieved within 1-5 years and eventually could evolve into an artificial super intelligence beyond our control. There is no universally accepted definition of AGI, but most AGI experts agree it would be a general-purpose AI that can learn, edit its code, and act autonomously to address many novel problems with novel solutions similar to or beyond human abilities. Current AI does not have these capabilities, but the trajectory of technical advances clearly points in that direction…

The report should identify the risks, threats, and opportunities of AGI. It should focus on raising awareness of mobilizing the UN General Assembly to address AGI governance in a more systematic manner. It is to focus on AGI that has not yet been achieved, rather than current forms of more narrow AI systems. It should stress the urgency of addressing AGI issues as soon as possible considering the rapid developments of AGI, which may present serious risks to humanity as well as extraordinary benefits to humanity.

The panel was duly formed, with the following participants:

  • Jerome Glenn (USA), Chair
  • Renan Araujo (Brazil)
  • Yoshua Bengio (Canada)
  • Joon Ho Kwak (Republic of Korea)
  • Lan Xue (China)
  • Stuart Russell (UK and USA)
  • Jaan Tallinn (Estonia)
  • Mariana Todorova (Bulgaria)
  • José Jaime Villalobos (Costa Rica)

(For biographical details of the participants, the mandate they were given following the Seoul event, and the actual report they delivered, click here.)

The panel was tasked with preparing and delivering its report at the 2025 gathering of the UNCPGA, which took place in April in Bratislava. Following a positive reception at that event, the report is now being made public.

Consequences if no action is taken

The report contains the following headline: “Urgency for UN General Assembly action on AGI governance and likely consequences if no action is taken”:

Amidst the complex geopolitical environment and in the absence of cohesive and binding international norms, a competitive rush to develop AGI without adequate safety measures is increasing the risk of accidents or misuse, weaponization, and existential failures. Nations and corporations are prioritizing speed over security, undermining national governing frameworks, and making safety protocols secondary to economic or military advantage. Since many forms of AGI from governments and corporations could emerge before the end of this decade, and since establishing national and international governance systems will take years, it is urgent to begin the necessary procedures to prevent the following outcomes…

The report lists six outcomes that urgently require action to avoid:

1. Irreversible Consequences—Once AGI is achieved, its impact may be irreversible. With many frontier forms of AI already showing deceptive and self-preservation behavior, and the push towards more autonomous, interacting, self-improving AIs integrated with infrastructures, the impacts and trajectory of AGI can plausibly end up being uncontrollable. If that happens, there may be no way to return to a state of reliable human oversight. Proactive governance is essential to ensure that AGI will not cross our red lines, leading to uncontrollable systems with no clear way to return to human control.

2. Weapons of Mass Destruction—AGI could enable some states and malicious non-state actors to build chemical, biological, radiological, and nuclear weapons. Moreover, large, AGI-controlled swarms of lethal autonomous weapons could themselves constitute a new category of WMDs.

3. Critical Infrastructure Vulnerabilities—Critical national systems (e.g., energy grids, financial systems, transportation networks, communication infrastructure, and healthcare systems) could be subject to powerful cyberattacks launched by or with the aid of AGI. Without national deterrence and international coordination, malicious non-state actors from terrorists to transnational organized crime could conduct attacks at a large scale.

4. Power Concentration, Global Inequality, and Instability—Uncontrolled AGI development and usage could exacerbate wealth and power disparities on an unprecedented scale. If AGI remains in the hands of a few nations, corporations, or elite groups, it could entrench economic dominance and create global monopolies over intelligence, innovation, and industrial production. This could lead to massive unemployment, widespread disempowerment affecting legal underpinnings, loss of privacy, and collapse of trust in institutions, scientific knowledge, and governance. It could undermine democratic institutions through persuasion, manipulation, and AI-generated propaganda, and heighten geopolitical instability in ways that increase systemic vulnerabilities. A lack of coordination could result in conflicts over AGI resources, capabilities, or control, potentially escalating into warfare. AGI will stress existing legal frameworks: many new and complex issues of intellectual property, liability, human rights, and sovereignty could overwhelm domestic and international legal systems.

5. Existential Risks—AGI could be misused to create mass harm or developed in ways that are misaligned with human values; it could even act autonomously beyond human oversight, evolving its own objectives according to self-preservation goals already observed in current frontier AIs. AGI might also seek power as a means to ensure it can execute whatever objectives it determines, regardless of human intervention. National governments, leading experts, and the companies developing AGI have all stated that these trends could lead to scenarios in which AGI systems seek to overpower humans. These are not far-fetched science fiction hypotheticals about the distant future—many leading experts consider that these risks could all materialize within this decade, and their precursors are already occurring. Moreover, leading AI developers have no viable proposal so far for preventing these risks with high confidence.

6. Loss of Extraordinary Future Benefits for All of Humanity—Properly managed AGI promises improvements in all fields, for all peoples, from personalized medicine, curing cancer, and cell regeneration, to individualized learning systems, ending poverty, addressing climate change, and accelerating scientific discoveries with unimaginable benefits. Ensuring such a magnificent future for all requires global governance, which begins with improved global awareness of both the risks and benefits. The United Nations is critical to this mission.

In case you think these scenarios are unfounded fantasies, I encourage you to read the report itself, where the experts provide references for further reading.

The purpose envisioned for UN governance

Having set out the challenges, the report proceeds to propose the purpose to be achieved by UN governance of the transition to AGI:

Given that AGI might well be developed within this decade, it is both scientifically and ethically imperative that we build robust governance structures to prepare both for the extraordinary benefits and extraordinary risks it could entail.

The purpose of UN governance in the transition to AGI is to ensure that AGI development and usage are aligned with global human values, security, and development. This involves:

1) Advancing AI alignment and control research to identify technical methods for steering and/or controlling increasingly capable AI systems;

2) Providing guidance for the development of AGI—establishing frameworks to ensure AGI is developed responsibly, with robust security measures, transparency, and in alignment with human values;

3) Developing governance frameworks for the deployment and use of AGI—preventing misuse, ensuring equitable access, and maximizing its benefits for humanity while minimizing risks;

4) Fostering future visions of beneficial AGI—new frameworks for social, environmental, and economic development; and

5) Providing a neutral, inclusive platform for international cooperation—setting global standards, building an international legal framework, and creating incentives for compliance; thereby, fostering trust among nations to guarantee global access to the benefits of AGI.

Actions recommended

The report proceeds to offer four recommendations for further consideration during a UN General Assembly session specifically on AGI:

A. Global AGI Observatory: A Global AGI Observatory is needed to track progress in AGI-relevant research and development and provide early warnings on AI security to Member States. This Observatory should leverage the expertise of other UN efforts such as the Independent International Scientific Panel on AI created by the Global Digital Compact and the UNESCO Readiness Assessment Methodology.

B. International System of Best Practices and Certification for Secure and Trustworthy AGI: An international system of best practices, together with an associated certification scheme, is needed to guide and verify the secure and trustworthy development and deployment of AGI.

C. UN Framework Convention on AGI: A Framework Convention on AGI is needed to establish shared objectives and flexible protocols to manage AGI risks and ensure equitable global benefit distribution. It should define clear risk tiers requiring proportionate international action, from standard-setting and licensing regimes to joint research facilities for higher-risk AGI, and red lines or tripwires on AGI development. A Convention would provide the adaptable institutional foundation essential for globally legitimate, inclusive, and effective AGI governance, minimizing global risks and maximizing global prosperity from AGI.

D. Feasibility Study on a UN AGI Agency: Given the breadth of measures required to prepare for AGI and the urgency of the issue, steps are needed to investigate the feasibility of a UN agency on AGI, ideally in an expedited process. Something like the IAEA has been suggested, understanding that AGI governance is far more complex than nuclear energy; and hence, requiring unique considerations in such a feasibility study.

What happens next

I’m on record as being pessimistic that the UNGA will ever pay sufficient attention to the challenges of governing the transition to AGI. (See the section “The collapse of cooperation is nigh” in this recent essay of mine.)

But I’m also on record as seeing optimistic scenarios too, in which humanity “chooses cooperation, not chaos”.

What determines whether international bodies such as the UN will take sufficient action – or whether, instead, insightful reports are left to gather dust as the body focuses on virtue signalling?

There are many answers to that question, but for now, I’ll say just this. It’s up to you. And to me. And to all of us.

That is, each of us has the responsibility to reach out, directly or indirectly, to the teams informing the participants at the UN General Assembly. In other words, it’s up to us to find ways to catch the attention of the foreign ministries in our countries, so that they demand:

  • Adequate timetabling at the UNGA for the kind of discussion that the UNCPGA report recommends
  • Appropriate follow-up: actions, not just words

That may sound daunting, but a fine piece of advice has recently been shared online by Leticia García Martínez, Policy Advisor at ControlAI. Her article is titled “What We Learned from Briefing 70+ Lawmakers on the Threat from AI” and I recommend that you read it carefully. It is full of pragmatic suggestions that are grounded in recent experience.

ControlAI are gathering signatures on a short petition:

Nobel Prize winners, AI scientists, and CEOs of leading AI companies have stated that mitigating the risk of extinction from AI should be a global priority.

Specialised AIs – such as those advancing science and medicine – boost growth, innovation, and public services. Superintelligent AI systems would compromise national and global security.

The UK can secure the benefits and mitigate the risks of AI by delivering on its promise to introduce binding regulation on the most powerful AI systems.

Happily, this petition has good alignment with the report to the UNCPGA:

  • Support for the remarkable benefits possible from AI
  • Warnings about the special risks from AGI or superintelligent AI
  • A determination to introduce binding regulation.

Politicians continue to be added to ControlAI’s campaign webpage as supporters of this petition.

The next thing that needs to happen in the UK parliament is for its APPG (All Party Parliamentary Group) on AI to devote sufficient time to AGI / superintelligence. Regrettably, up till now, it has far too often sidestepped that issue, focusing instead on issues of today’s AI rather than the supercharged issues of AGI. Frankly, that reflects a failure of vision and a prevalence of groupthink.

Hopefully, as the advisors to the APPG-AI read the UNCPGA report, they’ll be jolted out of their complacency.

It’s time to act. Now.

Postscript: Jerome Glenn visiting London

Jerome (Jerry) Glenn, the chair of the expert panel that produced this report, and who is also the founder and executive director of the Millennium Project, will be visiting London on the weekend of Saturday 14th June.

There will be a number of chances for people in and around London to join discussions with Jerry. That includes a session from 2pm to 4pm on that Saturday, “The Future of AI: Issues, Opportunities, and Geopolitical Synergies”, as well as a session in the morning “State of the Future 20.0”, and an open-ended discussion in the early evening, “The Future – Where Next?”.

For more details of these events, and to register to attend, click here.
