
6 November 2024

A bump on the road – but perhaps only a bump

Filed under: AGI, politics, risks — David Wood @ 3:56 pm

How will the return of Donald Trump to the US White House change humanity’s path toward safe transformative AI and sustainable superabundance?

Of course, the new US regime will make all kinds of things different. But at the macro level, arguably nothing fundamental changes. The tasks that engaged citizens can and should be pursuing remain the same.

At that macro level, the path toward safe sustainable superabundance runs roughly as follows. Powerful leaders, all around the world, need to appreciate that:

  1. For each of them, it is in their mutual self-interest to constrain the development and deployment of what could become catastrophically dangerous AI superintelligence
  2. The economic and humanitarian benefits that they each hope could be delivered by advanced AI can in fact be delivered by AI which is restricted from having features of general intelligence; that is, utility AI is all that we need
  3. There are policy measures which can be adopted, around the world, to prevent the development and deployment of catastrophically dangerous AI superintelligence – for example, measures to control the spread and use of vast computing resources
  4. There are measures of monitoring and auditing which can also be adopted, around the world, to ensure the strict application of the agreed policy measures – and to prevent malign action by groups or individuals that have, so far, failed to sign up to the policies
  5. All of the above can be achieved without any damaging loss of the leaders’ own sovereignty: these leaders can remain masters within their own realms, provided that the above basic AI safety framework is adopted and maintained
  6. All of the above can be achieved in a way that supports evolutionary changes in the AI safety framework, as more insight is obtained; in other words, this system is agile rather than static
  7. Even though the above safety framework is yet to be properly developed and agreed, there are plenty of ideas for how it can be rapidly developed, so long as that project is given sufficient resources.

These agreements will need to include politicians with very different outlooks on the world. But as with the negotiations over other global threats – nuclear proliferation, bioweapons, gross damage to the environment – politicians can reach across vast philosophical or ideological gulfs to forge agreement when it really matters.

That’s especially the case when the threat of a bigger shared “enemy”, so to speak, is increasingly evident.

AI superintelligence is not yet sitting at the table with global political leaders. But it will soon become clear that human politicians (as well as human leaders in other walks of life) are going to lose understanding, and lose control, of the AI systems being developed by corporations and other organisations that are sprinting at full speed.

However, as with responses to other global threats, there’s a collective action problem. Who is going to be first to make the necessary agreements, to sign up to them, and to place the AI development and deployment systems within their realms under the remote supervision of the new AI safety framework?

There are plenty of countries whose leaders may say: "My country is ready to join that coalition." But unless these are the countries which control the resources that will be used to develop and deploy the potentially catastrophic AI superintelligence systems, such gestures have little utility.

To paraphrase Benito Mussolini, it’s not sufficient for the sparrows to request peace and calm: the eagles need to wholeheartedly join in too.

Thus, the agreement needs to start with the US and with China, and to extend rapidly to include the likes of Japan, the EU, Russia, Saudi Arabia, Israel, India, the UK, and both South and North Korea.

Some of these countries will no doubt initially resist making any such agreement. That’s where two problems need to be solved:

  • Ensuring the leaders in each country understand the arguments for points 1 through 7 listed above – starting with point 1 (the one that is most essential, to focus minds)
  • Assembling at least an initial group of signatories.

The fact that it is Donald Trump who will be holding the reins of power in Washington DC, rather than Joe Biden or Kamala Harris, introduces its own new set of complications. However, the fundamentals, as I have sketched them above, remain the same.

The key tasks for AI safety activists, therefore, remain:

  • Deepening public understanding of points 1 to 7 above
  • Where there are gaps in the details of these points, ensuring that sufficient research takes place to address these gaps
  • Building bridges to powerful leaders, everywhere, regardless of their political philosophies, and finding ways to gain their support – so that they, in turn, can become catalysts for the next stage of global education.
