dw2

6 November 2024

A bump on the road – but perhaps only a bump

Filed under: AGI, politics, risks — David Wood @ 3:56 pm

How will the return of Donald Trump to the US White House change humanity’s path toward safe transformative AI and sustainable superabundance?

Of course, the new US regime will make all kinds of things different. But at the macro level, arguably nothing fundamental changes: the tasks that engaged citizens can and should be pursuing remain the same.

At that macro level, the path toward safe sustainable superabundance runs roughly as follows. Powerful leaders, all around the world, need to appreciate that:

  1. For each of them, it is in their mutual self-interest to constrain the development and deployment of what could become catastrophically dangerous AI superintelligence
  2. The economic and humanitarian benefits that they each hope could be delivered by advanced AI can in fact be delivered by AI which is restricted from having features of general intelligence; that is, utility AI is all that we need
  3. There are policy measures which can be adopted, around the world, to prevent the development and deployment of catastrophically dangerous AI superintelligence – for example, measures to control the spread and use of vast computing resources (a sketch of the kind of check such measures might involve follows this list)
  4. There are measures of monitoring and auditing which can also be adopted, around the world, to ensure the strict application of the agreed policy measures – and to prevent malign action by groups or individuals that have, so far, failed to sign up to the policies
  5. All of the above can be achieved without any damaging loss of the leaders’ own sovereignty: these leaders can remain masters within their own realms, provided that the above basic AI safety framework is adopted and maintained
  6. All of the above can be achieved in a way that supports evolutionary changes in the AI safety framework, as more insight is obtained; in other words, this system is agile rather than static
  7. Even though the above safety framework is yet to be properly developed and agreed, there are plenty of ideas for how it can be rapidly developed, so long as that project is given sufficient resources.
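To make the compute-control idea in point 3 a little more concrete, here is a minimal sketch, in Python, of the kind of automated check a monitoring regime might run: estimating the total compute of a training run and flagging it for review above an agreed threshold. The threshold value, the class fields, and the review rule are all illustrative assumptions on my part, not features of any existing agreement.

    from dataclasses import dataclass

    # Illustrative reporting threshold, in total training FLOP.
    # Figures of this magnitude have featured in recent policy
    # discussions, but the exact number here is an assumption.
    REPORTING_THRESHOLD_FLOP = 1e26

    @dataclass
    class TrainingRun:
        organisation: str
        chips: int                    # number of accelerators used
        flop_per_chip_per_sec: float  # peak throughput per chip
        duration_seconds: float
        utilisation: float            # fraction of peak actually achieved

        def estimated_flop(self) -> float:
            """Rough estimate of the total compute used in training."""
            return (self.chips * self.flop_per_chip_per_sec
                    * self.duration_seconds * self.utilisation)

    def needs_review(run: TrainingRun) -> bool:
        """Would this run trigger mandatory auditing under the
        (assumed) threshold rule?"""
        return run.estimated_flop() >= REPORTING_THRESHOLD_FLOP

    # Example: 50,000 accelerators at 1e15 FLOP/s peak, running for
    # 100 days at 40% utilisation -- about 1.7e26 FLOP, above the bar.
    run = TrainingRun("ExampleLab", 50_000, 1e15, 100 * 86_400, 0.4)
    print(f"{run.estimated_flop():.2e} FLOP; review needed: {needs_review(run)}")

A real scheme would of course be far more involved; the point of combining points 3 and 4 is that the quantity being checked – large-scale compute – is, at least in principle, observable and auditable.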

These agreements will need to include politicians of very different outlooks on the world. But as with negotiations over other global threats – nuclear proliferation, bioweapons, gross damage to the environment – politicians can reach across vast philosophical or ideological gulfs to forge agreement when it really matters.

That’s especially the case when the threat of a bigger shared “enemy”, so to speak, is increasingly evident.

AI superintelligence is not yet sitting at the table with global political leaders. But it will soon become clear that human politicians (as well as human leaders in other walks of life) are going to lose understanding, and lose control, of the AI systems being developed by corporations and other organisations that are sprinting at full speed.

However, as with responses to other global threats, there’s a collective action problem. Who is going to be first to make the necessary agreements, to sign up to them, and to place the AI development and deployment systems within their realms under the remote supervision of the new AI safety framework?
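One way to see why that question is so awkward is to treat it as a toy two-player game. The payoff numbers in the sketch below are entirely made up; what matters is the structure, which is the classic shape of a collective action problem: each power prefers mutual restraint to a mutual race, yet racing while the other restrains looks better still, so unilateral restraint is never individually rational.

    # Toy payoff matrix for two powers deciding whether to join an
    # AI safety agreement ("restrain") or press ahead ("race").
    # All numbers are illustrative assumptions, not estimates.
    PAYOFFS = {
        # (my move, their move): my payoff
        ("restrain", "restrain"): 3,  # shared safety, shared benefits
        ("restrain", "race"):     0,  # I fall behind while risk grows
        ("race",     "restrain"): 4,  # short-term advantage for me
        ("race",     "race"):     1,  # arms race: risky for everyone
    }

    def best_response(their_move: str) -> str:
        """My payoff-maximising move, given the other power's move."""
        return max(("restrain", "race"),
                   key=lambda my_move: PAYOFFS[(my_move, their_move)])

    for their_move in ("restrain", "race"):
        print(f"If they {their_move}, my best response is: {best_response(their_move)}")
    # Both lines print "race": racing dominates, even though mutual
    # restraint (3 each) beats a mutual race (1 each). Credible
    # monitoring and enforcement (points 3 and 4 above) work by
    # changing these payoffs.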

There are plenty of countries where the leaders may say: My country is ready to join that coalition. But unless these are the countries which control the resources that will be used to develop and deploy the potentially catastrophic AI superintelligence systems, such gestures have little utility.

To paraphrase Benito Mussolini, it’s not sufficient for the sparrows to request peace and calm: the eagles need to wholeheartedly join in too.

Thus, the agreement needs to start with the US and with China, and to extend rapidly to include the likes of Japan, the EU, Russia, Saudi Arabia, Israel, India, the UK, and both South and North Korea.

Some of these countries will no doubt initially resist making any such agreement. That’s where two problems need to be solved:

  • Ensuring the leaders in each country understand the arguments for points 1 through 7 listed above – starting with point 1 (the one that is most essential, to focus minds)
  • Setting in motion at least the initial group of signatories.

The fact that it is Donald Trump who will be holding the reins of power in Washington DC, rather than Joe Biden or Kamala Harris, introduces its own set of complications. However, the fundamentals, as I have sketched them above, remain the same.

The key tasks for AI safety activists, therefore, remain:

  • Deepening public understanding of points 1 to 7 above
  • Where there are gaps in the details of these points, ensuring that sufficient research takes place to address these gaps
  • Building bridges to powerful leaders, everywhere, regardless of their political philosophies, and finding ways to gain their support – so that they, in turn, can become catalysts for the next stage of global education.

23 January 2014

The future of learning and the future of climate change

Filed under: climate change, collaboration, education — David Wood @ 6:52 pm

Yesterday, I spent some time at the BETT show in London’s ExCeL centre. BETT describes itself as:

the world’s leading event for learning technology for education professionals…  dedicated to showcasing the best in UK and international learning technology products, resources, and best practice… in times where modern learning environments are becoming more mobile and ‘learning anywhere’ is more of a possibility.

I liked the examples that I saw of the increasing use of Google Apps in education, particularly on Chromebooks. These examples were described by teachers who had been involved in trials, at all levels of education. The teachers had plenty of heart-warming stories of human wonderment, of pupils helping each other, and of technology taking a clear second place to learning.

I was also impressed to hear some updates about the use of MOOCs – “massive open online courses”. For example, I was encouraged by what I heard at BETT about the progress of the UK-based FutureLearn initiative.

As Wikipedia describes FutureLearn,

FutureLearn is a massive open online course (MOOC) platform founded in December 2012 as a company majority owned by the UK’s Open University. It is the first UK-led massive open online course platform, and as of October 2013 had 26 University partners and – unlike similar platforms – includes three non-university partners: the British Museum, the British Council and the British Library.

Among other things, my interest in FutureLearn was to find out if similar technology might be used, at some stage, to help raise better awareness of general futurist topics, such as the Technological Singularity, Radical Life Extension, and Existential Risks – the kind of topics that feature in the Hangout On Air series that I run. I remain keen to develop what I’ve called “London Futurists Academy”. Could a MOOC help here?

I resolved that it was time for me to gain first-hand experience of one of these systems, rather than just relying on second-hand experience from other people.


I clicked on the FutureLearn site to see which courses might be suitable for me to join. I was soon avidly reading the details of their course Climate change: challenges and solutions:

This course aims to explain the science of climate change, the risks it poses and the solutions available to reduce those risks.

The course is aimed at the level of students entering university, and seeks to provide an inter-disciplinary introduction to what is a broad field. It engages a number of experts from the University of Exeter and a number of partner organisations.

The course will set contemporary human-caused climate change within the context of past natural climate variability. Then it will take a risk communication approach, balancing the ‘bad news’ about climate change impacts on natural and human systems with the ‘good news’ about potential solutions. These solutions can help avoid the most dangerous climate changes and increase the resilience of societies and ecosystems to those climate changes that cannot be avoided.

The course lasts eight weeks, and is described as requiring about three hours of time every week. Participants take part entirely from their own laptop. There is no fee to join. The course material is delivered via a combination of videos (with attractive graphics), online documents, and quizzes and tests. Participants are also encouraged to share some of their experiences, ideas, and suggestions via the FutureLearn online social network.

For me, the timing seemed almost ideal. The London Futurists meetup last Saturday had addressed the topic of climate change. There’s an audio recording of the event here (it lasts just over two hours). The speaker, Duncan Clark, was excellent. But discussion at the event (and subsequently continued online) confirmed that there remain lots of hard questions needing further analysis.

I plan to invite other speakers on climate change topics to forthcoming London Futurists events, but in the meantime, this FutureLearn course seems like an excellent opportunity for many people to collectively deepen their knowledge of the overall subject.

I say this after having worked my way through the material for the first week of the course. I can’t say I learnt anything surprising, but the material was useful background to many of the discussions that I keep getting involved in. It was well presented and engaging. I paid careful attention, knowing there would be an online multiple choice test at the end of the week’s set of material. A couple of the questions in the test needed me to think quite carefully before answering. After I answered the final question, I was pleased to see the following screen:

[Image: Week 1 result]

It’s fascinating to read the comments posted online by other participants in the course. It looks like over 1,700 people have completed the first week’s material. Some of the participants are aged in their 70s or 80s, and it’s their first experience of computer-based learning.

There hasn’t been much controversy in the first week’s topics. One part straightforwardly explained the reasons why the observed changes in global temperature over the last century cannot be attributed to changes in solar radiation, even though changes in solar radiation could be responsible for the “Little Ice Age” between 1550 and 1850. That part, like all the other material from the first week, seemed completely fair and objective to me. I look forward to the subsequent sections.

I said that the timing of the course was almost ideal. However, it started on the 13th of January, and FutureLearn only allow people to join the course for up to 14 days after the official start date.

That means that if any readers of this blog wish to follow my example and enrol in this course too, they’ll have to do so by this Sunday, the 26th of January.

I do hope that other people join the course, so we can compare notes, as we explore pathways to improved collaborative learning.

PS for my overall thoughts on climate change, see some previous posts in this blog, such as “Six steps to climate catastrophe” and “Risk blindness and the forthcoming energy crash”.
