dw2

23 May 2024

A potential goldmine of unanswered questions

Filed under: AGI, risks — David Wood @ 12:53 pm

It was a great event, people said. But it left a lot of questions unanswered.

The topic was progress on global AI safety. Demonstrating a variety of domain expertise, the speakers and panellists offered a range of insightful analysis, and responded to each others’ ideas. The online audience had the chance to submit questions via the Slido tool. The questions poured in (see the list below).

As the moderator of the event, I tried to select a number of the questions that had received significant audience support via thumbs-up votes. As the conversation proceeded, I kept changing my mind about which questions I would feed into the conversation next. There were so many good questions, I realized.

Far too soon, the event was out of time – leaving many excellent questions unasked.

With the hope that this can prompt further discussion about key options for the future of AI, I’m posting the entire list of questions below. That list starts with the questions that received the highest number of votes and moves down to those with the fewest (but don’t read too much into the rankings, since audience members had to spot and upvote questions whilst also listening to the fascinating conversation among the panellists).

Before you dive into that potential goldmine, you may wish to watch the recording of the event itself:

Huge thanks are due to:

  • The keynote speaker:
    • Yoshua Bengio, professor at the University of Montreal (MILA institute), a recipient of the Turing Award who is considered to be one of the fathers of Deep Learning, and the world’s most cited computer scientist
  • The panellists:
    • Will Henshall, editorial fellow at TIME Magazine, who covers tech, with a focus on AI; one recent piece he wrote details big tech lobbying on AI in Washington DC
    • Holly Elmore, an AI activist and Executive Director of PauseAI US, who holds a PhD in Organismic & Evolutionary Biology from Harvard University
    • Stijn Bronzwaer, an AI and technology journalist at the leading Dutch newspaper NRC Handelsblad, who co-authored a best-selling book on booking.com, and is the recipient of the investigative journalism award De Loep
    • Max Tegmark, a physics professor at MIT, whose current research focuses on the intersection of physics and AI, and who is also president and cofounder of the Future of Life Institute (FLI)
    • Jaan Tallinn, cofounder of Skype, CSER, and FLI, an investor in DeepMind and Anthropic, and a leading voice in AI Safety
    • Arjun Ramani, who writes for The Economist about economics and technology; his writings on AI include a piece on what humans might do in a world of superintelligence
  • The organizers, from Existential Risk Observatory
    • Otto Barten, Director, the lead organizer
    • Jesper Heshusius and Joep Sauren, for vital behind-the-scenes support
  • Everyone who submitted a question, or who expressed their opinions via thumbs-up voting!

And now for that potential goldmine of questions:

  1. Which role do you see for the UN to play in all of this?
  2. We (via Prof. Markus Krebsz) authored a UN AI CRA / Declaration and are now working towards a UN treaty on product w/embedded AI. Would you assist us, pls?
  3. There is much disagreement on how to best mitigate xrisk from AI. How can we build consensus and avoid collective decision paralysis without drastic action?
  4. Regarding education: Do we need a high-impact documentary like “An Inconvenient Truth” for AI existential risk? Would that kickstart the global discussion?
  5. Which role do you think is there for the United Nations / International community to play to protect humanity from the harms of AGI?
  6. What is more important: An informed public or informed high-level decision makers? What would be the best way to inform them and start a global discussion?
  7. Do you think that introducing Knightian Uncertainties beside probabilities and Risk for AI and ML algorithms could be useful for AI safety?
  8. What would each of you say is currently the most tractable or undervalued bottleneck for mitigating xrisk from AI? What new efforts would you like to see?
  9. What are in your opinion the key bottlenecks in AI Safety? talent, funding, # of AI Safety organisations, …?
  10. How would each panel member like to see the Bletchley Declaration expanded on?
  11. Bengio et al.’s new paper in Science has some strong wording, but stops short of calling for a global moratorium on AGI. Isn’t this the most prudent option now?
  12. What do you think of Yudkowsky and others’ concerns about oracle AIs, and why is the AI Scientist approach not vulnerable to those criticisms?
  13. Are there realistic early warning criteria (regarding AGI beginning to become an ASI) that could be written into law and used to prevent this?
  14. What are your thoughts on PauseAI?
  15. “Safe by design” is one thing, but even if that’s possible, how do we stop unsafe ASI from ever being built?
  16. Professor Bengio – How much have you heard about what’s been happening in Seoul, and is there anything you can share on countries’ updates after Bletchley Park?
  17. What is your opinion on AI Advisory Board of UN? Do you think there could be conflict between AI CEOs and Govt/Policy makers?
  18. What are in your opinion the most neglected approaches to AI Safety? particular technical/governance approaches? others (activism,…)?
  19. A harmful AI can fake alignment under evaluation, as written in Science this week. Isn’t this an unsolvable problem, invalidating most current strategies?
  20. What is the biggest barrier to educate people on AI risks?
  21. What is more important: An informed public or informed high-level decision makers? What would be the best way to educate them and start a global discussion?
  22. Can people stop interrupting the only woman on the panel please? Bad look
  23. Do you think more focus should be on clarifying that existential risks must not mean that AI will kill everyone? Perhaps focus on the slow epistemic failures?
  24. What do you want to say to a young AI engineer looking to push the state of the art of capability research?
  25. Can you expand on why you’re confident that evaluations are insufficient? How far do you think we could get by instituting rigorous evaluation requirements?
  26. Bengio: “the world is too complicated to have hard guarantees”. How do we survive without hard guarantees (in the limit of ASI)!?
  27. Any tips on where recent graduates from AI related masters can best contribute to the AI safety field?
  28. Oh no…what a serious lack of diversity in speakers. Was this an oversight ? Isn’t this one of the major issues why we have these AI risks ?
  29. I don’t want to be replaced by ai. I think by designing it this way we can evolve alongside it and learn with it
  30. Do you think society is really ready for ai systems and the responsibility of it on all of us as humanity?
  31. How far do you think we could get by instituting rigorous evaluation requirements? Is it possible that could be 95% of the work to ensure safe AI?
  32. What do you make of the events surrounding the release of Bing Chat / “Sydney” from around a year ago? What are your takeaways from what happened there?
  33. For researchers not already well funded, who live far from AI hotspot cities, what options do they have for funding? Is immigration the only option?
  34. How can a non-computer scientist (more specifically, someone in the public sector) focus their career in such a way that it contributes to this race against AI?
  35. AI proliferates far easier when compared to other existential technologies, isn’t the question of human extinction a matter of when, not if, in any time frame?
  36. How to prevent a future AI, with intelligence incomprehensible to us, to develop an emerging agency that allows it to depart from any pre-directed alignment?
  37. Safe by design: One AI system transforms Perception into a symbolic knowledge graph and one AI system transforming the symbolic knowledge graph to task space
  38. Your Bayesian AI scientist is already quite good – just add a task execution system and a visual representation of its knowledge as a graph. Alignment done.
  39. Humans need to do the decisions on the task execution. We can’t have a black box do that. Motivation about setting tasks and constraints is human territory.
  40. Yes it isn’t all consistent in the symbolic knowledge graph but one can add that by adding a consistency metric between nodes in the graph.
  41. Explaining the depth of research program is too much considering the target audience is general public, policymakers, and journalists.
  42. What would a safe AI’s goal be?
  43. Do you think AI companies should be forced to be regulated instead of given a choice, for AI safety?
  44. What about a bilateral treaty between the US and China as a start? (Re global moratorium)
  45. Can there be subtitles please?
  46. I think we can align it safely by not letting it have agentic goal setting. humans should decide on the guiderails and steps taken – task specific
  47. Safety by design: One AI summing up all concepts in a symbolic knowledge graph – task execution is the combination of these symbolic concepts. Humans can see the path the AI wants to take in the graph and decide or alter the path taken and approve it before execution
  48. What is the future of Big Tech lobbying in favour of bad practices for profit?
  49. On incentives, what about creating an “AI safety credits” system like carbon credits to reward companies investing in safer AI and penalize the ones who don’t?
  50. Unsafe use can be mitigated by design by deleting unsafe concepts from the symbolic knowledge graph – KNOWLEDGE Graph in between is all you need !!
  51. Do you have any tips on where/how recent graduates from AI related masters can best contribute to AI safety? (Many safety companies require work experience)
  52. @Yoshua: Are there technical research directions you feel are undervalued?
  53. In education, you think our education needs to be updated for the AI. not still using 1960 education methods, syllabus etc?
  54. How exactly will AI ‘kill’ everyone?
  55. There is something you are missing. It’s a symbolic graph representation. This is really painful to watch
  56. Do you think, politicians are absolutely ill equipped to even guide their populace on AI safety issues and how to go forward in mitigation of risks, utilise AI?
  57. Can there be subtitles for the YouTube video livestream?
  58. Can you elaborate on the relation between your work and Tegmark and Davidad’s efforts?
  59. Do the underpinning theories for providing proofs of safety, or quantification of risks exist for current + emerging AI? If not, how and where can we get them?
  60. How divergent is our approach to A.I. safety given its existential import? Are we involving many fields, and considering unconventional problem solving methods?
  61. By letting task execution happen on a symbolic knowledge graph we can visually see all the path that could be taken by the task execution system and decide
  62. How can I write a email to Yoshua Bengio – I think I got a good idea I want to specify in more detail than 200 characters!
  63. What are the most promising tech AI Safety agendas?
  64. “Understand LLMs” (evals, interp, …) OR “Control” OR “Make AI solve it” OR “Theory” (Galaxy-brain, …)?
  65. Symbolic knowledge graph in between perception AI net and Task execution AI net – IS ALL YOU NEED
  66. Can partner with CERAI at IIT Madras – for Research Support (Prof Ravi Balaraman). We have partnerships + they are useful for Responsible AI support and help.
  67. What is your opinion on the fear mongering crowd? People asking for a pause are scared of losing their jobs?
  68. Would you agree that ‘HARM’ is dependent on prioritized values?
  69. Does your safety model consider multiple AGI when some of them competing for resources with humans and other AGIs?
  70. Hi. How are the theorists’ ideas, such as yours, going to be fed into some sort of pipeline actioned by the companies developing this tech?
  71. The symbolic knowledge graph can have the bayesian idea from Bengio by adding coherence with other symbolic concepts.
  72. Yoshua, do you think AI systems need to be siloed from any sort of influence from governments, bad actors/states and from companies, especially from competitors?
  73. Could we leverage our social media platforms with current AI to aid in problem solving of complex problems like climate change & A.I. safety? It’s underutilized.
  74. How is lobbying for AI related to the lack of privacy and autonomy for the general public?
  75. Is the availability of AI going to impact the education and learning ability of the next generation?
  76. Should we assume coordination failure leading to catastrophic outcome is inevitable and focus resources on how to poison AI systems, some kind of hacking?
  77. Please put my idea with symbolic knowledge graphs as a middle layer and human in the loop at task execution up. I think this can change everything
  78. Do you think our education needs to be updated for the AI era. Not still using 1960 education methods, syllabus etc as confusing next generation
  79. AI is similar to the nuclear field in that, after Hiroshima, it continued with Atoms for Peace (good) and the arms race (bad). AI still didn’t have a Hiroshima.
  80. Why is nobody talking about how the AI alignment theorists’ work is going to feed into the AI development work?? If not, then you are merely a talking shop.
  81. Current LLM models are mostly trained with YouTube and other public data. Organized crime will have snatched an unaligned LLM model and trained it using darkweb
  82. Agree that aligning an LLM is an unsolved, and if solvable probably expensive to solve. The obvious low-cost solution to align AI is: do not use LLM. Comments?
  83. If A.I. becomes increasingly competent will we see a widespread infatuation with A.I. models? Stopping a group is one thing. What if it involves much of humanity?
  84. X-Genners have grown accustomed not to interfere in History’s Natural Progression – Back to Future I-II. Is the AI going to be Paradoxical or Unity of Consciousness?
  85. Where do you stand on the discussions on open source ? I worry we may lose the opportunity to profit from it in terms of improving the lack of democracy ?
  86. Where have you been most surprised in the past couple of years, or where have your views changed the most?
  87. Liability & tort law: re incentives, can we tweak damages? Pay for what happened, but also proportionally penalize taking a clear x% risk that did not manifest.
  88. Could it also be that so many people are benefitting from AI that they don’t want you to stop making it available and further developed?

Which of these questions interest you the most?

Image credit (above): Midjourney imagines audience members disappointed that their questions about AI safety weren’t featured in an otherwise excellent panel discussion.

19 March 2020

Improving online events, for the sake of a better discussion of what truly matters

In a time of travel restrictions and operating from home, we’re all on a learning curve. There’s much for us to find out about alternatives to meeting in our usual physical locations.

London Futurists have been meeting in various physical locations for twelve years. We’ve also held a number of online gatherings over that time, using tools such as Google Hangouts on Air. But now the balance needs to shift. Given the growing Covid-19 lockdown, all London Futurists physical meetings are cancelled for the time being. While the lockdown continues, the group’s activities will be 100% online.

But what does this mean in practice?

I’d like to share some reflections from the first of this new wave of London Futurists events. That online gathering took place on Saturday, 14th March, using the meeting platform Zoom.

Hopefully my observations can help others to improve their own online events. Hopefully, too, readers of this blog will offer answers or suggestions in response to questions I raise.

Context: our event

Our event last Saturday was recorded, and the footage subsequently edited – removing, for example, parts where speakers needed to be told their microphones were muted. Here’s a copy of the resulting video:

By prior arrangement, five panellists gave short introductory talks, each lasting around 5-10 minutes, to set the stage for group discussion. Between 50 and 60 audience participants were logged into the event throughout. Some of them spoke up during the event; a larger number participated in an online text chat discussion that proceeded in parallel (there’s a lightly edited copy of the text discussion here).

As you can see from the recording, the panellists and the other participants raised lots of important points during the discussion. I’ll get back to these shortly, in another blogpost. But first, some thoughts about the tools and the process that were used for this event.

Context: Zoom

Zoom is available at a number of different price levels:

  • The “Free” level is restricted to meetings of up to 40 minutes.
  • The “Pro” level – which costs £11.99 per month – supports longer meetings (up to 24 hours), recording of events, and other elements of admin and user management. This is what I use at the moment.
  • I’ve not yet explored the more expensive versions.

Users participating in an event can turn their cameras on or off, and can share their screen (in order, for example, to present slides). Participants can also choose at any time to see a view of the video feeds from all participants (up to 25 on each page), or a “presenter view” that focuses on the person whom Zoom detects as the speaker.

Recording can take place locally, on the host’s computer (and, if enabled by the organiser, on participants’ computers). Recording can also take place on the Zoom cloud. In this case, what is recorded (by default) is the “presenter view”.

The video recording can subsequently be downloaded and edited (using any video editing software – what I use is Cyberlink PowerDirector).

Limitations and improvements

I switched some time ago from Google Hangouts-on-Air (HoA) to Zoom, when Google reorganised their related software offerings during 2019.

One feature of the HoA software that I miss in Zoom is the ability for the host to temporarily “blue box” a participant, so that their screen remains highlighted, regardless of which video feeds contain speech or other noises. Without this option, what happens – as you can see from the recording of Saturday’s event – is that the presentation view can jump to display the video from a participant who is not speaking at that moment. For five seconds or so, the display shows the participant staring blankly at the camera, generally without realising that the focus is now on them. What made Zoom shift the focus is that it detected some noise from that video feed – perhaps a cough, a laugh, a moan, a chair sliding across the floor, or some background discussion.

(Participants in the event needn’t worry, however, about their blank stares or other inadvertent activity being contained in the final video. While editing the footage, I removed all such occurrences, covering up the displays, while leaving the main audio stream in place.)

In any case, participants should mute their microphones when not speaking. That avoids unwanted noise reaching the event. However, it’s easy for people to neglect to do so. For that reason, Zoom provides the host with admin control over which mics are on or off at any time. But the host may well be distracted too… so the solution is probably for me to enrol one or two participants with admin powers for the event, and ask them to keep an eye on any mics being left unmuted at the wrong times.

Another issue is the variable quality of the microphones participants were using. If the participant turns their head while speaking – for example, to consult some notes – it can make it hard to hear what they’re saying. A better solution here is to use a head-mounted microphone.

A related problem is occasional local bandwidth issues when a participant is speaking. Some or all of what they say may be obscured, slurred, or missed altogether. The broadband in my own house is a case in point. As it happens, I have an order in the queue to switch my house to a different broadband provider. But this switch is presently being delayed.

Deciding who speaks

When a topic is thought-provoking, there are generally lots of people with things to contribute to the discussion. Evidently, they can’t all talk at once. Selecting who speaks next – and deciding how long they can speak before they might need to be interrupted – is a key part of chairing successful meetings.

One guide to who should be invited to speak next, at any stage in a meeting, is the set of comments raised in the text chat window. However, in busy meetings, important points raised can become lost in the general flow of messages. Ideally, the meeting software will support a system of voting, so that other participants can indicate their choices of which questions are the most interesting. The questions that receive the most upvotes will become the next focus of the discussion.
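As an illustration, the upvote-ranking mechanism described here can be sketched in a few lines of Python. This is purely illustrative (the question texts and vote counts are hypothetical, and it is not Slido’s actual API): the audience submits questions, everyone upvotes, and the moderator surfaces the highest-voted questions first.

```python
# Minimal sketch of upvote-based question selection, in the spirit of
# tools like Slido. Illustrative only; not any real tool's API.

def rank_questions(questions):
    """Return questions sorted by vote count, highest first.

    `questions` is a list of (text, votes) tuples.
    """
    return sorted(questions, key=lambda q: q[1], reverse=True)

# Hypothetical submissions with hypothetical vote counts
submitted = [
    ("Which role do you see for the UN to play in all of this?", 12),
    ("What are your thoughts on PauseAI?", 7),
    ("Can there be subtitles please?", 3),
]

for text, votes in rank_questions(submitted):
    print(f"[{votes}] {text}")
```

In practice the ranking shifts live as new votes arrive, which is exactly why a moderator may keep changing their mind about which question to ask next.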

London Futurists have used such software in the past, including Glisser and Slido, at our physical gatherings. For online events, ideally the question voting mechanism will be neatly integrated with the underlying platform.

I recently took part in one online event (organised by the Swiss futurist Gerd Leonhard) where the basic platform was Zoom and where there was a “Q&A” voting system for questions from the audience. However, I don’t see such a voting system in the Zoom interface that I use.

Added on 20th March

Apparently there’s a Webinar add-on for Zoom that provides better control of meetings, including the Q&A voting system. The additional cost of this add-on starts from £320 per annum. I’ll be looking into this further. See this feature comparison page.

Thanks to Joe Kay for drawing this to my attention!

Summarising key points

The video recording of our meeting on Saturday lasts nearly 100 minutes. To my mind, the discussion remained interesting throughout. However, inevitably, many potential viewers will hesitate before committing 100 minutes of their time to watch the entirety of that recording. Even if they watch the playback at an accelerated speed, they would probably still prefer access to some kind of edited highlights.

Creating edited highlights of recordings of London Futurists events has long been a “wish list” item for me. I can appreciate that there’s a particular skill to identifying which parts should be selected for inclusion in any such summary. I’ll welcome suggestions on how to do this!

Learning together

More than ever, what will determine our success or failure in coming to terms with the growing Covid-19 crisis is the extent to which positive collaboration and a proactive technoprogressive mindset can pull ahead of humanity’s more destructive characteristics.

That “race” was depicted on the cover of the ebook of essays published by London Futurists in June 2014, “Anticipating 2025”. Can we take advantage of our growing interconnectivity to spread, not dangerous pathogens or destructive “fake news”, but good insights about building a better future?

That was a theme that emerged time and again during our online event last Saturday.

I’ll draw this blogpost towards a close by sharing some excerpts from the opening chapter of Anticipating 2025.

Four overlapping trajectories

The time period up to 2025 can be considered as a race involving four overlapping trajectories: technology, crisis, collaboration, and mindset.

The first trajectory is the improvement of technology, with lots of very positive potential. The second, however, has lots of very negative potential: it is the growth in likelihood of societal crisis:

  • Stresses and strains in the environment, with increased climate chaos, and resulting disputes over responsibility and corrective action
  • Stresses and strains in the financial system, which share with the environment the characteristics of being highly complex, incompletely understood, weakly regulated, and subject to potential tipping points for fast-accelerating changes
  • Increasing alienation, from people who feel unable to share in the magnitude of the riches flaunted by the technologically fortunate; this factor is increased by the threats from technological unemployment and the fact that, whilst the mean household income continues to rise, the median household income is falling
  • Risks from what used to be called “weapons of mass destruction” – chemical, biological, or even nuclear weapons, along with cyber-weapons that could paralyse our electronics infrastructure; there are plenty of “angry young men” (and even angry middle-aged men) who seem ready to plunge what they see as a corrupt world into an apocalyptic judgement.

What will determine the outcome of this race, between technological improvement and growing risk of crises? It may be a third trajectory: the extent to which people around the world are able to collaborate, rather than compete. Will our tendencies to empathise, and to build a richer social whole, triumph over our equally deep tendencies to identify more closely with “people like us” and to seek the well-being of our “in-group” ahead of that of other groups?

In principle, we probably already have sufficient knowledge, spread around the world, to solve all the crises facing us, in a smooth manner that does not require any significant sacrifices. However, that knowledge is, as I said, spread – it does not cohere in just a single place. If only we knew what we knew. Nor does that knowledge hold universal assent – far from it. It is mocked and distorted and undermined by people who have vested interests in alternative explanations – with the vested interests varying among economic, political, ideological, and sometimes sheer human cussedness. In the absence of improved practical methods for collaboration, our innate tendencies to short-term expedience and point-scoring may rule the day – especially when compounded by an economic system that emphasises competition and “keeping up with the Joneses”.

Collaborative technologies such as Wikipedia and open-source software point the way to what should be possible. But they are unlikely to be sufficient, by themselves, to heal the divisions that tend to fragment human endeavours. This is where the fourth, and final, trajectory becomes increasingly important – the transformation of the philosophies and value systems that guide our actions.

If users are resolutely suspicious of technologies that would disturb key familiar aspects of “life as we know it”, engineers will face an uphill battle to secure sufficient funding to bring these technologies to the market – even if society would eventually end up significantly improved as a result.

Politicians generally take actions that reflect the views of the electorate, as expressed through public media, opinion polls, and (occasionally) in the ballot box. However, the electorate is subject to all manner of cognitive bias, prejudice, and continuing reliance on rules of thumb which made sense in previous times but which have been rendered suspect by changing circumstances. These viewpoints include:

  • Honest people should put in forty hours of work in meaningful employment each week
  • People should be rewarded for their workplace toil by being able to retire around the age of 65
  • Except for relatively peripheral matters, “natural methods” are generally the best ones
  • Attempts to redesign human nature – or otherwise to “play God” – will likely cause disaster
  • It’s a pointless delusion to think that the course of personal decay and death can be averted.

In some cases, long-entrenched viewpoints can be overturned by a demonstration that a new technology produces admirable results – as in the case of IVF (in-vitro fertilisation). But in other cases, minds need to be changed even before a full demonstration can become possible.

It’s for this reason that I see the discipline of “culture engineering” as being equally important as “technology engineering”. The ‘culture’ here refers to cultures of humans, not cells. The ‘engineering’ means developing and applying a set of skills – skills to change the set of prevailing ideas concerning the desirability of particular technological enhancements. Both technology engineering and culture engineering are deeply hard skills; both need a great deal of attention.

A core part of “culture engineering” fits under the name “marketing”. Some technologists bristle at the concept of marketing. They particularly dislike the notion that marketing can help inferior technology to triumph over superior technology. But in this context, what do “inferior” and “superior” mean? These judgements are relative to how well technology is meeting the dominant desires of people in the marketplace.

Marketing means selecting, understanding, inspiring, and meeting key needs of what can be called “influence targets” – namely, a set of “tipping point” consumers, developers, and partners. Specifically, marketing includes:

  • Forming a roadmap of deliverables, that build, step-by-step, to delivering something of great benefit to the influence targets, but which also provide, each step of the way, something with sufficient value to maintain their active interest
  • Astutely highlighting the ways in which present (and forthcoming) products will, indeed, provide value to the influence targets
  • Avoiding any actions which, despite the other good things that are happening, alienate the influence targets; and in the event any such alienation emerges, taking swift and decisive action to address it.

Culture engineering involves politics as well as marketing. Politics means building alliances that can collectively apply power to bring about changes in regulations, standards, subsidies, grants, and taxation. Choosing the right partners, and carefully managing relationships with them, can make a big difference to the effectiveness of political campaigns. To many technologists, “politics” is as dirty a word as “marketing”. But once again, mastery of the relevant skillset can make a huge difference to the adoption of technologies.

The final component of culture engineering is philosophy – sets of arguments about fundamentals and values. For example, will human flourishing happen more fully under simpler lifestyles, or by more fully embracing the radical possibilities of technology? Should people look to age-old religious traditions to guide their behaviour, or instead seek a modern, rational, scientific basis for morality? And how should the freedoms of individuals to experiment with potentially dangerous new kinds of lifestyle be balanced against the needs of society as a whole?

“Philosophy” is (you guessed it) yet another dirty word, in the minds of many technologists. To these technologists, philosophical arguments are wastes of time. Yet again, I will disagree. Unless we become good at philosophy – just as we need to become good at both politics and marketing – we will fail to rescue the prevailing culture from its unhelpful mix of hostility and apathy towards the truly remarkable potential to use technology to positively transcend human nature. And unless that change in mindset happens, the prospects are uncertain for the development and adoption of the remarkable technologies of abundance mentioned earlier.

[End of extract from Anticipating 2025.]

How well have we done?

On the one hand, the contents of the 2014 London Futurists book “Anticipating 2025” are prescient. These chapters highlight many issues and opportunities that have grown in importance in the intervening six years.

On the other hand, I was brought down to earth by an email reply I received last week to the latest London Futurists newsletter:

I’m wondering where the Futurism is in this reaction.

Maybe the group is more aptly Reactionism.

I wanted to splutter out an answer: the group (London Futurists) has done a great deal of forward thinking over the years. We have looked at numerous trends and systems, and considered possible scenarios arising from extrapolations and overlaps. We have worked hard to clarify, for these scenarios, the extent to which they are credible and desirable, and ways in which the outcomes can be influenced.

But on reflection, a more sober thought emerged. Yes, we futurists have been trying to alert the rest of society to our collective lack of preparedness for major risks and major opportunities ahead. We have discussed the insufficient resilience of modern social systems – their fragility and lack of sustainability.

But have our messages been heard?

The answer is: not really. That’s why Covid-19 is causing such a dislocation.

It’s tempting to complain that the population as a whole should have been listening to futurists. However, we can also ask, how should we futurists change the way we talk about our insights, so that people pay us more attention?

After all, there are many worse crises potentially just around the corner. Covid-19 is by no means the most dangerous new pathogen that could strike humanity. And there are many other types of risk to consider, including malware spreading out of control, the destruction of our electronics infrastructure by something similar to the 1859 Carrington Event, an acceleration of chaotic changes in weather and climate, and devastating wars triggered by weapons systems overseen by AI software whose inner logic no-one understands.

It’s not just a new mindset that humanity needs. It’s a better way to have discussions about fundamentals – discussions about what truly matters.

Footnote: with thanks

Special thanks are due to the people who boldly stepped forward at short notice as panellists for last Saturday’s event:

and to everyone else who contributed to that discussion. I’m sorry there was no time to give sufficient attention to many of the key points raised. As I said at the end of the recording, this is a kind of cliffhanger.

7 December 2017

The super-opportunities and super-risks of super-AI

Filed under: AGI, Events, risks, Uncategorized — Tags: , , — David Wood @ 7:29 pm

2017 has seen more discussion of AI than any preceding year.

There have even been a number of meetings – 15, to be precise – in the UK Houses of Parliament of the APPG AI, an “All-Party Parliamentary Group on Artificial Intelligence”.

According to its website, the APPG AI “was set up in January 2017 with the aim to explore the impact and implications of Artificial Intelligence”.

In the intervening 11 months, the group has held 7 evidence meetings, 4 advisory group meetings, 2 dinners, and 2 receptions. 45 different MPs, along with 7 members of the House of Lords and 5 parliamentary researchers, have been engaged in APPG AI discussions at various times.


Yesterday evening, at a reception in Parliament’s Cholmondeley Room & Terrace, the APPG AI issued a 12 page report with recommendations in six different policy areas:

  1. Data
  2. Infrastructure
  3. Skills
  4. Innovation & entrepreneurship
  5. Trade
  6. Accountability

The headline “key recommendation” is as follows:

The APPG AI recommends the appointment of a Minister for AI in the Cabinet Office

The Minister would have a number of different responsibilities:

  1. To bring forward the roadmap which will turn AI from a Grand Challenge to a tool for untapping UK’s economic and social potential across the country.
  2. To lead the steering and coordination of: a new Government Office for AI, a new industry-led AI Council, a new Centre for Data Ethics and Innovation, a new GovTech Catalyst, a new Future Sectors Team, and a new Tech Nation (an expansion of Tech City UK).
  3. To oversee and champion the implementation and deployment of AI across government and the UK.
  4. To keep public faith high in these emerging technologies.
  5. To ensure UK’s global competitiveness as a leader in developing AI technologies and capitalising on their benefits.

Overall I welcome this report. It’s a definite step in the right direction. Via a programme of further evidence meetings and workshops planned throughout 2018, I expect real progress can be made.

Nevertheless, it’s my strong belief that most of the public discussion on AI – including the discussions at the APPG AI – fails to appreciate the magnitude of the potential changes that lie ahead. There’s insufficient awareness of:

  • The scale of the opportunities that AI is likely to bring – opportunities that might better be called “super-opportunities”
  • The scale of the risks that AI is likely to bring – “super-risks”
  • The speed at which it is possible (though by no means guaranteed) that AI could transform itself via AGI (Artificial General Intelligence) to ASI (Artificial Super Intelligence).

These are topics that I cover in some of my own presentations and workshops. The events organisation Funzing have asked me to run a number of seminars with the title “Assessing the risks from superintelligent AI: Elon Musk vs. Mark Zuckerberg…”


The reference to Elon Musk and Mark Zuckerberg reflects the fact that these two titans of the IT industry have spoken publicly about the advent of superintelligence, taking opposing views on the balance of opportunity vs. risk.

In my seminar, I take the time to explain their differing points of view. Other thinkers on the subject of AI that I cover include Alan Turing, IJ Good, Ray Kurzweil, Andrew Ng, Eliezer Yudkowsky, Stuart Russell, Nick Bostrom, Isaac Asimov, and Jaan Tallinn. The talk is structured into six sections:

  1. Introducing the contrasting ideas of Elon Musk and Mark Zuckerberg
  2. A deeper dive into the concepts of “superintelligence” and “singularity”
  3. From today’s AI to superintelligence
  4. Five ways that powerful AI could go wrong
  5. Another look at accelerating timescales
  6. Possible responses and next steps

At the time of writing, I’ve delivered this Funzing seminar twice. Here’s a sampling of the online reviews:

Really enjoyed the talk, David is a good presenter and the presentation was very well documented and entertaining.

Brilliant eye opening talk which I feel very effectively conveyed the gravity of these important issues. Felt completely engaged throughout and would highly recommend. David was an excellent speaker.

Very informative and versatile content. Also easy to follow if you didn’t know much about AI yet, and still very insightful. Excellent Q&A. And the PowerPoint presentation was of great quality and attention was spent on detail putting together visuals and explanations. I’d be interested in seeing this speaker do more of these and have the opportunity to go even more in depth on specific aspects of AI (e.g., specific impact on economy, health care, wellbeing, job market etc). 5 stars 🙂

Best Funzing talk I have been to so far. The lecture was very insightful. I was constantly tuned in.

Brilliant weighing up of the dangers and opportunities of AI – I’m buzzing.

If you’d like to attend one of these seminars, three more dates are in my Funzing diary:

Click on the links for more details, and to book a ticket while they are still available 🙂

30 January 2014

A brilliant example of communication about science and humanity


Do you enjoy great detective puzzles? Do you like noticing small anomalies, and turning them into clues to an unexpected explanation? Do you like watching world-class scientists at work, piecing together insights to create new theories, and coping with disappointments when their theories appear to be disproved?

In the book “Our mathematical universe”, the mysteries being addressed are some of the very biggest imaginable:

  • What is everything made out of?
  • Where does the universe come from? For example, what made the Big Bang go “bang”?
  • What gives science its authority to speak with so much confidence about matters such as the age and size of the universe?
  • Is it true that the constants of nature appear remarkably “fine-tuned” so as to allow the emergence of life – in a way suggesting a miracle?
  • What does modern physics (including quantum mechanics) have to teach us about mind and consciousness?
  • What are the chances of other intelligent life existing in our galaxy (or even elsewhere in our universe)?
  • What lies in the future of the human race?

The author, Max Tegmark, is a Swedish-born professor of physics at MIT. He has made a host of significant contributions to the development of cosmology – some of which you can read about in the book. He also shows himself, in my view, to be a first-class philosopher and a first-class communicator.

Indeed, this may be the best book on the philosophy of physics that I have ever read. It also has important implications for the future of humanity.

There are some very big ideas in the book. It gives reasons for believing that our universe exists alongside no fewer than four different types of parallel universes. The “level 4 multiverse” is probably one of the grandest conceptions in all of philosophy. (What’s more, I’m inclined to think it’s the correct description of reality. At its heart, despite its grandness, it’s actually a very simple theory, which is a big plus in its favour.)

Much of the time, the writing in the book is accessible to people with pre-university level knowledge of science. On occasion, the going gets harder, but readers should be able to skip over these sections. I recommend reading the book all the way through, since the last chapter contains many profound ideas.

I think you’ll like this book if:

  • You have a fondness for pure mathematics
  • You recognise that the scientific explanation of phenomena can be every bit as uplifting as pre-scientific supernatural explanations
  • You are ready to marvel at the ingenuity of scientific investigators going all the way back to the ancient Greeks (including those who first measured the distance from the Earth to the Sun)
  • You are critical of “quantum woo woo” hand-waving that says that quantum mechanics proves that consciousness is somehow a non-local agent (and that minds will survive bodily death)
  • You want to find out more about Hugh Everett, the physicist who first proposed that “the quantum wave function never collapses”
  • You have a hunch that there’s a good answer to the question “why is there something rather than nothing?”
  • You want to see scientists in action, when they are confronted by evidence that their favoured theories are disproved by experiment
  • You’re ready to laugh at the misadventures that a modern cosmologist experiences (including eminent professors falling asleep in the audience of his lectures)
  • You’re interested in the considered viewpoint of a leading scientist about matters of human existential risk, including nuclear wars and the technological singularity.

Even more than all these good reasons, I highlight this book as an example of what the world badly needs: clear, engaging advocacy of the methods of science and reason, as opposed to mysticism and obscurantism.

Footnote: For my own views about the meaning of quantum mechanics, see my earlier blogpost “Schrödinger’s Rabbits”.
