
23 May 2024

A potential goldmine of unanswered questions

Filed under: AGI, risks — David Wood @ 12:53 pm

It was a great event, people said. But it left a lot of questions unanswered.

The topic was progress on global AI safety. Demonstrating a variety of domain expertise, the speakers and panellists offered a range of insightful analysis and responded to each other’s ideas. The online audience had the chance to submit questions via the Slido tool. The questions poured in (see the list below).

As the moderator of the event, I tried to select a number of the questions that had received significant audience support via thumbs-up votes. As the conversation proceeded, I kept changing my mind about which questions I would feed into the conversation next. There were so many good questions, I realized.

Far too soon, the event was out of time – leaving many excellent questions unasked.

With the hope that this can prompt further discussion about key options for the future of AI, I’m posting the entire list of questions below. The list starts with the questions that received the highest number of votes and moves to those with the fewest (but don’t read too much into what the audience members managed to spot and upvote whilst also listening to the fascinating conversation among the panellists).

Before you dive into that potential goldmine, you may wish to watch the recording of the event itself:

Huge thanks are due to:

  • The keynote speaker:
    • Yoshua Bengio, professor at the University of Montreal (MILA institute), a recipient of the Turing Award who is considered to be one of the fathers of Deep Learning, and the world’s most cited computer scientist
  • The panellists:
    • Will Henshall, editorial fellow at TIME Magazine, who covers tech, with a focus on AI; one recent piece he wrote details big tech lobbying on AI in Washington DC
    • Holly Elmore, an AI activist and Executive Director of PauseAI US, who holds a PhD in Organismic & Evolutionary Biology from Harvard University
    • Stijn Bronzwaer, an AI and technology journalist at the leading Dutch newspaper NRC Handelsblad, who co-authored a best-selling book on booking.com, and is the recipient of the investigative journalism award De Loep
    • Max Tegmark, a physics professor at MIT, whose current research focuses on the intersection of physics and AI, and who is also president and cofounder of the Future of Life Institute (FLI)
    • Jaan Tallinn, cofounder of Skype, CSER, and FLI, an investor in DeepMind and Anthropic, and a leading voice in AI Safety
    • Arjun Ramani, who writes for The Economist about economics and technology; his writings on AI include a piece on what humans might do in a world of superintelligence
  • The organizers, from Existential Risk Observatory
    • Otto Barten, Director, the lead organizer
    • Jesper Heshusius and Joep Sauren, for vital behind-the-scenes support
  • Everyone who submitted a question, or who expressed their opinions via thumbs-up voting!

And now for that potential goldmine of questions:

  1. Which role do you see for the UN to play in all of this?
  2. We (via Prof. Markus Krebsz) authored a UN AI CRA / Declaration and are now working towards a UN treaty on product w/embedded AI. Would you assist us, pls?
  3. There is much disagreement on how to best mitigate xrisk from AI. How can we build consensus and avoid collective decision paralysis without drastic action?
  4. Regarding education: Do we need a high-impact documentary like “An Inconvenient Truth” for AI existential risk? Would that kickstart the global discussion?
  5. Which role do you think is there for the United Nations / International community to play to protect humanity from the harms of AGI?
  6. What is more important: An informed public or informed high-level decision makers? What would be the best way to inform them and start a global discussion?
  7. Do you think that introducing Knightian Uncertainties beside probabilities and Risk for AI and ML algorithms could be useful for AI safety?
  8. What would each of you say is currently the most tractable or undervalued bottleneck for mitigating xrisk from AI? What new efforts would you like to see?
  9. What are in your opinion the key bottlenecks in AI Safety? talent, funding, # of AI Safety organisations, …?
  10. How would each panel member like to see the Bletchley Declaration expanded on?
  11. Bengio et al.’s new paper in Science has some strong wording, but stops short of calling for a global moratorium on AGI. Isn’t this the most prudent option now?
  12. What do you think of Yudkowsky’s and others’ concerns about oracle AIs, and why is the AI Scientist approach not vulnerable to those criticisms?
  13. Are there realistic early warning criteria (regarding AGI beginning to become an ASI) that could be written into law and used to prevent this?
  14. What are your thoughts on PauseAI?
  15. “Safe by design” is one thing, but even if that’s possible, how do we stop unsafe ASI from ever being built?
  16. Professor Bengio – How much have you heard about what’s been happening in Seoul, and is there anything you can share on countries’ updates after Bletchley Park?
  17. What is your opinion on AI Advisory Board of UN? Do you think there could be conflict between AI CEOs and Govt/Policy makers?
  18. What are in your opinion the most neglected approaches to AI Safety? particular technical/governance approaches? others (activism,…)?
  19. A harmful AI can fake alignment under evaluation, as written in Science this week. Isn’t this an unsolvable problem, invalidating most current strategies?
  20. What is the biggest barrier to educate people on AI risks?
  21. What is more important: An informed public or informed high-level decision makers? What would be the best way to educate them and start a global discussion?
  22. Can people stop interrupting the only woman on the panel please? Bad look
  23. Do you think more focus should be on clarifying that existential risks must not mean that AI will kill everyone? Perhaps focus on the slow epistemic failures?
  24. What do you want to say to a young AI engineer looking to push the state of the art of capability research?
  25. Can you expand on why you’re confident that evaluations are insufficient? How far do you think we could get by instituting rigorous evaluation requirements?
  26. Bengio: “the world is too complicated to have hard guarantees”. How do we survive without hard guarantees (in the limit of ASI)!?
  27. Any tips on where recent graduates from AI related masters can best contribute to the AI safety field?
  28. Oh no…what a serious lack of diversity in speakers. Was this an oversight? Isn’t this one of the major issues why we have these AI risks?
  29. I don’t want to be replaced by ai. I think by designing it this way we can evolve alongside it and learn with it
  30. Do you think society is really ready for ai systems and the responsibility of it on all of us as humanity?
  31. How far do you think we could get by instituting rigorous evaluation requirements? Is it possible that could be 95% of the work to ensure safe AI?
  32. What do you make of the events surrounding the release of Bing Chat / “Sydney” from around a year ago? What are your takeaways from what happened there?
  33. For researchers not already well funded, who live far from AI hotspot cities, what options do they have for funding? Is immigration the only option?
  34. How can a non-computer scientist (more specifically, someone in the public sector) focus their career in such a way that it contributes to this race against AI?
  35. AI proliferates far easier when compared to other existential technologies, isn’t the question of human extinction a matter of when, not if, in any time frame?
  36. How do we prevent a future AI, with intelligence incomprehensible to us, from developing an emergent agency that allows it to depart from any pre-directed alignment?
  37. Safe by design: One AI system transforms Perception into a symbolic knowledge graph and one AI system transforming the symbolic knowledge graph to task space
  38. Your Bayesian AI scientist is already quite good – just add a task execution system and a visual representation of its knowledge as a graph. Alignment done.
  39. Humans need to do the decisions on the task execution. We can’t have a black box do that. Motivation about setting tasks and constraints is human territory.
  40. Yes it isn’t all consistent in the symbolic knowledge graph but one can add that by adding a consistency metric between nodes in the graph.
  41. Explaining the depth of research program is too much considering the target audience is general public, policymakers, and journalists.
  42. What would a safe AI’s goal be?
  43. Do you think AI companies should be forced to be regulated instead of given a choice, for AI safety?
  44. What about a bilateral treaty between the US and China as a start? (Re global moratorium)
  45. Can there be subtitles please?
  46. I think we can align it safely by not letting it have agentic goal setting. humans should decide on the guiderails and steps taken – task specific
  47. Safety by design: One AI summing up all concepts in a symbolic knowledge graph – task execution is the combination of these symbolic concepts. Humans can see the path the AI wants to take in the graph and decide or alter the path taken and approve it before execution
  48. What is the future of Big Tech lobbying in favour of bad practices for profit?
  49. On incentives, what about creating an “AI safety credits” system like carbon credits to reward companies investing in safer AI and penalize the ones who don’t?
  50. Unsafe use can be mitigated by design by deleting unsafe concepts from the symbolic knowledge graph – KNOWLEDGE Graph in between is all you need!!
  51. Do you have any tips on where/how recent graduates from AI related masters can best contribute to AI safety? (Many safety companies require work experience)
  52. @Yoshua: Are there technical research directions you feel are undervalued?
  53. In education: do you think our education needs to be updated for the AI era, not still using 1960s education methods, syllabus etc.?
  54. How exactly will AI ‘kill’ everyone?
  55. There is something you are missing. It’s a symbolic graph representation. This is really painful to watch
  56. Do you think, politicians are absolutely ill equipped to even guide their populace on AI safety issues and how to go forward in mitigation of risks, utilise AI?
  57. Can there be subtitles for the YouTube video livestream?
  58. Can you elaborate on the relation between your work and Tegmark and Davidad’s efforts?
  59. Do the underpinning theories for providing proofs of safety, or quantification of risks exist for current + emerging AI? If not, how and where can we get them?
  60. How divergent is our approach to A.I. safety given its existential import? Are we involving many fields, and considering unconventional problem solving methods?
  61. By letting task execution happen on a symbolic knowledge graph we can visually see all the path that could be taken by the task execution system and decide
  62. How can I write a email to Yoshua Bengio – I think I got a good idea I want to specify in more detail than 200 characters!
  63. What are the most promising tech AI Safety agendas?
  64. “Understand LLMs” (evals, interp, …) OR “Control” OR “Make AI solve it” OR “Theory” (Galaxy-brain, …)?
  65. Symbolic knowledge graph in between perception AI net and Task execution AI net – IS ALL YOU NEED
  66. Can partner with CERAI at IIT Madras – for Research Support (Prof Ravi Balaraman). We have partnerships + they are useful for Responsible AI support and help.
  67. What is your opinion on the fear mongering crowd? People asking for a pause are scared of losing their jobs?
  68. Would you agree that ‘HARM’ is dependent on prioritized values?
  69. Does your safety model consider multiple AGI when some of them competing for resources with humans and other AGIs?
  70. Hi. How are the theorists’ ideas, such as yours, going to be fed into some sort of pipeline actioned by the companies developing this tech?
  71. The symbolic knowledge graph can have the bayesian idea from Bengio by adding coherence with other symbolic concepts.
  72. Yoshua, do you think AI systems need to be siloed from any sort of influence from governments, bad actors/states and from companies, especially from competitors?
  73. Could we leverage our social media platforms with current AI to aid in problem solving of complex problems like climate change & A.I. safety? It’s underutilized.
  74. How is lobbying for AI related to the lack of privacy and anonymity for the general public?
  75. Is the availability of AI going to impact the education and learning ability of the next generation?
  76. Should we assume coordination failure leading to catastrophic outcome is inevitable and focus resources on how to poison AI systems, some kind of hacking?
  77. Please put my idea with symbolic knowledge graphs as a middle layer and human in the loop at task execution up. I think this can change everything
  78. Do you think our education needs to be updated for the AI era? Not still using 1960s education methods, syllabus etc., as it is confusing the next generation.
  79. AI is similar to the nuclear field in that, after Hiroshima, it continued with Atoms for Peace (good) and the arms race (bad). AI still didn’t have a Hiroshima.
  80. Why is nobody talking about how the AI alignment theorists’ work is going to feed into the AI development work?? If not, then you are merely a talking shop.
  81. Current LLM models are mostly trained with YouTube and other public data. Organized crime will have snatched an unaligned LLM model and trained it using darkweb
  82. Agree that aligning an LLM is an unsolved problem, and if solvable, probably expensive to solve. The obvious low-cost solution to align AI is: do not use LLMs. Comments?
  83. If A.I. becomes increasingly competent will we see a widespread infatuation with A.I. models? Stopping a group is one thing. What if it involves much of humanity?
  84. X-Genners have grown accustomed not to interfere in History’s Natural Progression – Back to Future I-II. Is the AI going to be Paradoxical or Unity of Consciousness?
  85. Where do you stand on the discussions on open source? I worry we may lose the opportunity to profit from it in terms of improving the lack of democracy?
  86. Where have you been most surprised in the past couple of years, or where have your views changed the most?
  87. Liability & tort law: re incentives, can we tweak damages? Pay for what happened, but also proportionally penalize taking a clear x% risk that did not manifest.
  88. Could it also be that so many people are benefitting from AI that they don’t want you to stop making it available and further developed?

Which of these questions interest you the most?

Image credit (above): Midjourney imagines audience members disappointed that their questions about AI safety weren’t featured in an otherwise excellent panel discussion.
