Before I address the transformation of today’s AI into what researchers often call AGI – Artificial General Intelligence – let me ask a question about the Italian cities of Pompeii and Herculaneum.
In which year, during the Roman empire, were these two cities severely damaged by an earthquake?
What’s your answer?
If you remember your history lessons from schooldays, you might be tempted to suggest the date 79 AD.
But the correct answer to my question is seventeen years earlier, namely 62 AD. That’s when an earthquake caused extensive damage to buildings in these two cities, and (it is reported) the death of a flock of 600 sheep.
Images of that damage – and subsequent repairs – were recorded on some contemporaneous sculptures discovered by later generations of archaeologists.
What happened in 79 AD was something altogether more explosive, namely the volcanic eruption of Mount Vesuvius. Over the course of two days, that eruption released 100,000 times the thermal energy of the atomic bombings of Hiroshima and Nagasaki. No wonder it caused so much devastation.
The 79 AD eruption caught the inhabitants of Pompeii and Herculaneum woefully unprepared. Prior to the eruption, it seems that no-one suspected that Vesuvius might unleash such a catastrophe. The concept of a volcano as it is understood today was barely part of their thinking. As for earthquakes, they were generally thought to be caused by movements of the air.
Indeed, historical records and archaeological findings suggest that the citizens of Pompeii and Herculaneum considered Vesuvius to be a rather ordinary mountain. They went about their daily lives, engaging in commerce, farming, and enjoying the amenities of a prosperous Roman city, without fear of the looming disaster. It wasn’t until the eruption began that they realized the true nature of the mountain.
Can you see where this analogy is going…?
Arguably, the world has already had its 62 AD moment with AI systems, as the effects of large language models have swept around the globe over the last 24 months.
What might that early “earthquake” foretell?
Just as the citizens of Pompeii and Herculaneum were ignorant of the turbulent geodynamics that were accumulating underneath Mount Vesuvius, most modern-day citizens have at best a hazy understanding of the intense pressures and heady temperatures that are accumulating within ever more capable AI systems.
So what if 600 sheep perished, people shrug. They were the victims of bad air. Or something. Nothing that should cause any change in how society is structured. Let’s keep calm and carry on.
So what if the emperor Nero himself was briefly interrupted while singing in a theatre in nearby Naples in 64 AD, by a minor earthquake that modern researchers consider to be another foreshock of the 79 AD cataclysm. Nero took these tremors in his stride, and continued singing.
It’s as if world leaders briefly convened to discuss the potential dangers of next-generation AI, in the UK in November 2023 and again in Seoul in May 2024, but then resumed singing (or whatever else world leaders tend to do).
To be clear, in this analogy, I’m comparing the 79 AD eruption to what may happen when AI systems reach a certain level of generality – when these systems are sufficiently intelligent that they can take over the task of improving themselves, whilst needing little (if any) additional input from human developers.
Ahead of that moment, AI systems emit a stream of rumbles, but the world mainly carries on with business as usual (and distractions as usual). But when AI systems reach a sufficient accumulation of various sorts of intelligence – when they “reach critical mass”, to use a different metaphor – it will probably be too late for the citizens of the world to alter the trajectory of the subsequent intelligence explosion.
Just like the citizens of Pompeii and Herculaneum in 79 AD, who watched extraordinary fireworks raining down in their direction, the citizens of the world may have a brief “OMFG” moment, before losing all control of what happens next.
Two types of questions about the transition from AI to AGI
Many readers will be sceptical about the above analogy. Indeed, they may have two types of question about the potential transition from AI to AGI:
- Questions over desirability
- Questions over credibility
The first type of question concerns the implications of the arrival of AGI. Does AGI really have the potential to cause catastrophic damage – deep damage to human social systems, employment systems, political systems, the military balance of power, the environment, and even the survival of humanity? Does it also have the potential for many wonderful outcomes – vastly improved social systems, economic relationships, collaborative politics, global peace, a sustainable superabundance, and widespread humanitarian flourishing?
The second type of question concerns the mechanism of the arrival of AGI. These questions challenge whether we even need to address the first type. What’s at stake here is the possibility that AI won’t be able to reach the level of AGI – at least, not any time soon. That scepticism draws strength from observing the many flaws and defects of present-day AI systems.
Above, I spoke of “the intense pressures and heady temperatures that are accumulating within ever more capable AI systems”. But sceptics see no route from today’s bug-laden AI systems to hypothesised future AIs that could outperform humans across all cognitive tasks, however general. In this sceptical view, the “pressures” and “temperatures” will likely prove unproductive. They may produce hype, inflated share prices, and lots of idle chatter, but the world should basically continue with business as usual (and distractions as usual).
Opposing these sceptics, there is not just a single counterargument, but many.

That is, there is not just one proposal for how AI could be transformed into AGI, but many such proposals: not a single suggested technical architecture, or a single suggested optimal project framework, but many architectures and many project frameworks.
Some of these proposals build upon the recent initiatives in large language models:
- They may suggest that all that’s needed for AGI to emerge is to scale up these models – more parameters, more training data, refinements to the transformer architecture, and so on
- In other cases, they may envision combinations of large language models with alternative ideas in AI, such as probabilistic reasoning, evolutionary algorithms, explicit world modelling, and more
Other proposals involve:
- Changes in the hardware involved, or changes in the network architecture
- More attention to the distinction between correlation and causation
- Adoption of quantum computing – quantum hardware and/or quantum software
- New ideas from the frontiers of mathematics
- New insights from how the human brain seems to operate
- Paying special attention to the notion of consciousness
- The emergence of AGI from a rich interactive network of simpler modules of intelligence (see the toy sketch after this list)
If you’re interested in getting your head around some of the different technical architectures being proposed, and in exploring how they compare, you should consider attending AGI 2024, which is taking place in Seattle next month. For the details, read on.
Alternatively, your own preference may be to focus on the desirability questions. Well, in that case, I have a message for you. The desirability questions cannot be entirely separated from the credibility questions. Discussions of the potential implications of AGI need to be grounded in confidence that the kind of AGI being envisioned is something that could credibly arise and exist. And therefore, again, I suggest that you should consider attending AGI 2024.
It’s all happening in Seattle, 13-16 August
The AGI 2024 website describes the annual AGI conference as “the foremost conference dedicated entirely to the latest AGI research”.
It goes on to state:
There is a growing recognition, in the AI field and beyond, that the threshold of achieving AGI — where machine intelligence matches or even surpasses human intelligence — is within our near-term reach.
This increases the importance of building our fundamental understanding of what AGI is and can be, and what are the various routes for getting there.
The 2024 conference is designed to explore a number of fundamental questions:
- What kinds of AGI systems may be feasible to create in the near term, with what relative strengths and weaknesses?
- What are the biggest unsolved problems we need to resolve to get to human-level AGI?
- What are the right ways to measure the capabilities of AI systems as they approach and then exceed human-level AGI?
- What are the most feasible methodologies to provide emerging AGI systems with guidance, both cognitively and ethically, as they develop?
- How should we think about the roles of industry, government, academia and the open source community in the next stages of development toward AGI?
- How will the advent of AGI alter the fabric of society, our jobs, and our daily lives?
The keynote speakers listed include:
- Ben Goertzel – Chairman, AGI Society, and CEO, SingularityNET
- Ray Kurzweil – CEO and Owner, Kurzweil Technologies, and Director of Engineering, Google
- François Chollet – Software Engineer, AI Researcher, and Senior Staff Engineer, Google
- Geordie Rose – Founder and CEO, Sanctuary.ai
- John Laird – Professor of Computer Science and Engineering, University of Michigan
- Joscha Bach – AI Strategist, Liquid AI
- Gary Marcus – Professor of Psychology and Neural Science, New York University
- Rachel St. Clair – CEO, Simuli
- Christof Koch – Neurophysiologist, Computational Neuroscientist, Allen Institute
- Paul Rosenbloom – Professor Emeritus of Computer Science, University of Southern California
- Alex Ororbia – Assistant Professor, RIT, and Director, Neural Adaptive Computing (NAC) Laboratory
- David Spivak – Senior Scientist and Institute Fellow, Topos Institute
- Josef Urban – Head of AI, Czech Institute of Informatics, Robotics and Cybernetics (CIIRC)
- Alexey Potapov – Chief AGI Officer, SingularityNET
- Patrick Hammer – Postdoc Researcher, KTH Division of Robotics, Perception and Learning
- David Hanson – Founder and CEO, Hanson Robotics
There’s also:
- A choice among five “deep dive” workshops, taking place on the first day of the conference
- Around 18 shorter papers, including some by very bright researchers at earlier stages of their careers
- A range of posters exploring additional ideas.
It’s a remarkable line-up.
The venue is the HUB (Husky Union Building) on the campus of the University of Washington in Seattle.
For full details of AGI 2024, including options to register either to attend physically or to view some of the sessions remotely, visit the event website.
I’ll be looking out for friends and colleagues there. It will be an important opportunity to think harder, collectively, about ensuring that the outcome of AGI is incredibly positive rather than disastrously negative.
Image credit: Midjourney.
