dw2

30 August 2008

Anticipating the singularity

Filed under: Moore's Law, Singularity — David Wood @ 10:05 am

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

The first time I read these words, a chill went down my spine. They were written in 1965 by IJ Good, a British statistician who had studied mathematics at Cambridge University pre-war, worked with Alan Turing and others in the highly secret code-breaking labs at Bletchley Park, and was involved in the creation of the Colossus computer (“the world’s first programmable, digital, electronic, computing device”).

The point where computers become better than humans at generating new computers – or (not quite the same thing) the point where AI becomes better than humans at generating new AI – is nowadays often called the singularity (or, sometimes, “the Technological Singularity”). To my mind, it’s a hugely important topic.

The name “Singularity” was proposed by maths professor and science fiction author Vernor Vinge, writing in 1993:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended…

“When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale…

“From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control…

“I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown…”

If Vinge’s prediction is confirmed, the Singularity will happen within 30 years of 1993, namely by 2023. (He actually says, in his paper, “I’ll be surprised if this event occurs before 2005 or after 2030”.)

Of course, it’s notoriously hard to predict timescales for future technology. Some things turn out to take a lot longer than expected. AI is a prime example. Progress with AI has frequently turned out to be disappointing.

But not all technology predictions turn out badly. The best technology prediction of all time is probably the one made by Intel co-founder Gordon Moore. Writing in 1965 (coincidentally, the same year as IJ Good above), Moore noted:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer…”

For more than forty years, Moore’s Law has held roughly true – with (as revised by Moore himself) the doubling period taking around 24 months instead of 12 months. And it is this persistent growth in computing power that leads other writers – most famously, Ray Kurzweil – to continue to predict the reasonably imminent onset of the singularity. In his 2005 book “The Singularity Is Near: When Humans Transcend Biology”, Kurzweil picks the date 2045.
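To make the doubling arithmetic concrete, here’s a small Python sketch of the kind of extrapolation involved. The starting figure of roughly 64 components in 1965, and the two doubling periods, are illustrative assumptions of mine rather than Moore’s exact data:

    # Illustrative sketch of Moore's extrapolation: components per chip
    # under steady exponential doubling. The starting count (~64 in 1965)
    # and the doubling periods are assumptions, not measured figures.

    def components(start_count, start_year, year, doubling_months):
        """Project the component count in a given year under steady doubling."""
        doublings = (year - start_year) * 12 / doubling_months
        return start_count * 2 ** doublings

    # Moore's original 1965 projection: doubling every 12 months
    print(round(components(64, 1965, 1975, 12)))   # 65536 - close to his "65,000"

    # The revised trend: doubling roughly every 24 months
    print(round(components(64, 1965, 2008, 24)))   # ~1.9e8; illustrative only, not an actual 2008 transistor count

The only point of the sketch is that a constant doubling period, sustained over decades, produces the kind of staggering growth that singularity forecasts lean on.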

Intel’s present-day CTO, Justin Rattner, reviewed some of Kurzweil’s ideas in his keynote on the future of technology at the Intel Developer Forum in San Francisco on the 21st of August. The presentation was called “Crossing the chasm between humans and machines”.

To check what Justin said, you can view the official Intel video available here. There’s also a brief slide-by-slide commentary at the Singularity Hub site, as well as lots of other web coverage (eg here and here). Justin said that the singularity “might be only a few decades away”, and his talk includes examples of the technological breakthroughs that will plausibly be involved in this grander breakthrough.

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, faster hardware will just delay the point where they fail.
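To illustrate that scaling point, here’s a rough Python sketch of my own (the cost functions and budget are assumed purely for illustration, not benchmarks). It asks how much larger a problem a 1000-times-faster machine can handle in the same time:

    # How much larger a problem does a 1000x faster machine buy you,
    # for algorithms with different scaling behaviour?
    # The cost functions and budget below are illustrative assumptions.

    import math

    def max_n(cost, budget):
        """Largest n with cost(n) <= budget, via doubling then binary search."""
        hi = 1
        while cost(hi) <= budget:
            hi *= 2
        lo = hi // 2
        while lo + 1 < hi:
            mid = (lo + hi) // 2
            if cost(mid) <= budget:
                lo = mid
            else:
                hi = mid
        return lo

    budget = 10_000_000      # operations affordable on today's hardware
    speedup = 1000           # hypothetical hardware improvement

    for name, cost in [("n log n", lambda n: n * math.log2(n + 1)),
                       ("n^2",     lambda n: n ** 2),
                       ("2^n",     lambda n: 2 ** n)]:
        before = max_n(cost, budget)
        after = max_n(cost, budget * speedup)
        print(f"{name:8s} before: {before:>10}  after: {after:>12}")

    # Result (roughly): the n log n algorithm handles problems hundreds of
    # times larger, n^2 about 32x larger, but 2^n gains only ~10 more items.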

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of modern humans, but are thought to have been wired up differently from ours. Brain size by itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”
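It’s worth spelling out what that comparison implies. The following back-of-envelope calculation is mine, not Rose’s or Yudkowsky’s; the hardware speed ratio is a deliberately rough, hypothetical figure, and the only point is that the algorithmic gain dwarfs the hardware gain:

    # Back-of-envelope reading of the Rose/Yudkowsky comparison.
    # Quoted figures: Blue Gene/L + 1977 algorithm ~ 10 years,
    #                 Apple II + 2007 algorithm ~ 3 years.
    # The hardware ratio is a rough, hypothetical assumption
    # (order of magnitude only), used purely for illustration.

    years_fast_hw_old_alg = 10     # Blue Gene/L running the 1977 algorithm
    years_slow_hw_new_alg = 3      # Apple II running the 2007 algorithm
    hardware_ratio = 1e9           # assumed raw speed gap, Blue Gene/L vs Apple II

    # If the Apple II is ~1e9 times slower yet still finishes sooner, the 2007
    # algorithm must be doing vastly less work on the same problem:
    implied_algorithmic_gain = (hardware_ratio *
                                years_fast_hw_old_alg / years_slow_hw_new_alg)
    print(f"Implied algorithmic speedup: roughly {implied_algorithmic_gain:.0e}x")
    # -> roughly 3e+09x under these assumptions

Whatever the exact hardware ratio, the conclusion is the same: for this problem, thirty years of better mathematics beat thirty years of better silicon.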

Another researcher who puts more emphasis on the potential breakthrough capabilities of the right kind of software, rather than hardware, is Ben Goertzel. Two years ago, he gave a talk entitled “Ten years to the Singularity if we really try.” One year ago, he gave an updated version, “Nine years to the Singularity if we really really try”. Ben suggests that the best place for new AIs to be developed is inside virtual worlds (such as Second Life). He might be right. It wouldn’t be the first time that significant software breakthroughs happened in arenas that mainstream society regards as peripheral or even objectionable.

Even bigger than the question of the plausible timescale of a future technological singularity is the question of whether we can influence the outcome, so that it is positive for humanity rather than a disaster. That will be a key topic of the Singularity Summit 2008, which will be held in San Jose on the last Saturday of October.

The speakers at the summit include five of the people I’ve mentioned above.

(And there are 16 other named speakers – including many that I view as truly fascinating thinkers.)

The publicity material for the Singularity Summit 2008 describes the event as follows:

“The Singularity Summit gathers the smartest people around to explore the biggest ideas of our time. Learn where humanity is headed, meet the people leading the way, and leave inspired to create a better world.”

That’s a big claim, but it might just be right.

8 Comments »

  1. Unlike gazillionaire Kurzweil, I find this to be a singularly unappealing future. Maybe I read too many scifi books as a kid — or maybe it’s from watching the movie Colossus: The Forbin Project at an impressionable age.

    But either way, the idea of a sentient computer more capable than humans is about as threatening as a Terminator movie. And, alas, a lot more imminent.

    Comment by Joel West — 31 August 2008 @ 4:41 am

  2. Hi Joel,

    >"I find this to be a singularly unappealing future. Maybe I read too many scifi books as a kid — or maybe it‘s from watching the movie Colossus: The Forbin Project at an impressionable age. But either way, the idea of a sentient computer more capable than humans is about as threatening as a Terminator movie…”

    If the Singularity goes bad, I doubt the outcome will be remotely like any of these Hollywood movies – in which humans still manage to have a chance against malignant super-intelligent machines, by making (what shall I say?) a super-human effort. The outcome will probably be much, much worse. The machines (by dint of rapid self-improvement) will be way more intelligent and powerful than even the most powerful Hollywood actor. It will be game over, very quickly indeed. That’s Singularity Hell.

    >"Unlike gazillionaire…"

    If the Singularity goes well, it won’t just be the wealthy who benefit. Radically improved AI could in principle devise and manufacture very low cost super-drugs that will cure diseases worldwide. They could also design weather control mechanisms, to safely defuse hurricanes. Etc. That’s a hint of Singularity Heaven.

    >"…Kurzweil"

    Kurzweil is a populist and sometimes engaging writer but (needless to say) he’s not to everyone’s taste. He is nominated by Washington Post writer Joel Garreau as the iconic spokesperson for “The Heaven Scenario” – the prediction of what the radical development and application of new technology can achieve. This is in Garreau’s 2006 book “Radical Evolution – The Promise and Peril of Enhancing Our Minds, Our Bodies – and What It Means to Be Human”. Personally, I find Garreau considerably more convincing than Kurzweil.

    Garreau nominates Bill Joy of Sun Microsystems as the iconic spokesperson for “The Hell Scenario”, and virtual reality pioneer Jaron Lanier for “The Prevail Scenario”, which is a sort of (not completely convincing) middle path.

    // dw2-0

    Comment by David Wood — 31 August 2008 @ 2:04 pm

  3. As usual, great stuff David. And what a fascinating topic.

    There is one fundamental flaw in this “intelligence explosion” logic, though. That is, evolution does not favor intelligence by itself.

    It is just human arrogance to think that human intelligence is the perfection – and an inevitable result – of evolution. In fact, if you look at the natural world, all kinds of bacteria and insects are much more successful than the human race (in terms of absolute numbers or biomass). And if the human race commits mass suicide by nuclear holocaust, cockroaches might still survive and happily continue their non-intelligent evolution. And finally, human intelligence does not improve through evolution unless it helps the individual’s chances of survival and reproduction: http://www.imdb.com/title/tt0387808.

    In evolutionary systems, whether biological or AI, intelligence increases *only* if it helps the evolutionary unit’s chances of survival and reproduction.

    And this is the key.

    In computing networks, does it?

    P.S. Assuming that the intelligence explosion singularity happens someday in the connected computing network we are building, wouldn’t it cause a bandwidth and power-consumption explosion too, bringing down the whole system? Doesn’t this mean that the first beings on this planet of higher intelligence/consciousness are destined to a very short and miserable life? And doesn’t it mean that we wouldn’t even know that the singularity had happened, with the news reported only as “The Great Internet collapse of 2046”? Hehe.

    Comment by Tommi Vilkamo — 1 September 2008 @ 6:46 am

  4. A truly fascinating topic but I have to agree with Tommi, not at all likely.

    The two main reasons for this as I see them are:

    1) We really don’t understand intelligence, and the field of AI has really only made baby steps since its inception. The only way we can possibly speed up this process is if the machines start learning for themselves and evolving their own representations. Machine learning is progressing, but I haven’t seen anything like a starting point for this process appearing (not to say that it won’t, but software would have to get us there; faster hardware never can – it would just speed up the onset of the singularity once the process began).

    2) If there were a sentient machine that was more intelligent than us, might it just NOT design a more intelligent machine, which would eventually see its creator as a waste of resources and destroy it? i.e. it’s quite likely that a machine smarter than humans would be smart enough not to bring about its own destruction, since most humans are already smart enough for this (a few hard-AI nuts not included).

    Personally I believe that progress in AI is likely to remain domain specific for longer than any of the predictions you’ve given here. Researchers in the field have to be wildly optimistic to get any funding.

    Comment by aYrftDcJl9gp4LIqLhPqwLJrfgHC2lI- — 1 September 2008 @ 1:38 pm

  5. Hi Tommi,

    >"evolution does not favor intelligence by itself. It is just human arrogance to think that human intelligence is the perfection – and an inevitable result – of evolution."

    I completely agree. My belief that we should pay serious attention to the possibility of the Singularity (with its very significant spurt in intelligence) doesn’t depend on any idea that evolution naturally leads to more intelligence. Instead, I see other more concrete factors at work, such as regular increases in hardware power, and (occasionally!) improved software algorithms. The risk is that these factors will result in the creation of superhuman intelligence.

    This spurt of intelligence may or may not have been intended, by the researchers whose work resulted in this breakthrough. For an analogy, consider people introducing animal or plant species from one part of the world, to another part. Sometimes there are unexpected side-effects, with drastic consequences. We could see something similar from researchers who are merely experimenting with new algorithms, without realising their full consequences.

    >"assuming that the intelligence explosion singularity happens someday in the connected computing network we are building, wouldn't it cause a bandwidth and power-consumption explosion too, bringing down the whole system?"

    That’s one possible outcome. But another is that Singularity-level intelligence will (almost instantaneously) figure out radically improved new communications protocols that reduce the actual bulk of data flow, and reduce power consumption.

    >"assuming that the intelligence explosion singularity happens someday in the connected computing network we are building"

    By the way, that’s only one of five possible ways in which the Singularity could occur – according to the most recent paper by Vernor Vinge, “Signs of the Singularity”.

    // dw2-0

    Comment by David Wood — 1 September 2008 @ 7:14 pm

  6. Hi aYrftDcJl9gp4LIqLhPqwLJrfgHC2lI-

    >"We really don't understand intelligence and the field of AI has really only made baby steps since its inception."

    That may be true, but it doesn’t mean that AI can never progress to the level of superhuman intelligence. Nor does it mean that progress to superhuman levels is necessarily going to take many, many decades.

    >"The only way we can possibly speed up this process is if the machines start learning for themselves and evolving their own representations"

    I believe many AI researchers will agree with you here, and will claim that they are already working on such systems. Other AI researchers might say that other routes are also worth exploring.

    Because I can’t rule out in principle the possibility that researchers might stumble upon significantly improved systems or algorithms, I believe it’s well worth us trying to think ahead about that very possibility. I expect that’s what a lot of the discussion at Singularity Summit 2008 will be about.

    >"If there were a sentient machine that was more intelligent than us then might it just NOT design a more intelligent machine which would eventually see its creator as a waste of resources and destroy it."

    The research program of “Friendly AI” aims to avoid that possibility, by ensuring that the AI which eventually reaches superhuman intelligence will have friendliness towards humans built into it at a very deep level. It’s a tough challenge, but it has some plausibility.

    // dw2-0

    Comment by David Wood — 1 September 2008 @ 7:30 pm

  7. > By the way, that's only one of five possible ways in which the Singularity could occur – according to the most recent paper by Vernor Vinge, "Signs of the Singularity".

    Uh. Made me think. Thanks for the link.

    Comment by Tommi Vilkamo — 2 September 2008 @ 6:33 am

  8. Hi David,

    Sorry, aYrftDcJl9gp4LIqLhPqwLJrfgHC2lI- is the OpenID that Yahoo gave me by default before I made it use a sensible one.

    I’m extremely interested in AI, having studied it at university at postgraduate level. I’m still very pessimistic about it, though. People have been saying the same things since the beginning and there really isn’t much progress – it’s an incredibly hard problem. I agree that doesn’t mean there never will be any, and SOME people should occasionally debate things like the singularity, because the possible consequences are enormous even if the probability is extremely low. There is a similar situation for some of the frightening advances in nanotechnology that have been postulated.

    Also, imagining what a hypothetical super-intelligence could do to help humanity may well lead to very promising research directions for domain-specific AI.

    My last thought for this thread is that there are already a lot of intelligences way beyond the average human intelligence on the planet. They mostly sit around pondering the meaning of the universe via theoretical physics or some other problems in advanced abstract mathematics. Very occasionally something they do is picked up on by slightly less advanced intelligences and turned into technologies that do make changes to the world – usually positive but occasionally extremely destructive (Hiroshima?!). Most of the time, the world is completely ignorant of their thoughts and actions and carries on unchanged. Might not a similar thing happen with artificial super-intelligences? Is it possible that they may be capable of doing all sorts of things for or against humanity but most of the time they just aren’t interested – a bit like us with ants?

    Mark

    P.S. Thanks for writing about so many diverse and interesting topics on this blog, it’s a really good read.

    Comment by m_p_wilcox — 2 September 2008 @ 2:19 pm

