26 October 2008

The Singularity will go mainstream

Filed under: AGI, brain simulation, cryonics, Moore's Law, robots, Singularity — David Wood @ 1:49 pm

The concept of the coming technological singularity is going to enter mainstream discourse, and it won’t go away. It will stop being something that can be dismissed as freaky or outlandish – something of interest only to marginal types and radical thinkers. Instead, it’s going to become something that every serious discussion of the future has to contemplate. Writing a long-term business plan – or a long-term political agenda – without covering the singularity as one of the key topics will increasingly be seen as a sign of incompetence. We can imagine the responses, just a few years from now: “Your plan lacks a section on how the onset of the singularity is going to affect the take-up of your product. So I can’t take this proposal seriously”. And: “You’ve analysed five trends that will impact the future of our company, but you haven’t included the singularity – so everything else you say is suspect.”

In short, that’s the main realisation I reached by attending the Singularity Summit 2008 yesterday, in the Montgomery Theater in San Jose. As the day progressed, the evidence mounted up that the arguments in favour of the singularity will be increasingly persuasive, to wider and wider groups of people. Whether or not the singularity will actually happen is a slightly different question, but it’s no longer going to be possible to dismiss the concept of the singularity as irrelevant or implausible.

To back up my assertion, here are some of the highlights of what was a very full day:

Intel’s CTO and Corporate VP Justin Rattner spoke about “Countdown to Singularity: accelerating the pace of technological innovation at Intel”. He described a series of technological breakthroughs that would be likely to keep Moore’s Law operational until at least 2020, and he listed ideas for how it could be extended even beyond that. Rattner clearly has a deep understanding of the technology of semiconductors.

Dharmendra Modha, the manager of IBM’s cognitive computing lab at Almaden, explained how his lab had already used IBM supercomputers to simulate an entire rat brain, with the simulation running at one tenth of real-time speed. He explained his reasons for expecting that his lab should be able to simulate an entire human brain, running at full speed, by 2018 (I sketch a rough scaling check just after the list below). This was possible as a result of the confluence of “three hard disruptive trends”:

  1. Neuroscience has matured
  2. Supercomputing meets the brain
  3. Nanotechnology meets the brain.
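
Here is that rough scaling check on the 2018 forecast. It’s my own back-of-envelope sketch, not Modha’s: the neuron counts are approximate public estimates, and the assumption that required compute scales linearly with neuron count is a large simplification.

  # Rough scaling from a rat-scale simulation to a human-scale one.
  # Neuron counts are approximate public estimates, not Modha's figures.
  rat_neurons = 55e6      # roughly 55 million neurons in a rat brain
  human_neurons = 86e9    # roughly 86 billion neurons in a human brain
  current_speed = 0.1     # the rat simulation ran at one tenth of real time

  # Assume required compute scales linearly with neuron count.
  scale_factor = (human_neurons / rat_neurons) / current_speed
  print(f"Compute gap to close: about {scale_factor:,.0f}x")  # ~15,636x

Four orders of magnitude in a decade is more than plain 24-month Moore’s Law doubling would deliver by itself (ten years gives only about 32x), which is presumably why Modha points to the confluence of all three trends rather than to supercomputer growth alone.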

Cynthia Breazeal, Associate Professor of Media Arts and Sciences, MIT, drew spontaneous applause from the audience part-way through her talk, by showing a video of one of her socially responsive robots, Leonardo. The video showed Leonardo acting on beliefs about what various humans themselves believed (including beliefs that Leonardo could deduce were false). As Breazeal explained:

  • Until recently, robotics has been about robots interacting with things (such as helping to manufacture cars)
  • In her work, robotics is about robots interacting with people in order to do things. Because humans are profoundly social, these robots will also have to be profoundly social – they are being designed to relate to humans in psychological terms. Hence the expressions of emotion on Leonardo’s face (and the other body language).

Marshall Brain, founder of “How Stuff Works”, also spoke about robots, and the trend for them to take over work tasks previously done by humans: McDonald’s waitresses, Wal-Mart shop assistants, vehicle drivers, construction workers, teachers…

James Miller, Associate Professor of Economics, Smith College, explicitly addressed the topic of how increasing belief in the likelihood of an oncoming singularity would change people’s investment decisions. Once people realise that, within (say) 20-30 years, the world could be transformed into something akin to paradise, with much greater lifespans and abundant opportunities for extremely rich experiences, many will take much greater care than before to survive long enough to reach that event. Interest in cryonics is likely to boom, since people can reason that their bodies will only need to be vitrified for a short period of time, rather than having to trust their descendants to look after them for unknown hundreds of years. People will shun dangerous activities. They’ll also avoid locking money into long-term investments. And they’ll abstain from lengthy training courses (for example, to master a foreign language) if they believe that technology will shortly render all that arduous learning irrelevant.
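
To make Miller’s reasoning concrete, here is a toy expected-value calculation. It’s my own illustration, not a model Miller presented, and every number in it is invented:

  # Toy model: expected payoff of a 30-year investment when there is some
  # subjective probability that a singularity makes accumulated wealth
  # irrelevant before the investment matures. All numbers are invented.
  p_singularity = 0.5            # subjective chance of singularity within 30 years
  multiple_if_none = 4.0         # the investment grows 4x over 30 years otherwise
  multiple_if_singularity = 0.0  # wealth assumed irrelevant afterwards

  expected_multiple = (p_singularity * multiple_if_singularity
                       + (1 - p_singularity) * multiple_if_none)
  print(f"Expected multiple over 30 years: {expected_multiple}x")  # 2.0x

On these invented numbers, an expected 2x over thirty years compares poorly with shorter, safer commitments – and the same structure applies to Miller’s examples of lengthy training courses and dangerous activities.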

Not every speaker was optimistic. Well-known author and science journalist John Horgan gave examples of where the progress of science and technology has been, not exponential, but flat:

  • nuclear fusion
  • ending infectious diseases
  • Richard Nixon’s “war on cancer”
  • gene therapy treatments
  • treating mental illness.

Horgan chided advocates of the singularity for their use of “rhetoric that is more appropriate to religion than science” – thereby risking damaging the standing of science at a time when science needs as much public support as it can get.

Ray Kurzweil, author of “The Singularity is Near”, responded to this by agreeing that not every technology progresses exponentially. However, those that become information sciences do experience exponential growth. As medicine and health increasingly become digital information sciences, they are beginning to show the same effect. Although in the past I’ve thought that Kurzweil sometimes overstates his case, on this occasion I thought he spoke with clarity and restraint, and with good evidence to back up his claims. He also presented updated versions of the graphs from his book. In the book, these graphs tended to stop around 2002. The slides Kurzweil showed at the summit continued up to 2007. It does appear that the rate of progress with information sciences is continuing to accelerate.

Earlier in the day, science fiction author and former maths and computing science professor Vernor Vinge gave his own explanation for this continuing progress:

Around the world, in many fields of industry, there are hundreds of thousands of people who are bringing the singularity closer, through the improvements they’re bringing about in their own fields of research – such as enhanced human-computer interfaces. They mainly don’t realise they are advancing the singularity – they’re not working to an agreed overriding vision for their work. Instead, they’re doing what they’re doing because of the enormous incremental economic plus of their work.

Under questioning by CNBC editor and reporter Bob Pisani, Vinge said that he sticks with the forecast he made many years ago, that the singularity would (“barring major human disasters”) happen by 2030. Vinge also noted that rapidly improving technology made the future very hard to predict with any certainty. “Classic trendline analysis is seriously doomed.” Planning should therefore focus on scenario evaluation rather than trend lines. Perhaps unsurprisingly, Vinge suggested that more forecasters should read science fiction, where scenarios can be developed and explored. (Since I’m midway through reading and enjoying Vinge’s own most recent novel, “Rainbows End” – set in 2025 – I agree!)

Director of Research at the Singularity Institute, Ben Goertzel, described a staircase of potential applications for the “OpenCog” system of “Artificial General Intelligence” he has been developing with co-workers (partially funded by Google, via the Google Summer of Code):

  • Teaching virtual dogs to dance
  • Teaching virtual parrots to talk
  • Nurturing virtual babies
  • Training virtual scientists that can read vast swathes of academic papers on your behalf
  • And more…

Founder and CSO of Innerspace Foundation, Pete Estep, gave perhaps one of the most thought-provoking presentations. The goal of Innerspace is, in short, to improve brain functioning. In more detail, “To establish bi-directional communication between the mind and external storage devices.” Quoting from the FAQ on the Innerspace site:

The IF [Innerspace Foundation] is dedicated to the improvement of human mind and memory. Even when the brain operates at peak performance learning is slow and arduous, and memory is limited and faulty. Unfortunately, other of the brain’s important functions are similarly challenged in our complex modern world. As we age, these already limited abilities and faculties erode and fail. The IF supports and accelerates basic and applied research and development for improvements in these areas. The long-term goal of the foundation is to establish relatively seamless two-way communication between people and external devices possessing clear data storage and computational advantages over the human brain.

Estep explained that he was a singularity agnostic: “it’s beyond my intellectual powers to decide if a singularity within 20 years is feasible”. However, he emphasised that it is evident to him that “the singularity might be near”. And this changes everything. Throughout history, and extending round the world even today, “there have been too many baseless fantasies and unreasonable rationalisations about the desirability of death”. The probable imminence of the singularity will help people to “escape” from these mind-binds – and to take a more vigorous and proactive stance towards planning and actually building desirable new technology. The singularity that Estep desires is not one of super-powerful machine intelligence, but one of “AI+BCI: AI combined with a brain-computer interface”. This echoed words from robotics pioneer Hans Moravec that Vernor Vinge had reported earlier in the day:

“It’s not a singularity if you are riding the curve. And I intend to ride the curve.”

On the question of how to proactively improve the chances of beneficial technological development, Peter Diamandis spoke outstandingly well. He’s the founder of the X-Prize Foundation. I confess I hadn’t previously appreciated the scale of this Foundation’s accomplishments. It was an eye-opener – as, indeed, was the whole day.

30 August 2008

Anticipating the singularity

Filed under: Moore's Law, Singularity — David Wood @ 10:05 am

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

The first time I read these words, a chill went down my spine. They were written in 1965 by IJ Good, a British statistician who had studied mathematics at Cambridge University pre-war, worked with Alan Turing and others in the highly secret code-breaking labs at Bletchley Park, and was involved in the creation of the Colossus computer (“the world’s first programmable, digital, electronic, computing device”).

The point where computers become better than humans at generating new computers – or (not quite the same thing) the point where AI becomes better than humans at generating new AI – is nowadays often called the singularity (or, sometimes, “the Technological Singularity”). To my mind, it’s a hugely important topic.

The name “Singularity” was proposed by maths professor and science fiction author Vernor Vinge, writing in 1993:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended…

“When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale…

“From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control…

“I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown…”

If Vinge’s prediction is confirmed, the Singularity will happen within 30 years of 1993, namely by 2023. (He actually says, in his paper, “I’ll be surprised if this event occurs before 2005 or after 2030”.)

Of course, it’s notoriously hard to predict timescales for future technology. Some things turn out to take a lot longer than expected. AI is a prime example. Progress with AI has frequently turned out to be disappointing.

But not all technology predictions turn out badly. The best technology prediction of all time is probably the one made by Intel co-founder Gordon Moore. Coincidentally writing in 1965 (like IJ Good, mentioned above), Moore noted:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer…”

For more than forty years, Moore’s Law has held roughly true – with (as revised by Moore himself) the doubling period taking around 24 months instead of 12 months. And it is this persistent growth in computing power that leads other writers – most famously, Ray Kurzweil – to continue to predict the reasonably imminent onset of the singularity. In his 2005 book “The Singularity Is Near: When Humans Transcend Biology”, Kurzweil picks the date 2045.
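
Moore’s extrapolation in that 1965 quote is easy to reproduce: starting from roughly 64 components per chip in 1965 (my assumed baseline, purely for illustration) and doubling every year for ten years gives 64 × 2^10 = 65,536 – the “65,000” Moore cites. A minimal sketch, including the revised 24-month doubling period:

  # Reproducing Moore's 1965 extrapolation. The 64-component baseline
  # for 1965 is my own assumption, for illustration only.
  print(64 * 2 ** 10)  # 65536 -- Moore's "65,000" components by 1975

  def components(year, base_year=1965, base=64, months_per_doubling=24):
      """Components per chip under a simple doubling model (revised law)."""
      doublings = (year - base_year) * 12 / months_per_doubling
      return base * 2 ** doublings

  print(f"{components(2008):,.0f}")  # of the order of 10^8 components

With the 24-month doubling period, this toy model lands within an order of magnitude of the transistor count of a 2008 processor – roughly the forty-year track record described above.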

Intel’s present-day CTO, Justin Rattner, reviewed some of Kurzweil’s ideas in his keynote on the future of technology at the Intel Developer Forum in San Francisco on the 21st of August. The presentation was called “Crossing the chasm between humans and machines”.

To check what Justin said, you can view the official Intel video available here. There’s also a brief slide-by-slide commentary at the Singularity Hub site, as well as lots of other web coverage (eg here and here). Justin said that the singularity “might be only a few decades away”, and his talk includes examples of the technological breakthroughs that will plausibly be involved in this grander breakthrough.

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, improving the hardware will just delay the point where they fail.
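
To see why the algorithm’s scaling dominates the hardware’s speed, compare how far a doubling of hardware stretches an exponential-time algorithm versus a near-linear one. A minimal sketch, with an invented step budget:

  import math

  def max_n(cost, steps_budget):
      """Largest problem size n whose cost fits within the step budget."""
      n = 1
      while cost(n + 1) <= steps_budget:
          n += 1
      return n

  budget = 1e6  # invented: steps the hardware performs in the time available
  for hardware in (1, 2):  # then double the hardware's speed
      exp_n = max_n(lambda n: 2.0 ** n, budget * hardware)
      nlogn_n = max_n(lambda n: n * math.log2(n + 1), budget * hardware)
      print(f"{hardware}x hardware: 2^n reaches n={exp_n}, "
            f"n log n reaches n={nlogn_n:,}")
  # Doubling the hardware adds just +1 to the exponential case, but
  # nearly doubles the tractable problem size for n log n.

Doubling the hardware buys the exponential algorithm a single extra unit of problem size, while the near-linear algorithm almost doubles its reach.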

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”
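
One way to read Rose’s numbers: since time equals work divided by speed, the ratio of the two quoted running times reveals how much of the gain came from the algorithm, once you assume a raw hardware ratio. In the sketch below, the hardware ratio is my own rough guess; only the two running times come from the quote.

  # Implied algorithmic speedup from Geordie Rose's comparison above.
  # The hardware speed ratio is my rough guess; the two running times
  # are the ones quoted in the text.
  hardware_gain = 1e8      # assumed raw speed ratio, Blue Gene/L vs Apple II
  t_new_hw_old_alg = 10.0  # years: Blue Gene/L running the 1977 algorithm
  t_old_hw_new_alg = 3.0   # years: Apple II running the 2007 algorithm

  # time = work / speed, so: algorithm_gain = hardware_gain * (t1 / t2)
  algorithm_gain = hardware_gain * (t_new_hw_old_alg / t_old_hw_new_alg)
  print(f"Implied algorithmic speedup: about {algorithm_gain:.0e}x")  # ~3e8

On that guess, thirty years of algorithmic progress outpaced thirty years of hardware progress – which is exactly Yudkowsky’s point about preferring modern theory on old hardware.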

Another researcher who puts more emphasis on the potential breakthrough capabilities of the right kind of software, rather than hardware, is Ben Goertzel. Two years ago, he gave a talk entitled “Ten years to the Singularity if we really try.” One year ago, he gave an updated version, “Nine years to the Singularity if we really really try”. Ben suggests that the best place for new AIs to be developed is inside virtual worlds (such as Second Life). He might be right. It wouldn’t be the first time that significant software breakthroughs happened in arenas that mainstream society regards as peripheral or even objectionable.

Even bigger than the question of the plausible timescale of a future technological singularity is the question of whether we can influence the outcome, so that it is positive for humanity rather than a disaster. That will be a key topic of the Singularity Summit 2008, which will be held in San Jose on the last Saturday of October.

The speakers at the summit include five of the people I’ve mentioned above:

  • Vernor Vinge
  • Ray Kurzweil
  • Justin Rattner
  • Eliezer Yudkowsky
  • Ben Goertzel

(And there are 16 other named speakers – including many that I view as truly fascinating thinkers.)

The publicity material for the Singularity Summit 2008 describes the event as follows:

“The Singularity Summit gathers the smartest people around to explore the biggest ideas of our time. Learn where humanity is headed, meet the people leading the way, and leave inspired to create a better world.”

That’s a big claim, but it might just be right.
