dw2

2 November 2009

Halloween nightmare scenario, early 2020’s

Filed under: AGI, friendly AI, Singularity, UKH+, UKTA — David Wood @ 5:37 pm

On the afternoon of Halloween 2009, Shane Legg ran through a wide-ranging set of material in his presentation “Machine Super Intelligence” to an audience of 50 people at the UKH+ meeting in Birkbeck College.

Slide 43 of 43 was the climax.  (The slides are available from Shane’s website, where you can also find links to YouTube videos of the event.)

It may be unfair of me to focus on the climax, but I believe it deserves a lot of attention.

Spoiler alert!

The climactic slide was entitled “A vision of the early 2020’s: the Halloween Scenario”.  It listed three assumptions about what will be the case by the early 2020’s, drew two conclusions, and then highlighted one big problem.

  1. First assumption – desktop computers with petaflop computing power will be widely available;
  2. Second assumption – AI researchers will have established powerful algorithms that explain and replicate deep belief networks;
  3. Third assumption – brain reinforcement learning will be fairly well understood.

The first assumption is a fairly modest extrapolation of current trends in computing, and isn’t particularly contentious.

The second assumption was, in effect, the implication of around the first 30 slides of Shane’s talk, taking around 100 minutes of presentation time (interspersed with lots of audience Q&A, as typical at UKH+ meetings).  People can follow the references from Shane’s talk (and in other material on his website) to decide whether they agree.

For example (from slides 25-26), an implementation of a machine intelligence algorithm called MC-AIXI can already learn to solve or play:

  • simple prediction problems
  • Tic-Tac-Toe
  • Paper-Scissors-Rock (a good example of a non-deterministic game)
  • mazes where it can only see locally
  • various types of Tiger games
  • simple computer games, e.g. Pac-Man

and is now being taught checkers (also known as draughts).  Chess will be the next step.  Note that this algorithm does not start with the rules or winning strategies for these games built in (that is, it is not a game-specific AI program); it works out how to play them well from experience, using its general intelligence.
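
To make the distinction concrete, here is a minimal sketch of learning from reward alone.  It is emphatically not MC-AIXI itself (which combines Monte-Carlo planning with a learned model of its environment); it is just a tabular Q-learning agent in a made-up toy environment, showing how a sensible policy can emerge with no game rules built in.  All names and parameter values below are illustrative assumptions.

# A minimal sketch (not MC-AIXI itself): a tabular Q-learning agent that
# begins with no knowledge of its toy "game" and improves purely from
# reward signals.  The environment is hypothetical: a short corridor in
# which repeatedly moving right eventually reaches a rewarded goal.

import random
from collections import defaultdict

N_STATES = 5                      # positions 0..4; position 4 is the goal
ACTIONS = [-1, +1]                # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2

Q = defaultdict(float)            # Q[(state, action)] -> estimated value

def step(state, action):
    """Environment dynamics - unknown to the agent except via experience."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def choose_action(state):
    """Epsilon-greedy: mostly exploit current estimates, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):
    state, done = 0, False
    while not done:
        action = choose_action(state)
        next_state, reward, done = step(state, action)
        # Nudge the estimate towards the reward plus the discounted value
        # of the best action available from the next state.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# The learned policy is "always move right", although that rule was never
# programmed in - it was discovered from reward feedback alone.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])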

The third assumption was the implication of the remaining 12 slides, in which Shane described (amongst other topics) work on something called “restricted Boltzmann machines”.
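
For readers who want a feel for what a restricted Boltzmann machine involves, here is a minimal sketch of one trained with a single step of contrastive divergence (CD-1).  The layer sizes, learning rate and toy data are arbitrary choices of mine rather than anything from Shane’s slides; the “deep belief networks” of the second assumption are built by stacking machines like this one, layer upon layer.

# A minimal sketch of a restricted Boltzmann machine trained with one step
# of contrastive divergence (CD-1), using only numpy.  The tiny data set
# and layer sizes are arbitrary, purely for illustration.

import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 6, 3
W = rng.normal(0, 0.1, size=(n_visible, n_hidden))  # connection weights
b_v = np.zeros(n_visible)                            # visible biases
b_h = np.zeros(n_hidden)                             # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample(p):
    return (rng.random(p.shape) < p).astype(float)

# Toy binary training data: two repeated patterns.
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

lr = 0.1
for epoch in range(500):
    for v0 in data:
        # Positive phase: infer hidden units from the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = sample(p_h0)
        # Negative phase: one Gibbs step to get a "reconstruction".
        p_v1 = sigmoid(h0 @ W.T + b_v)
        v1 = sample(p_v1)
        p_h1 = sigmoid(v1 @ W + b_h)
        # CD-1 update: move weights towards the data statistics and away
        # from the model's own reconstruction statistics.
        W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
        b_v += lr * (v0 - v1)
        b_h += lr * (p_h0 - p_h1)

# With luck, the reconstruction of a training pattern now resembles it.
v = data[0]
print(sigmoid(sample(sigmoid(v @ W + b_h)) @ W.T + b_v).round(2))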

As stated in slide 38, on brain reinforcement learning (RL):

This area of research is currently progressing very quickly.

New genetically modified mice allow researchers to precisely turn on and off different parts of the brain’s RL system in order to identify the functional roles of the parts.

I’ve asked a number of researchers in this area:

  • “Will we have a good understanding of the RL system in the brain before 2020?”

Typical answer:

  • “Oh, we should understand it well before then. Indeed, we have a decent outline of the system already.”
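
For readers unfamiliar with this area: much of the work on the brain’s RL system is framed in terms of temporal-difference learning, whose “reward prediction error” is widely interpreted as resembling the signal carried by dopamine neurons.  The sketch below is only an illustration of that computational idea, with made-up numbers; it makes no claim about the actual biology.

# A toy illustration of temporal-difference (TD) learning.  V holds value
# estimates for four states in a chain; reward arrives only after the last
# state, yet earlier states gradually learn to predict it.  The "delta"
# term is the reward prediction error discussed in the brain-RL literature.

GAMMA, ALPHA = 0.9, 0.1
V = [0.0] * 4                                      # value estimate per state

def run_episode():
    for s in range(4):
        reward = 1.0 if s == 3 else 0.0            # reward on the final step
        next_value = V[s + 1] if s < 3 else 0.0    # terminal state has value 0
        delta = reward + GAMMA * next_value - V[s] # prediction error ("surprise")
        V[s] += ALPHA * delta                      # learn from the surprise

for _ in range(300):
    run_episode()

# The expectation of reward migrates backwards along the chain, so the
# prediction error shifts from the reward itself to the cues preceding it.
print([round(v, 2) for v in V])                    # approaches [0.73, 0.81, 0.9, 1.0]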

Adding up these three assumptions, the first conclusion is:

  • Many research groups will be working on brain-like AGI architectures

The second conclusion is that, inevitably:

  • Some of these groups will demonstrate some promising results, and will be granted access to the super-computers of the time – which will, by then, be exaflop.

But of course, it is when AGI algorithms that perform at almost human level on petaflop desktop machines are let loose on exaflop supercomputers that machine super intelligence might suddenly come into being – with results that might be completely unpredictable.

On the other hand, Shane observes that people who are working on the program of Friendly AI do not expect to have made significant progress in the same timescale:

  • By the early 2020’s, there will be no practical theory of Friendly AI.

Recall that the goal of Friendly AI is to devise a framework for AI research that will ensure that any resulting AIs have a very high level of safety for humanity, no matter how super-intelligent they may become.  In this school of thought, all AI research would, after some time, be constrained to adopt this framework, in order to avoid the risk of a catastrophic super-intelligence explosion.  However, as Shane’s final slide suggests, the Friendly AI framework is unlikely to be in place by the time we need it.

And that’s the Halloween nightmare scenario.

How should we respond to this scenario?

One response is to seek to shift the weight of AI research away from other forms of AGI (such as MC-AIXI) and into Friendly AI.  This appears to be very hard, especially since research proceeds independently, in many different parts of the world.

A second response is to find reasons to believe that the Friendly AI project will have more time to succeed – in other words, reasons to believe that AGI will take longer to materialise than the early 2020’s date mentioned above.  But given the progress that appears to be happening, that seems to me a reckless course of action.

Footnote: If anyone thinks they can make a good presentation on the topic of Friendly AI to a forthcoming UKH+ meeting, please get in touch!

18 October 2009

Influencer – the power to change anything

Filed under: books, catalysts, communications, Singularity — David Wood @ 12:48 am

Are people in general dominated by unreason?  Are there effective ways to influence changes in behaviour, for good, despite the irrationality and other obstacles to change?

Here’s an example quoted by Eliezer Yudkowsky in his presentation Cognitive Biases and Giant Risks at the Singularity Summit earlier this month.  The original research was carried out by psychologists Amos Tversky and Daniel Kahneman in 1982:

115 professional analysts, employed by industry, universities, or research institutes, were randomly divided into two different experimental groups who were then asked to rate the probability of two different statements, each group seeing only one statement:

  1. “A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”
  2. “A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983.”

Estimates of probability were low for both statements, but significantly lower for the first group (1%) than the second (4%).

The moral?  Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable. (The cessation of diplomatic relations could happen for all kinds of reasons, not just in response to the invasion. So the first statement must, in rationality, be more probable than the second.)
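
In probability terms (my own restatement of the same point): writing A for “a Russian invasion of Poland” and B for “a complete suspension of diplomatic relations”,

P(B) = P(A and B) + P(not-A and B) ≥ P(A and B)

so statement 1 (B on its own) can never be less probable than statement 2 (A and B together) – yet, on average, the analysts rated it lower.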

Eliezer’s talk continued with further examples of this “Conjunction fallacy” and other examples of persistent fallacies of human reasoning.  As summarised by New Atlantis blogger Ari N. Schulman:

People are bad at analyzing what is really a risk, particularly for things that are more long-term or not as immediately frightening, like stomach cancer versus homicide; people think the latter is a much bigger killer than it is.

This is particularly important with the risk of extinction, because it’s subject to all sorts of logical fallacies: the conjunction fallacy; scope insensitivity (it’s hard for us to fathom scale); availability (no one remembers an extinction event); imaginability (it’s hard for us to imagine future technology); and conformity (such as the bystander effect, where people are less likely to render help in a crowd).

Yudkowsky concludes by asking, why are we as a nation spending millions on football when we’re spending so little on all different sorts of existential threats? We are, he concludes, crazy.

It was a pessimistic presentation.  It was followed by a panel discussion featuring Eliezer, life extension researcher Aubrey de Grey, entrepreneur and venture capitalist Peter Thiel, and Singularity Institute president Michael Vassar.  One sub-current of the discussion was: given how irrational people tend to be as a whole, how can we get the public to pay attention to the important themes being addressed at this event?

The answers I heard were variants of “try harder”, “find ways to embarrass people”, and “find some well-liked popular figure who would become a Singularity champion”.  I was unconvinced. (Though the third of these ideas has some merit – as I’ll revisit at the end of this article.)

For a much more constructive approach, I recommend the ideas in the very fine book I’ve just finished reading: Influencer: the power to change anything.

No fewer than five people are named as co-authors: Kerry Patterson, Joseph Grenny, David Maxfield, Ron McMillan, and Al Switzler.  It’s a grand collaborative effort.

For a good idea of the scope of the book, here’s an extract from the related website, http://influencerbook.com:

When it comes to influence we stink. Consider these examples:

  • Companies spend more than $300 billion annually for training and less than 10 percent of what people are taught sticks.
  • Dieters spend $40 billion a year and 19 out of 20 lose nothing but their money.
  • Two out of three criminals are rearrested within three years.

If influence is the capacity to help ourselves and others change behavior, then we all want influence, but few know how to get it.

Influencer delivers a powerful new science of influence that draws from the skills of hundreds of successful change agents combined with more than five decades of the best social science research. The book delivers a coherent and portable model for changing behaviors—a model that anyone can learn and apply.

The key to successful influence lies in three powerful principles:

  • Identify a handful of high-leverage behaviors that lead to rapid and profound change.
  • Use personal and vicarious experience to change thoughts and actions.
  • Marshall multiple sources of influence to make change inevitable.

As I worked through chapter after chapter, I kept thinking “Aha…” to myself.  The material is backed up by extensive academic research by change specialists such as Albert Bandura and Brian Wansink.  There are also numerous references to successful real-life influence programs, such as the eradication of guinea worm disease in sub-Saharan Africa, the control of AIDS in Thailand, and the work of Mimi Silbert of Delancey Street with “substance abusers, ex-convicts, homeless and others who have hit bottom”.

The book starts by noting that we are, in effect, too often resigned to a state of helplessness, as expressed by the “acceptance clause” of Reinhold Niebuhr’s so-called “serenity prayer”:

God grant me the serenity
To accept the things I cannot change;
Courage to change the things I can;
And wisdom to know the difference

What we lack, the book says, is the skillset to be able to change more things.  It’s not a matter of exhorting people to “try harder”.  Nor is it a matter of becoming better at talking to people, to convince them of the need to change.  Instead, we need a better framework for how influence can be successful.

Part of the framework is to take the time to learn about the “handful of high-leverage behaviors” that, if changed, would have the biggest impact.  This is a matter of focusing – leaving out many possibilities in order to target behaviours with the greatest leverage.  Another part of the framework initially seems the opposite: it recommends that we prepare to use a large array of different influence methods (all with the same intended result).  These influence methods start by recognising the realities of human reasoning, and work with these realities, rather than seeking to drastically re-write them.

The framework describes six sources of influence, in a 2×3 matrix.  One set of three sources addresses motivation, and the other set of three addresses capability.  In each case, there are personal, social, and structural approaches (hence the 2×3).  The book has a separate chapter for each of these six sources.  Each chapter is full of good material.

  • For example, the section on personal motivation analyses the idea of “making the undesirable desirable”
  • The section on social motivation analyses “the positive power of peer pressure”
  • The section on structural motivation recognises the potential power of extrinsic rewards systems, but insists that they come third: you need to have the personal and social motivators in place first
  • Personal ability: new behaviour requires new skills, which need regular practice
  • Social ability: finding strength in numbers
  • Structural ability: change the environment: harness the invisible and pervasive power of environment to support new behaviour.

Rather than bemoaning the fact that making a story more specific messes up people’s abilities to calculate probabilities rationally, the book has several examples of how stories (especially soap operas broadcast in the third world) can have very powerful influence effects, in changing social behaviours for the better.  Listeners are able to personally identify with the characters in the stories, with good outcomes.

The section on social motivation revisits the famous “technology adoption” lifecycle curve, originally drawn by Everett Rogers:

This curve is famous inside the technology industry.  Like many others, I learned of it via the “Crossing the chasm” series of books by Geoffrey Moore (who, incidentally, is one of the keynote speakers on day 2 of the Symbian Exchange and Expo, on Oct 28th).  Moore draws the same curve, but with a large gap (“chasm”) in it, where numerous hi-tech companies fail:

However, the analysis of this curve in “Influencer” focused instead on the difference between “Innovators” and “Early adopters”.  The innovators may be the first to adopt a new technology – whether it be a new type of seed (as studied by Everett Rogers), a new hi-tech product (as studied by Geoffrey Moore), or an understanding of the importance of the Singularity.  But they are poor references as far as the remainder of the population is concerned.  They are probably perceived as dressing strangely, holding strange beliefs and customs, and generally not being “one of us”.  If they adopt something, it doesn’t increase the probability of anyone in the majority of the population being impressed; if anything, the mainstream majority are likely to be put off as a result.  It’s only when people who are seen as more representative of the mainstream adopt a product that this fact becomes influential to the wider population.

As Singularity enthusiasts reflect on how to gain wider influence over public discussion, they would do well to take to heart the lessons of “Influencer: the power to change anything”.

Footnote: recommended further reading:

Two other books I’ve read over the years made a similar impact on me, as regards their insight over influence:

Another two good books on how humans are “predictably irrational”:

4 October 2009

The Leakproof Singularity and Simulation

Filed under: simulation, Singularity, uploading — David Wood @ 11:18 am

One talk on day one (yesterday) of the 2009 Singularity Summit made me sit bolt upright in my seat, very glad to be listening to it – my mind excitedly turning over important new ideas.

With my apologies to the other speakers – who mainly covered material I’d heard on other occasions, or who mixed a few interesting points among weaker material (or who, in quite a few cases, were poor practitioners of PowerPoint and the mechanics of public speaking) – I have no hesitation in naming David Chalmers, Professor of Philosophy and Director of the Centre for Consciousness at the Australian National University, as the star speaker of the day.

I see that my assessment is shared by New Atlantis assistant editor Ari N. Schulman, in his review of day one, “One day closer to the Singularity“:

far and away the best talk of the day was from David Chalmers. He cut right to the core of the salient issues in determining whether the Singularity will happen

You can get a gist of the talk from Ari’s write-up of it.  I don’t think the slides are available online (yet), but here’s a summary of some of the content.

First, the talk brought a philosopher’s clarity to analysing the core argument for the inevitability of the technological singularity, as originally expressed in 1965 by British statistician IJ Good:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

Over the course of several slides, Chalmers broke down the underlying argument, defining and examining concepts such as

  • “AI” (human-level intelligence),
  • “AI+” (greater than human-level intelligence),
  • and “AI++” (much greater than human-level intelligence)

Along the way, he looked at what it is about intelligence that might cause intelligence itself to grow (a precursor to it “exploding”).  He considered four mechanisms for extensibly improving intelligence:

  • “direct programming” (which he said was “really hard”)
  • “brain emulation” (“not extendible”)
  • “learning” (“still hard”)
  • “simulated evolution” (“where my money is”).

Evolution is how intelligence has come about so far.  Evolution inside an improved, accelerated environment could be how intelligence goes far beyond its present capabilities.  In other words, a virtual reality (created and monitored by humans) could be where first AI+ and then AI++ takes place.
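
To give a feel for the shape of the “simulated evolution” route, here is a toy genetic algorithm.  Everything in it is an illustrative assumption – the “genomes” are just bit-strings and the “fitness” function is trivial – but the loop of variation, selection and reproduction is the one Chalmers is betting on, run here over laughably simple candidates rather than whole cognitive architectures inside a rich simulated world.

# A toy sketch of "simulated evolution": a population of candidate "agents"
# (here just bit-strings) is repeatedly selected, recombined and mutated
# against a fitness measure.

import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 40, 60, 100, 0.02

def fitness(genome):
    """Stand-in for 'measured intelligence': here, just the number of 1s."""
    return sum(genome)

def mutate(genome):
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    # Selection: keep the fitter half of the population as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]
    # Reproduction: fill the next generation with mutated offspring.
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

print(max(fitness(g) for g in population))  # climbs towards GENOME_LEN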

Not only is this the most plausible route to AI++, Chalmers argued, but it’s the safest route: a route by which the effects of the intelligence explosion can be controlled.  He introduced the concept of a “leakproof singularity”:

  • create AI in simulated worlds
  • no red pills (one of several references to the film “The Matrix”)
  • no external input
  • go slow

Being leakproof is essential to prevent the powerful super-intelligence created inside the simulation from breaking out and (most likely) wreaking havoc on our own world (as covered in the first talk of the day, “Shaping the Intelligence Explosion”, by Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence).  We need to be able to observe what is happening inside the simulation, but the simulated intelligences must not be able to discern our reactions to what they are doing.  Otherwise they could use their super-intelligence to manipulate us and persuade us (against our best interests) to let them out of the box.

To quote Chalmers,

“The key to controllable singularity is preventing information from leaking in”

Once super-intelligence has occurred within the simulation, what would we humans want to do about it?  Chalmers offered a range of choices, before selecting and defending “uploading” – we would want to insert enhanced versions of ourselves into this new universe.  Chalmers also reviewed the likelihood that the super-intelligences created could, one day, have sufficient ability to re-create those humans who had died before the singularity took place, but for whom sufficient records existed that would allow faithful reconstruction.

That’s powerful stuff (and there’s a lot more, which I’ve omitted, for now, for lack of time).  But as the talk proceeded, another set of powerful ideas constantly lurked in the background.  Our own universe may be exactly the kind of simulated “virtual-reality” creation that Chalmers was describing.

Further reading: For more online coverage of the idea of the leakproof singularity, see PopSci.com.  For a far-ranging science fiction exploration of similar ideas, I recommend Greg Egan’s book Permutation City.  See also David Chalmers’ paper “The Matrix as metaphysics”.

3 October 2009

Shaping the intelligence explosion

Filed under: Singularity — David Wood @ 2:38 pm

Here’s a quick summary of the central messages from Anna Salamon, Research Fellow at the Singularity Institute for Artificial Intelligence, in her talk “Shaping the intelligence explosion”.  She makes four key claims.  Note: these claims don’t depend on specific details of technology.

1. Intelligence can radically transform the world

Intelligence is like leverage.  Think of Archimedes moving the whole world, with a sufficiently large lever.

The smarter a given agent is, the more scope it has to find ways to achieve its goal.

Imagine intelligence that is as far beyond human-scale intelligence as human-scale intelligence is beyond that of a goldfish.  (“A goldfish at the opera” is a metaphor for minds that can’t understand certain things.) How much more change could be achieved with that intelligence?

Quote from Michael Vassar: “Humans are the stupidest a system could be, and still be generally intelligent”.

2. An intelligence explosion may be sudden

Different processes have different characteristic timescales.  Many things can happen on timescales very much faster than humans are used to.

Human culture has already brought about considerable change acceleration compared to biological evolution.

Silicon emulations of human brains could run much faster than the biological brain itself.  Adding extra hardware could multiply the effect again.  “Instant intelligence: just add hardware”.

The key question is: how fast could super-human AI arise?  That depends on the details of how brain emulation will work.  But we can’t rule out the possibility of a period of super-fast evolution of super-human AIs.

3. An uncontrolled intelligence explosion would kill us, and destroy practically everything we care about

Smart agencies can rearrange the environment to meet their goals.

Most rearrangements of the human environment would kill us.

Would super AIs want to keep humans around – eg as trading partners?  Alas, an AI could probably create more useful trading partners by rearranging our parts!

Values can vary starkly across species.  To a dung beetle, dung isn’t yukky at all.

AIs are likely to want to scrap our “spaghetti code” of culture and create something much better in its place.

4. A controlled intelligence explosion could save us.  It’s difficult, but it’s worth the effort.

A well engineered AI could be permanently stable in its goals.

Be careful what you build an optimizer for – remember the parable of the sorcerer’s apprentice.

If we get the design right, the intelligence explosion could aid human goals instead of destroying them.

It’s a tall order!

20 August 2009

Registering for the Singularity

Filed under: Singularity — David Wood @ 10:28 pm

Today (20th August 2009) is the last day of the 20% discounted “early bird” price for registering for the 2009 Singularity Summit.

The summit is taking place at the Y on 92nd Street, New York, on the weekend of 3-4 October.  I’ve been unsure whether to attend: my work is very busy these days, and I’ve also got some important family commitments at around the same date.

However, some things are more important even than work.  There’s an argument that the Singularity could become the most important event in the near-to-medium term future.

I have a fair amount of sympathy for what Roko Mijic wrote recently (only partly tongue-in-cheek):

Save the world by going to the Singularity Summit

Sometimes, you have to do unpleasant things in order to save the world, like stopping washing in order to save water or swapping your sports car for a Prius. But today, good readers, I give you an opportunity to do something that will contribute much more to the total expected utility in our common future light-cone than anything you have ever done before, whilst at the same time being a nice little holiday for your good selves, and an excellent opportunity to network with the movers and shakers of the world.

Yes, I am talking about the Singularity Summit 2009. Just look at the list of speakers. You would probably want to go just to listen to 10% of them. The summit will be held at the historic 92nd Street Y in New York City on October 3-4th.

Now, why will you attending this summit actually be even better for the world than you eating organic, not showering and driving a milkfloat for the rest of your life? Put simply, the possibility of smarter-than-human intelligence puts the entire planet – solar system – future light-cone at risk, and the singularity summits are the best way to get that message out. Adding more people to the summits generates prestige and interest, and this increases the rate at which something gets done about the problem.

I attended last year’s summit – and wrote up my impressions in my blog shortly afterwards.  This year’s summit is longer, and has an even more attractive list of top-notch speakers.

I’ve made up my mind.  I’ve booked a couple of days holiday from work, and have registered myself for the event.

Footnote: While browsing the summit site, I noticed the reading list, with its five recommended “introductory books”.  I’ve read and deeply appreciated three of them already, but the two others on the list are new to me.  Clicking through to Amazon.com for each of these last two books, I find myself in each case to be extremely interested by the book description:

If you can judge the quality of a conference (in part) by the quality of the recommended reading it highlights, this is another sign that the summit could be remarkable.

26 October 2008

The Singularity will go mainstream

Filed under: AGI, brain simulation, cryonics, Moore's Law, robots, Singularity — David Wood @ 1:49 pm

The concept of the coming technological singularity is going to enter mainstream discourse, and won’t go away. It will stop being something that can be dismissed as freaky or outlandish – something that is of interest only to marginal types and radical thinkers. Instead, it’s going to become something that every serious discussion of the future is going to have to contemplate. Writing a long-term business plan – or a long-term political agenda – without covering the singularity as one of the key topics, is increasingly going to become a sign of incompetence. We can imagine the responses, just a few years from now: “Your plan lacks a section on how the onset of the singularity is going to affect the take-up of your product. So I can’t take this proposal seriously”. And: “You’ve analysed five trends that will impact the future of our company, but you haven’t included the singularity – so everything else you say is suspect.”

In short, that’s the main realisation I reached by attending the Singularity Summit 2008 yesterday, in the Montgomery Theater in San Jose. As the day progressed, the evidence mounted up that the arguments in favour of the singularity will be increasingly persuasive, to wider and wider groups of people. Whether or not the singularity will actually happen is a slightly different question, but it’s no longer going to be possible to dismiss the concept of the singularity as irrelevant or implausible.

To back up my assertion, here are some of the highlights of what was a very full day:

Intel’s CTO and Corporate VP Justin Rattner spoke about “Countdown to Singularity: accelerating the pace of technological innovation at Intel”. He described a series of technological breakthroughs that would be likely to keep Moore’s Law operational until at least 2020, and he listed ideas for how it could be extended even beyond that. Rattner clearly has a deep understanding of the technology of semiconductors.

Dharmendra Modha, the manager of IBM’s cognitive computing lab at Almaden, explained how his lab had already utilised IBM super-computers to simulate an entire rat brain, with the simulation running at one tenth of real-time speed. He explained his reasons for expecting that his lab should be able to simulate an entire human brain, running at full speed, by 2018. This was possible as a result of the confluence of “three hard disruptive trends”:

  1. Neuroscience has matured
  2. Supercomputing meets the brain
  3. Nanotechnology meets the brain.

Cynthia Breazeal, Associate Professor of Media Arts and Sciences, MIT, drew spontaneous applause from the audience part-way through her talk, by showing a video of one of her socially responsive robots, Leonardo. The video showed Leonardo acting on beliefs about what various humans themselves believed (including beliefs that Leonardo could deduce were false). As Breazeal explained:

  • Up till recently, robotics has been about robots interacting with things (such as helping to manufacture cars)
  • In her work, robotics is about robots interacting with people in order to do things. Because humans are profoundly social, these robots will also have to be profoundly social – they are being designed to relate to humans in psychological terms. Hence the expressions of emotion on Leonardo’s face (and the other body language).

Marshall Brain, founder of “How Stuff Works”, also spoke about robots, and the trend for them to take over work tasks previously done by humans: McDonald’s waitresses, Wal-Mart shop assistants, vehicle drivers, construction workers, teachers…

James Miller, Associate Professor of Economics, Smith College, explicitly addressed the topic of how increasing belief in the likelihood of an oncoming singularity would change people’s investment decisions. Once people realise that, within (say) 20-30 years, the world could be transformed into something akin to paradise, with much greater lifespans and with abundant opportunities for extremely rich experiences, many will take much greater care than before to seek to live to reach that event. Interest in cryonics is likely to boom – since people can reason their bodies will only need to be vitrified for a short period of time, rather than having to trust their descendants to look after them for unknown hundreds of years. People will shun dangerous activities. They’ll also avoid locking money into long-term investments. And they’ll abstain from lengthy training courses (for example, to master a foreign language) if they believe that technology will shortly render as irrelevant all the sweat of that arduous learning.

Not every speaker was optimistic. Well-known author and science journalist John Horgan gave examples of where the progress of science and technology has been, not exponential, but flat:

  • nuclear fusion
  • ending infectious diseases
  • Richard Nixon’s “war on cancer”
  • gene therapy treatments
  • treating mental illness.

Horgan chided advocates of the singularity for their use of “rhetoric that is more appropriate to religion than science” – thereby risking damaging the standing of science at a time when science needs as much public support as it can get.

Ray Kurzweil, author of “The Singularity is Near”, responded to this by agreeing that not every technology progresses exponentially. However, those that become information sciences do experience significant growth. As medicine and health increasingly become digital information sciences, they are experiencing the same effect. Although in the past I’ve thought that Kurzweil sometimes overstates his case, on this occasion I thought he spoke with clarity and restraint, and with good evidence to back up his claims. He also presented updated versions of the graphs from his book. In the book, these graphs tended to stop around 2002. The slides Kurzweil showed at the summit continued up to 2007. It does appear that the rate of progress with information sciences is continuing to accelerate.

Earlier in the day, science fiction author and former maths and computing science professor Vernor Vinge gave his own explanation for this continuing progress:

Around the world, in many fields of industry, there are hundreds of thousands of people who are bringing the singularity closer, through the improvements they’re bringing about in their own fields of research – such as enhanced human-computer interfaces. They mainly don’t realise they are advancing the singularity – they’re not working to an agreed overriding vision for their work. Instead, they’re doing what they’re doing because of the enormous incremental economic plus of their work.

Under questioning by CNBC editor and reporter Bob Pisani, Vinge said that he sticks with the forecast he made many years ago, that the singularity would (“barring major human disasters”) happen by 2030. Vinge also noted that rapidly improving technology made the future very hard to predict with any certainty. “Classic trendline analysis is seriously doomed.” Planning should therefore focus on scenario evaluation rather than trend lines. Perhaps unsurprisingly, Vinge suggested that more forecasters should read science fiction, where scenarios can be developed and explored. (Since I’m midway through reading and enjoying Vinge’s own most recent novel, “Rainbows End” – set in 2025 – I agree!)

Director of Research at the Singularity Institute, Ben Goertzel, described a staircase of potential applications for the “OpenCog” system of “Artificial General Intelligence” he has been developing with co-workers (partially funded by Google, via the Google Summer of Code):

  • Teaching virtual dogs to dance
  • Teaching virtual parrots to talk
  • Nurturing virtual babies
  • Training virtual scientists that can read vast swathes of academic papers on your behalf
  • And more…

Founder and CSO of Innerspace Foundation, Pete Estep, gave perhaps one of the most thought-provoking presentations. The goal of Innerspace is, in short, to improve brain functioning. In more detail, “To establish bi-directional communication between the mind and external storage devices.” Quoting from the FAQ on the Innerspace site:

The IF [Innerspace Foundation] is dedicated to the improvement of human mind and memory. Even when the brain operates at peak performance learning is slow and arduous, and memory is limited and faulty. Unfortunately, other of the brain’s important functions are similarly challenged in our complex modern world. As we age, these already limited abilities and faculties erode and fail. The IF supports and accelerates basic and applied research and development for improvements in these areas. The long-term goal of the foundation is to establish relatively seamless two-way communication between people and external devices possessing clear data storage and computational advantages over the human brain.

Estep explained that he was a singularity agnostic: “it’s beyond my intellectual powers to decide if a singularity within 20 years is feasible”. However, he emphasised that it is evident to him that “the singularity might be near”. And this changes everything. Throughout history, and extending round the world even today, “there have been too many baseless fantasies and unreasonable rationalisations about the desirability of death”. The probable imminence of the singularity will help people to “escape” from these mind-binds – and to take a more vigorous and proactive stance towards planning and actually building desirable new technology. The singularity that Estep desires is one, not of super-powerful machine intelligence, but one of “AI+BCI: AI combined with a brain-computer interface”. This echoed words from robotics pioneer Hans Moravec that Vernor Vinge had reported earlier in the day:

“It’s not a singularity if you are riding the curve. And I intend to ride the curve.”

On the question of how to proactively improve the chances for beneficial technological development, Peter Diamandis spoke outstandingly well. He’s the founder of the X-Prize Foundation. I confess I hadn’t previously realised anything like the scale and the accomplishment of this Foundation. It was an eye-opener – as, indeed, was the whole day.

30 August 2008

Anticipating the singularity

Filed under: Moore's Law, Singularity — David Wood @ 10:05 am

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

The first time I read these words, a chill went down my spine. They were written in 1965 by IJ Good, a British statistician who had studied mathematics at Cambridge University pre-war, worked with Alan Turing and others in the highly secret code-breaking labs at Bletchley Park, and was involved in the creation of the Colossus computer (“the world’s first programmable, digital, electronic, computing device”).

The point where computers become better than humans at generating new computers – or (not quite the same thing) the point where AI becomes better than humans at generating new AI – is nowadays often called the singularity (or, sometimes, “the Technological Singularity”). To my mind, it’s a hugely important topic.
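
To see why that point is so special, consider a toy numerical model (my own illustration, not anything from Good’s paper): suppose each generation of machine improves its successor by a factor that grows with the designer’s current capability, and compare that with a fixed percentage gain per generation.

# A toy numerical model - an illustrative assumption, not anything from
# Good's paper - of why "machines designing better machines" behaves very
# differently from ordinary steady progress.  In the "recursive" series the
# gain per generation grows with the current capability; in the "steady"
# series the gain per generation is fixed.

def recursive_improvement(generations, start=1.0, k=0.1):
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * (1.0 + k * levels[-1]))
    return levels

def steady_exponential(generations, start=1.0, factor=1.1):
    levels = [start]
    for _ in range(generations):
        levels.append(levels[-1] * factor)
    return levels

for name, series in [("recursive", recursive_improvement(20)),
                     ("steady   ", steady_exponential(20))]:
    print(name, [round(x, 1) for x in series[::10]])

# Both series start with the same 10% first-step gain, and for a while they
# look broadly similar; then the recursive series blows up to astronomical
# values by step 20, versus about 6.7x for steady growth - a crude picture
# of Good's "intelligence explosion".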

The name “Singularity” was proposed by maths professor and science fiction author Vernor Vinge, writing in 1993:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended…

“When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale…

“From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control…

“I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown…”

If Vinge’s prediction is confirmed, the Singularity will happen within 30 years of 1993, namely by 2023. (He actually says, in his paper, “I’ll be surprised if this event occurs before 2005 or after 2030”.)

Of course, it’s notoriously hard to predict timescales for future technology. Some things turn out to take a lot longer than expected. AI is a prime example. Progress with AI has frequently turned out to be disappointing.

But not all technology predictions turn out bad. The best technology prediction of all time is probably that by Intel co-founder Gordon Moore. Coincidentally writing in 1965 (like IJ Good mentioned above), Moore noted:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer…”

For more than forty years, Moore’s Law has held roughly true – with (as revised by Moore himself) the doubling period taking around 24 months instead of 12 months. And it is this persistent growth in computing power that leads other writers – most famously, Ray Kurzweil – to continue to predict the reasonably imminent onset of the singularity. In his 2005 book “The Singularity Is Near: When Humans Transcend Biology”, Kurzweil picks the date 2045.
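
As a rough illustration of what a 24-month doubling period implies if it simply continued, here is the arithmetic over the intervals mentioned in this post.  The figures are illustrative extrapolations only, not predictions:

# Illustrative arithmetic only: what a doubling every 24 months implies
# over the intervals mentioned in this post, if it simply continued.

DOUBLING_PERIOD_YEARS = 2.0

def growth_factor(years):
    """How many times more capable hardware becomes after `years`."""
    return 2 ** (years / DOUBLING_PERIOD_YEARS)

for start, end in [(1965, 2008), (2008, 2023), (2008, 2045)]:
    print(f"{start}-{end}: ~{growth_factor(end - start):,.0f}x")

# 2008-2045 alone implies roughly 2^18.5 - hundreds of thousands of times
# more raw computing power - the kind of headroom Kurzweil's prediction
# relies on.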

Intel’s present-day CTO, Justin Rattner, reviewed some of Kurzweil’s ideas in his keynote on the future of technology at the Intel Developer Forum in San Francisco on the 21st of August. The presentation was called “Crossing the chasm between humans and machines”.

To check what Justin said, you can view the official Intel video available here. There’s also a brief slide-by-slide commentary at the Singularity Hub site, as well as lots of other web coverage (eg here and here). Justin said that the singularity “might be only a few decades away”, and his talk includes examples of the technological breakthroughs that will plausibly be involved in this grander breakthrough.

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that the software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, faster hardware will just delay the point where your algorithms fail.
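
To put rough numbers on that point, here is a back-of-the-envelope sketch: how much larger a problem could a hypothetical 1000-times-faster machine handle in the same wall-clock time, under a few assumed algorithmic complexities?  All figures here are illustrative assumptions, not measurements.

# A rough illustration, with made-up numbers, of why faster hardware cannot
# rescue a badly-scaling algorithm: how much larger a problem can a
# (hypothetical) 1000x faster machine handle in the same wall-clock time?

import math

SPEEDUP = 1000.0                  # assumed hardware improvement factor

def size_headroom(cost, n=50):
    """Search for how much the problem size can grow before the faster
    machine's time budget (SPEEDUP x the old cost at size n) is used up."""
    budget = SPEEDUP * cost(n)
    m = float(n)
    while cost(m * 1.01) <= budget:
        m *= 1.01
    return m / n

for name, cost in [("n log n", lambda n: n * math.log(n)),
                   ("n^2    ", lambda n: n ** 2),
                   ("2^n    ", lambda n: 2.0 ** n)]:
    print(f"{name}: problem size can grow roughly {size_headroom(cost):.1f}x")

# Roughly 400x headroom for n log n, about 32x for n^2, and only about 1.2x
# for 2^n: with poor scaling, better hardware merely postpones the wall.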

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”

Another researcher who puts more emphasis on the potential breakthrough capabilities of the right kind of software, rather than hardware, is Ben Goertzel. Two years ago, he gave a talk entitled “Ten years to the Singularity if we really try.” One year ago, he gave an updated version, “Nine years to the Singularity if we really really try”. Ben suggests that the best place for new AIs to be developed is inside virtual worlds (such as Second Life). He might be right. It wouldn’t be the first time that significant software breakthroughs happened in arenas that mainstream society regards as peripheral or even objectionable.

Even bigger than the question of the plausible timescale of a future technological singularity, is the question of whether we can influence the outcome, to be positive for humanity rather than a disaster. That will be a key topic of the Singularity Summit 2008, which will be held in San Jose on the last Saturday of October.

The speakers at the summit include five of the people I’ve mentioned above:

(And there are 16 other named speakers – including many that I view as truly fascinating thinkers.)

The publicity material for the Singularity Summit 2008 describes the event as follows:

“The Singularity Summit gathers the smartest people around to explore the biggest ideas of our time. Learn where humanity is headed, meet the people leading the way, and leave inspired to create a better world.”

That’s a big claim, but it might just be right.
