13 January 2010

AI: why, and when

Filed under: AGI, usability — David Wood @ 4:26 pm

Here’s a good question, raised by Paul Beardow:

One question that always rattles around in my mind is “why are we trying to recreate the human mind anyway?” We have billions of those already…

You can build something that appears to be human, but what is the point of that? Why chase a goal that doesn’t actually provide us with more than we have already?

Paul also says,

What I don’t want is AI in products so that they have their own personality, but a better understanding of my own wishes and desires in how that product should interact with me…

I personally also really don’t think that logic by itself can lead to a system that can evolve human-like imagination, feelings or personality, nor that the human mind can be reduced to being a machine. It has elementary parts, but the constant rebuilding and evolving of information doesn’t really follow any logical rules that can be programmed. The structure of the brain depends on what happens to us during the day and how we interpret it according to the situation. That defies logic most of the time and is constantly evolving and changing.

My answer: there are at least six reasons why people are pursuing the goal of human-like AI.

1. Financial savings in automated systems

We’re already used to encountering automated service systems when using the phone (eg to book a cinema ticket: “I think you’re calling about Kingston upon Thames – say Yes or No”) or when navigating a web form or other user interface.  These systems provoke a mixture of feelings in the people who use them.  I often become frustrated, thinking it would be faster to speak directly to a “real human being”.  But on other occasions the automation works surprisingly well.

To widen the applicability of such systems into more open-ended environments will require engineering much more human-style “common sense” into them.  The research to accomplish this may cost lots of money, but once it’s working, it could enable considerable cost savings in service provision, as real human beings can be replaced in a system by smart pieces of silicon.

2. Improving game play

A related motivation is as follows: game designers want to program human-level intelligence into the characters in games, so that these artificial entities manifest many of the characteristics of real human participants.

By the way: electronic games are big money!  As the description of tonight’s RSA meeting “Why games are the 21st century’s most serious business” puts it:

Why should we be taking video games more seriously?

  • In 2008 Nintendo overtook Google to become the world’s most profitable company per employee.
  • The South Korean government will invest $200 billion into its video games industry over the next 4 years.
  • The trading of virtual goods within games is a global industry worth over $10 billion a year.
  • Gaming boasts the world’s fastest-growing advertising market.

3. Improved user experience with complex applications

As well as reducing cost, human-level AI can in principle improve the experience of users while interacting with complex applications.

Rather than users thinking, “No you stupid machine, why don’t you realise what I’m trying to do…”, they will be pleasantly surprised: “Ah yes, that was in fact what I was trying to accomplish – how did you manage to figure that out?”

It’s as Paul says:

What I … want … in products [is]… a better understanding of my own wishes and desires in how that product should interact with me

These are products with (let’s say it) much more “intelligence” than at present.  They observe what is happening, and can infer motivation.  I call this AI.

4. A test of scientific models of the human mind

A different kind of motivation for studying human-level AI is to find ways of testing our understanding of the human mind.

For example, I think that creativity can be achieved by machines, following logical rules.  (The basic rules are: generate lots of ideas, by whatever means, and then choose the ideas which have interesting consequences.)  But it is good to test this.  So, computers can be programmed to mimic the possible thought patterns of great composers, and we can decide whether the output is sufficiently “creative”.

(There’s already quite a lot of research into this.  For one starting point, see the EE Times article “Composer harnesses artificial intelligence to create music“.)
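The two-step recipe above (generate lots of ideas by whatever means, then keep the ones with interesting consequences) can be sketched as a simple generate-and-test loop. The melody representation and the “interestingness” heuristic below are purely illustrative assumptions, not taken from any of the research mentioned; a real system would use a far richer scoring function.

```python
import random

SCALE = list(range(8))  # degrees of a one-octave scale, a toy representation

def generate_melody(length=8):
    """Step 1: generate a candidate idea, here a random list of scale degrees."""
    return [random.choice(SCALE) for _ in range(length)]

def interestingness(melody):
    """Step 2's filter: a stand-in heuristic that rewards stepwise motion
    and penalises repeated notes."""
    steps = [abs(a - b) for a, b in zip(melody, melody[1:])]
    smoothness = sum(1 for s in steps if 1 <= s <= 2)
    repeats = sum(1 for s in steps if s == 0)
    return smoothness - 2 * repeats

def create(n_candidates=1000):
    """Generate many ideas, then select the one with the best score."""
    candidates = [generate_melody() for _ in range(n_candidates)]
    return max(candidates, key=interestingness)

if __name__ == "__main__":
    best = create()
    print(best, interestingness(best))
```

Whether the output of such a loop counts as “creative” is exactly the question the experiment is meant to test; the loop itself is entirely logical and programmable.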

Similarly, it will be fascinating to hear the views of human-level AIs about (for example) the “Top 5 Unsolved Brain Mysteries“.

5. To find answers to really tough, important questions

The next motivation concerns the desire to create AIs with considerably greater than human-level intelligence.  Assuming that human-level AI is a point en route to that destination, this is therefore an indirect motivation for creating human-level AI.

The motivation here is to ask superAIs for help with really tough, difficult questions, such as:

  • What are the causes – and the cures – for different diseases?
  • Are there safe geoengineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What is the resolution of the conflict between theories of gravity and theories of all the other elementary forces?

6. To find ways of extending human life and expanding human experience

If the above answers aren’t sufficient, here’s one more, which attracts at least some researchers to the topic.

If some theories of AI are true, it might be possible to copy human awareness and consciousness from residence in a biological brain into residence inside silicon (or some other new computing substrate).  If so, this may open new options for continued human consciousness, without having to depend on the frailty of a decaying human body.

This may appear a very slender basis for hope for significantly longer human lifespan, but it can be argued that all the other bases for such hope are equally slender, if not even less plausible.

OK, that’s enough answers for “why”.  But what about the question “when”?

In closing, let me quickly respond to a comment by Martin Budden:

I’m not saying that I don’t believe that there will be advances in AI. On the contrary I believe, in the course of time, there will be real and significant advances in “general AI”. I just don’t believe that these advances will be made in the next decade.

What I’d like, at this point, is to be able to indicate some kind of provisional roadmap (also known as “work breakdown”) for when stepping stones of progress towards AGI might happen.

Without such a roadmap, it’s too difficult to decide when larger steps of progress are likely.  It’s just a matter of different people appearing to have different intuitions.

To be clear, discussions of Moore’s Law aren’t sufficient to answer this question.  Progress with the raw power of hardware is one thing, but what we need here is an estimate of progress with software.

Sadly, I’m not aware of any such breakdown.  If anyone knows one, please speak up!

Footnote: I guess the best place to find such a roadmap will be at the forthcoming “Third Conference on Artificial General Intelligence” being held in Lugano, Switzerland, on 5-8 March this year.


  1. David,

    Here’s a brief outline of why I don’t think we’ll see significant advances in AI in the next decade. My reasons are not a matter of “gut feel” or intuition, but are based on observation of how technologies have advanced in the past, and also on my direct experiences at Psion and Symbian.

    The advance of technologies is mapped by a sequence of events. Typically those events include:
    i) significant theoretical discovery or technological breakthrough (this might be called the conception event)
    ii) first practical implementation of (i) (this might be called the birth event)
    iii) formation of company to exploit (i) & (ii)
    iv) commercial implementation of technology by company
    v) widespread adoption of technology

    These events are not only significant in themselves, but they also serve to attract people to work in the field and also to attract investment.

    This pattern has been followed by technologies as different as the steam engine, the automobile, microelectronics and biotechnology. For microelectronics we have had:

    1900 quantum hypothesis by Max Planck
    1925 Transistor patented
    1947 Transistor first demonstrated
    1956 Formation of Fairchild Semiconductor
    1959 Integrated circuit first demonstrated
    1971 First microprocessor
    1981 IBM PC introduced

    For biotechnology we have had:

    1865 publication of Experiments on Plant Hybridization by Gregor Mendel
    1930s convergence of biochemistry and genetics into molecular biology
    1953 discovery of the structure of DNA
    1961 cracking of the genetic code
    1972 invention of recombinant DNA
    1976 Formation of Genentech
    1982 synthetic human insulin approved by FDA

    One might say that the conception event for microelectronics was the demonstration of the transistor in 1947 and the birth event was the first integrated circuit in 1959. For biotechnology, the conception event was the discovery of the structure of DNA and the birth event the invention of recombinant DNA. Both these conception events were preceded by significant basic research, interest, and developments in the subject area. This is generally true: breakthroughs rarely occur ‘out of the blue’; they are built on what precedes them.

    Where are we with AI? Have we even had a conception event? I would argue not. The nearest thing I can think of is perhaps when Deep Blue beat Kasparov at chess in 1997. But as I said, I don’t really think chess algorithms count as AI, and there has been no birth event or commercialization to follow this event. There was no big attraction of researchers into the field after this event. I challenge you to draw an AI timeline that suggests that something important is about to happen.

    If we accept that AI is pre-conception, then a significant advance in AI in the next decade would be a conception event in the next decade. Now admittedly the timing of conception events is harder to predict than that of birth events or commercialization, and conception events can come out of left field. But in fact there is normally quite a bit of indication that we are on the road to a conception event. Neither the 1947 demonstration of the transistor nor the 1953 discovery of the structure of DNA was a surprise.

    I see nothing to indicate that a breakthrough is imminent in AI. Indeed the fact that two technophiles such as yourself and myself are unaware of an AI roadmap (or work breakdown) is further evidence that such a breakthrough is not going to happen in the near future – breakthroughs tend to be preceded by an increasing awareness and excitement in the area that is, if not visible to the general public, certainly visible to the scientific and technical community that follows the subject area.

    Comment by Martin Budden — 14 January 2010 @ 6:45 pm

  2. I presume you guys have seen:


    Comment by Tim Tyler — 15 January 2010 @ 12:06 am

    • Thanks Tim,

      That’s an interesting link. It’s a plan for a roadmap, rather than a roadmap, but at least it’s a start.

      Many of the links are red, that is, the target pages still need to be written; and many of the pages that have been written have only very minimal content. So there’s a lot more work needed. I’ll take a closer look.

      In some ways, the most interesting content is the picture on this page highlighting the topic of “semantic search” (“Web 3.0”) as a key step forwards anticipated for 2010-2020.

      Comment by David Wood — 15 January 2010 @ 4:25 pm
