Here’s a good question, raised by Paul Beardow:
One question that always rattles around in my mind is “why are we trying to recreate the human mind anyway?” We have billions of those already…
You can build something that appears to be human, but what is the point of that? Why chase a goal that doesn’t actually provide us with more than we have already?
Paul also says,
What I don’t want is AI in products so that they have their own personality, but a better understanding of my own wishes and desires in how that product should interact with me…
I personally also really don’t think that logic by itself can lead to a system that can evolve human-like imagination, feelings or personality, nor that the human mind can be reduced to being a machine. It has elementary parts, but the constant rebuilding and evolving of information doesn’t really follow any logical rules that can be programmed. The structure of the brain depends on what happens to us during the day and how we interpret it according to the situation. That defies logic most of the time and is constantly evolving and changing.
My answer: there are at least six reasons why people are pursuing the goal of human-like AI.
1. Financial savings in automated systems
We’re already used to encountering automated service systems when using the phone (e.g. to book a cinema ticket: “I think you’re calling about Kingston upon Thames – say Yes or No”) or when navigating a web form or other user interface. These systems provoke a mixture of feelings in the people who use them. I often become frustrated, thinking it would be faster to speak directly to a “real human being”. But on other occasions the automation works surprisingly well.
Widening the applicability of such systems into more open-ended environments will require engineering much more human-style “common sense” into them. The research to accomplish this may cost lots of money, but once it’s working, it could enable considerable cost savings in service provision, as real human beings are replaced in a system by smart pieces of silicon.
2. Improving game play
A related motivation is as follows: games designers want to program human-level intelligence into characters in games, so that these artificial entities manifest many of the characteristics of real human participants.
By the way: electronic games are big money! As the description of tonight’s RSA meeting “Why games are the 21st century’s most serious business” puts it:
Why should we be taking video games more seriously?
- In 2008 Nintendo overtook Google to become the world’s most profitable company per employee.
- The South Korean government will invest $200 billion into its video games industry over the next 4 years.
- The trading of virtual goods within games is a global industry worth over $10 billion a year.
- Gaming boasts the world’s fastest-growing advertising market.
3. Improved user experience with complex applications
As well as reducing cost, human-level AI can in principle improve the experience of users while interacting with complex applications.
Rather than users thinking, “No you stupid machine, why don’t you realise what I’m trying to do…”, they will be pleasantly surprised: “Ah yes, that was in fact what I was trying to accomplish – how did you manage to figure that out?”
It’s as Paul says:
What I … want … in products [is]… a better understanding of my own wishes and desires in how that product should interact with me
These are products with (let’s say it) much more “intelligence” than at present. They observe what is happening, and can infer motivation. I call this AI.
4. A test of scientific models of the human mind
A different kind of motivation for studying human-level AI is to find ways of testing our understanding of the human mind.
For example, I think that creativity can be achieved by machines, following logical rules. (The basic rules are: generate lots of ideas, by whatever means, and then choose the ideas which have interesting consequences.) But it is good to test this. So, computers can be programmed to mimic the possible thought patterns of great composers, and we can decide whether the output is sufficiently “creative”.
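The basic rules mentioned above amount to a generate-and-test loop. As a toy illustration only (the melody representation and the “interestingness” heuristic below are my own hypothetical stand-ins, not anything from the research cited), here is a minimal sketch of that loop:

```python
import random

NOTES = ["C", "D", "E", "F", "G", "A", "B"]

def generate_melody(length=8):
    """Generate an idea 'by whatever means' -- here, pure chance."""
    return [random.choice(NOTES) for _ in range(length)]

def interest_score(melody):
    """Hypothetical stand-in for judging 'interesting consequences':
    reward variety of notes, penalise immediate repetition."""
    variety = len(set(melody))
    repeats = sum(1 for a, b in zip(melody, melody[1:]) if a == b)
    return variety - repeats

def generate_and_test(candidates=1000, keep=3):
    """Generate lots of ideas, then keep the most 'interesting' ones."""
    ideas = [generate_melody() for _ in range(candidates)]
    return sorted(ideas, key=interest_score, reverse=True)[:keep]

if __name__ == "__main__":
    for melody in generate_and_test():
        print(" ".join(melody), "score:", interest_score(melody))
```

Real systems in computational creativity use far richer generators and evaluators, of course; the point of the sketch is only that both halves of the recipe – generation and selection – can be expressed as logical rules.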
(There’s already quite a lot of research into this. For one starting point, see the EE Times article “Composer harnesses artificial intelligence to create music”.)
Similarly, it will be fascinating to hear the views of human-level AIs about (for example) the “Top 5 Unsolved Brain Mysteries”.
5. To find answers to really tough, important questions
The next motivation concerns the desire to create AIs with considerably greater than human-level intelligence. Assuming that human-level AI is a point en route to that next destination, it’s therefore an indirect motivation for creating human-level AI.
The motivation here is to ask superAIs for help with really tough questions, such as:
- What are the causes – and the cures – for different diseases?
- Are there safe geoengineering methods that will head off the threat of global warming, without nasty side effects?
- What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
- What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
- What is the resolution of the conflict between theories of gravity and theories of all the other elementary forces?
6. To find ways of extending human life and expanding human experience
If the above answers aren’t sufficient, here’s one more, which attracts at least some researchers to the topic.
If some theories of AI are true, it might be possible to copy human awareness and consciousness from residence in a biological brain into residence inside silicon (or some other new computing substrate). If so, this may open new options for continued human consciousness, without having to depend on the frailty of a decaying human body.
This may appear a very slender basis for hope for significantly longer human lifespan, but it can be argued that all the other bases for such hope are equally slender, if not even less plausible.
OK, that’s enough answers for “why”. But what about the question “when”?
In closing, let me quickly respond to a comment by Martin Budden:
I’m not saying that I don’t believe that there will be advances in AI. On the contrary I believe, in the course of time, there will be real and significant advances in “general AI”. I just don’t believe that these advances will be made in the next decade.
What I’d like, at this point, is to be able to indicate some kind of provisional roadmap (also known as “work breakdown”) for when stepping stones of progress towards AGI might happen.
Without such a roadmap, it’s too difficult to decide when larger steps of progress are likely. It’s just a matter of different people appearing to have different intuitions.
To be clear, discussions of Moore’s Law aren’t sufficient to answer this question. Progress with the raw power of hardware is one thing, but what we need here is an estimate of progress with software.
Sadly, I’m not aware of any such breakdown. If anyone knows one, please speak up!
Footnote: I guess the best place to find such a roadmap will be at the forthcoming “Third Conference on Artificial General Intelligence” being held in Lugano, Switzerland, on 5-8 March this year.