Artificial Intelligence (AI) already does a lot to help me in my life:
- The real-time route calculation (and re-calculation) capabilities of my TomTom satnav system are extremely handy;
- The automated language translation functionality inside Google web-search, whilst far from perfect, often allows me to understand at least the gist of webpages written in languages other than English;
- The intelligent recommendation engine of Amazon frequently brings books to my attention that I am glad to investigate further.
On the other hand, the field of general AI has failed to progress as quickly as some of its supporters had hoped over the years. The Wikipedia article on the History of AI lists some striking examples of significant over-optimism among leading AI researchers:
- 1958, H. A. Simon and Allen Newell: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.”
- 1965, H. A. Simon: “machines will be capable, within twenty years, of doing any work a man can do.”
- 1967, Marvin Minsky: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
- 1970, Marvin Minsky (in Life Magazine): “In from three to eight years we will have a machine with the general intelligence of an average human being.”
Prospects for fast progress with general AI remain controversial. As we gather more and more silicon power into smartphones and other computers, will this mean these devices become more and more intelligent? Or will they simply be fast rather than generally intelligent?
In this context, one interesting line of analysis is to consider a separate but related question: to what extent will it be possible to create a silicon emulation of the brain itself (rather than to focus on algorithms for intelligence)?
My friend Anders Sandberg, Neuroethics researcher at the Future of Humanity Institute, Oxford University, will be addressing this question in a presentation tomorrow afternoon (Saturday 22nd November) in Central London. The presentation is entitled “Emulating brains: silicon dreams or the next big thing?”
Anders describes his talk as follows:
The idea of creating a faithful copy of a human brain has been a popular philosophical thought experiment and science fiction plot for decades. How close are we to actually doing it, how could it be done, and what would the consequences be? This talk will trace trends in computing, neuroscience, lab automation and microscopy to show how whole brain emulation could become feasible in the mid-term future.
The talk is organised by the UKTA. Last weekend, at the Convergence08 “unconference” in Mountain View, California, Anders gave an earlier version of the same talk. George Dvorsky blogged the result:
Convergence08: Anders Sandberg on Whole Brain Emulation
The term ‘whole brain emulation’ sounds more scientific than science-fictional, which may bode well for its credibility as a genuine academic discipline and area of inquiry.
Sandberg presented his whole brain emulation roadmap, which had a flowchart-like quality to it; he quipped that it must be scientific because it was filled with arrows.
Simulating memory could be very complex, possibly involving chemical transference in cells or drilling right down to the molecular level. We may even have to go down to the quantum level, but no neuroscientist that Anders knows takes that possibility seriously…
As Anders himself told me afterwards,
…interest was high but time limited – I got a lot of useful feedback and ideas for making the presentation better.
I’m expecting a fascinating discussion.
While it's true that computer scientists, especially early computer scientists, have often failed to account for the fact that software does not advance as fast as hardware, there has in fact been significant advancement in the field. H.A. Simon's predictions of computers as chess champions, and of computers discovering and proving mathematical theorems, have both happened at this point – they just took longer than expected.
Perhaps the most important development that has occurred in AI from a software point of view is the understanding of P vs. NP, and the realization that many of the problems related to AI are NP-complete (or at least NP-hard). We have thus mathematically proven that certain aspects will always be strictly bound by the computer systems they run on. The human mind, being not only an incredibly fast computational machine but also an incredibly parallel one, has in many ways an inherent advantage in solving these problems, which is why it seems so superior to most AIs currently in existence.
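To make the scaling concrete, here's a minimal sketch (Python, purely illustrative) of a brute-force solver for subset sum, a classic NP-complete problem. The search space doubles with every extra element, so faster silicon only buys you a few more items:

```python
# A purely illustrative brute-force solver for subset sum, a classic
# NP-complete problem: it tries all 2^n subsets, so its running time
# grows exponentially however fast the underlying hardware gets.
from itertools import combinations

def subset_sum_exists(numbers, target):
    """Return True if some subset of `numbers` sums to `target`."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return True
    return False

if __name__ == "__main__":
    print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
    print(subset_sum_exists([3, 34, 4, 12, 5, 2], 30))  # False (no such subset)
```

Clever heuristics and approximations help in practice, but the worst-case wall stays exponential unless P = NP.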
Comment by taoist — 21 November 2008 @ 9:05 pm
Hi taoist,
Thanks for your comments. I actually agree with your suggestions. I’ve written a longer reply on your own blog here.
// dw2-0
Comment by David Wood — 21 November 2008 @ 11:04 pm
Actually, I'm currently taking a course in concurrency. Perhaps the most interesting thing we've discussed is that, thanks to current chip pipelining, modern processors need on the order of 100-150 independent instructions in flight if they want to stay busy. Pipelining, of course, means that while one instruction is on step 1 of the cycle, another is on step 2, and so on. Processors are so complicated nowadays that any one instruction takes ~150 steps to get through the processor, meaning there ought to be ~149 other things running that don't have to wait for its answer. So whether or not we (the programmers) like it, we're going to have to learn to deal with concurrency.
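To illustrate what "dealing with concurrency" means in practice (a sketch only, not anything from the course), here's how a CPU-bound job can be split into independent chunks using Python's standard concurrent.futures module:

```python
# A sketch only: the programmer has to expose independent work explicitly.
# Here a CPU-bound job is split across worker processes with the
# standard-library concurrent.futures module.
from concurrent.futures import ProcessPoolExecutor

def count_primes(lo, hi):
    """Count primes in [lo, hi) -- deliberately naive, CPU-bound work."""
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Four independent chunks of [0, 100000); each can run on its own core.
    los = range(0, 100000, 25000)
    his = range(25000, 125000, 25000)
    with ProcessPoolExecutor() as pool:
        total = sum(pool.map(count_primes, los, his))
    print(total)  # same answer as a serial loop, just spread over cores
```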
Comment by taoist — 21 November 2008 @ 11:15 pm
David, your blog is just full of my favourite topics. 🙂
While I was doing some postgrad research in Cybernetic Intelligence I came up against exactly these issues and I think that human brain emulation in silicon is an extremely interesting avenue for research.
My one-time supervisor wanted to simulate the human brain using very accurate differential equations, and was starting work in that direction (computational neuroscience), but sadly the most powerful computing available to him at the time could only manage 2 neurons. With the average interconnect in the brain at 10,000 to 1, there was a long way to go!
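For a flavour of what even a toy version of that involves, here's a minimal sketch using a crude leaky integrate-and-fire model (nothing like the detailed equations my supervisor had in mind), with two mutually connected neurons integrated by plain Euler steps:

```python
# A minimal sketch of two coupled leaky integrate-and-fire neurons,
# integrated with plain Euler steps. The model and numbers are
# illustrative only -- far cruder than real computational neuroscience.
def simulate(steps=2000, dt=0.1):
    tau, v_rest, v_thresh, v_reset = 10.0, 0.0, 1.0, 0.0
    w = 0.2                      # jump given to the partner's potential per spike
    i_ext = [1.2, 0.9]           # constant drive; neuron 1 is sub-threshold on its own
    v = [0.0, 0.0]               # membrane potentials
    spikes = [[], []]            # spike times per neuron
    for step in range(steps):
        fired = [vi >= v_thresh for vi in v]
        for i in range(2):
            if fired[i]:
                spikes[i].append(step * dt)
                v[i] = v_reset
                v[1 - i] += w    # crude instantaneous synapse onto the partner
        for i in range(2):
            dv = (-(v[i] - v_rest) + i_ext[i]) / tau   # leaky integration
            v[i] += dv * dt
    return spikes

if __name__ == "__main__":
    s0, s1 = simulate()
    print(len(s0), "spikes from neuron 0,", len(s1), "from neuron 1")
```

Even this two-cell toy needs thousands of tiny time steps; scaling that to billions of neurons, each with ~10,000 connections, shows how far off a faithful simulation was.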
Writing parallel algorithms is extremely difficult. I was working on one very clever massively parallel transformation-invariant pattern recognition algorithm. In theory it would have been brilliant for computer vision, but in practice there was a serial bottleneck in the processing of the results from the individual computations. There wasn't any hardware architecture available, or capable of being built at reasonable cost, that could do the job.
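That serial bottleneck is exactly what Amdahl's law quantifies: if a fraction s of the work is inherently serial, no number of processors can push the speedup beyond 1/s. A quick illustrative calculation (made-up numbers, not measurements from my algorithm):

```python
# Amdahl's law: speedup(N) = 1 / (s + (1 - s) / N), where s is the
# fraction of the work that is inherently serial. Illustrative numbers only.
def amdahl_speedup(serial_fraction, n_processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_processors)

if __name__ == "__main__":
    for n in (2, 8, 64, 1024):
        print(n, "processors ->", round(amdahl_speedup(0.05, n), 1), "x speedup")
    # With just 5% of the work serialised, even 1024 processors give < 20x.
```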
As such, I'm not so optimistic about the extra "intelligence" we'll get from current processor architectures with multiple cores, or from using other hardware like DSPs and GPUs in parallel via OpenCL. It all helps, though!
Mark
Comment by m_p_wilcox — 22 November 2008 @ 12:10 pm