I recently looked at three questions about the feasibility of significant progress with AI. I’d like to continue that investigation, by looking at four more questions.
Q4: Given that all software is buggy, won’t this prevent the creation of any viable human-level AI?
Some people with a long involvement with software aren’t convinced that we can write software of sufficient quality at the level of complexity required for human-level AI (or beyond). To them, complex software seems simply too unreliable.
It’s true that the software we use on a day-to-day basis – whether on a desktop computer, on a mobile phone, or via a web server – tends to manifest nasty bugs from time to time. The more complex the system, the greater the likelihood of debilitating defects in the interactions between its subcomponents.
However, I don’t see this observation as ruling out the development of software that can manifest advanced AI. That’s for two reasons:
First, different software projects vary in their required quality level. Users of desktop software have become at least partially tolerant of defects in that software. As users, we complain, but it’s not the end of the world, and we generally find workarounds. As a result, manufacturers release software even though there are still bugs in it. However, for mission-critical software, the quality bar is pushed a lot higher. Yes, it’s harder to create high-reliability software; but it can be done.
There are research projects underway to bring significantly higher quality software to desktop systems too. For example, here’s a description of a Microsoft Research project, which is (coincidentally) named “Singularity”:
Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.
Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.
Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.
There would be a certain irony if techniques from the Microsoft Singularity project were used to create a high-reliability AI system that in turn was involved in the Technological Singularity.
Second, even if software has defects, that doesn’t (by itself) prevent it from being intelligent. After all, the human brain itself has many defects – see my blogpost “The human mind as a flawed creation of nature”. Sometimes we think much better after a good night’s rest! The point is that AI algorithms can include aspects of fault tolerance.
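As a loose illustration of that last point (not any specific AI technique), here’s a sketch in Python of one classic fault-tolerance pattern – majority voting over redundant components. The `flaky_classifier` function and its error rate are invented for the example:

```python
import random
from collections import Counter

def flaky_classifier(x, error_rate=0.1):
    """A hypothetical component that usually answers correctly,
    but occasionally manifests a defect and answers wrongly."""
    correct = x >= 0  # the "true" answer for this toy task
    if random.random() < error_rate:
        return not correct  # simulated bug
    return correct

def redundant_classify(x, votes=5):
    """Run several independent copies and take a majority vote,
    so that an occasional wrong answer is outvoted."""
    tally = Counter(flaky_classifier(x) for _ in range(votes))
    return tally.most_common(1)[0][0]
```

With five votes and a 10% per-component error rate, the voted answer is wrong far less often than any single component – the system as a whole is more reliable than its parts.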
Q5: Given that we’re still far from understanding the human mind, aren’t we bound to be a long way from creating a viable human-level AI?
It’s often said that the human mind has deeply mysterious elements, such as consciousness, self-awareness, and free will. Since there’s little consensus about these aspects of the human mind, it’s said to be unlikely that a computer emulation of these features will arrive any time soon.
However, I disagree that we have no understanding of these aspects of the human mind. There’s a broad consensus among many philosophers and practitioners alike that the main operation of the human mind is well explained by one or other variant of “physicalism”. As the Wikipedia article on the Philosophy of Mind states:
Most modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences…
Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues.
It’s true that there’s still scope for head-scratching debates on what philosopher David Chalmers calls “the hard problem of consciousness”, which has various formulations:
- “Why should physical processing give rise to a rich inner life at all?”
- “How is it that some organisms are subjects of experience?”
- “Why does awareness of sensory information exist at all?”
- “Why is there a subjective component to experience?”…
However, none of these questions, by themselves, should prevent the construction of a software system that will be able to process questions posed in natural human language, and to give high-quality, humanly understandable answers. When that happens, the system will very probably seek to convince us that it has a similar inner conscious life to the one we have. As J. Storrs Hall says, we’ll probably believe it.
Q6: Is progress with narrow fields of AI really relevant to the problem of general AI?
Here’s one sceptical line of response: I don’t consider the advances in machine translation over the past decade an advance in AI; rather, I consider them the result of brute-force analysis of huge quantities of text. I wouldn’t consider a car that could safely drive itself along a motorway an advance in AI; rather, it would be the integration of a number of existing technologies. Nor do I really consider the improvement of an algorithm that does a specific thing (search, navigation, playing chess) an advance in AI, since generally such an improvement cannot be used outside its narrow field of application.
My own view is that these advances do help, in the spirit of “divide and conquer”. I see the human mind as being made up of modules, rather than being some intractable whole. Improving ability in, for example, translating text, or in speech recognition, will help set the scene for eventual general AI.
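To make the “divide and conquer” idea concrete, here’s a minimal sketch of how narrow modules might be chained into a broader capability. Every name here (`recognise_speech`, `translate`, the toy dictionary) is invented for illustration; real modules would be vastly more sophisticated:

```python
# Hypothetical sketch: narrow modules composed into a larger pipeline.
# Each module does one job well; none is "generally intelligent" alone.

def recognise_speech(audio):
    # Stand-in for a real speech-recognition module.
    return audio["transcript"]

def translate(text):
    # Stand-in for a real machine-translation module.
    dictionary = {"bonjour": "hello", "monde": "world"}
    return " ".join(dictionary.get(word, word) for word in text.split())

def pipeline(audio):
    """Divide and conquer: chain narrow modules to perform a broader task."""
    return translate(recognise_speech(audio))

print(pipeline({"transcript": "bonjour monde"}))  # -> "hello world"
```

The interesting question, of course, is whether chaining ever-better modules eventually adds up to general intelligence, or whether some further integrating breakthrough is needed – which is exactly the point at issue.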
It’s true that some aspects of the human mind will prove harder to emulate than others – such as the ability to notice and form new concepts. It may be the case that a theoretical breakthrough with this aspect will enable much faster overall progress, which will be able to leverage the work done on other modules.
Q7: With so many unknowns, isn’t all this speculation about AI futile?
It’s true that no one can predict, with any confidence, the date at which specific breakthrough advances in general AI are likely to happen. The best anyone can achieve is to assign different probabilities to different candidate dates.
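For illustration, here’s what such a probability distribution might look like in code. To be clear, the numbers below are invented placeholders, not forecasts:

```python
# A toy illustration: instead of a single predicted date for a breakthrough,
# assign probabilities to date ranges. These figures are made up, not forecasts.
breakthrough_odds = {
    "by 2030": 0.15,
    "2030-2050": 0.35,
    "2050-2100": 0.30,
    "after 2100 / never": 0.20,
}

# The probabilities over all outcomes should sum to 1.
assert abs(sum(breakthrough_odds.values()) - 1.0) < 1e-9

def cumulative(odds):
    """Running total: probability of a breakthrough by the end of each period."""
    total, out = 0.0, {}
    for period, p in odds.items():
        total += p
        out[period] = round(total, 2)
    return out

print(cumulative(breakthrough_odds))
```

Framing the question this way shifts the debate from “when will it happen?” to “how should evidence move these probabilities up or down?” – a more tractable discussion.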
However, I don’t accept any argument that “there have been no fundamental breakthroughs in the last sixty years, so there can’t possibly be any fundamental breakthroughs in (say) the next ten years”. That would be an invalid extrapolation.
Consider the astronomer Simon Newcomb, who famously declared: “Aerial flight is one of that class of problems with which man can never cope.”
Newcomb was no fool: he had good reasons for his scepticism. As explained in the Wikipedia article about Newcomb:
In the October 22, 1903 issue of The Independent, Newcomb wrote that even if a man flew he could not stop. “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.” In addition, he had no concept of an airfoil. His “aeroplane” was an inclined “thin flat board.” He therefore concluded that it could never carry the weight of a man. Newcomb was specifically critical of the work of Samuel Pierpont Langley, who claimed that he could build a flying machine powered by a steam engine and whose initial efforts at flight were public failures…
Newcomb, apparently, was unaware of the Wright Brothers’ efforts, whose [early] work was done in relative obscurity.
My point is that there does not seem to be any valid fundamental reason why the functioning of a human mind cannot be emulated via software; we may be just two or three good breakthroughs away from solving the remaining key challenges. With the close attention of many commercial interests, and with the accumulation of fragments of understanding, the chances improve of some of these breakthroughs happening sooner rather than later.