dw2

11 January 2010

AI, buggy software, and the Singularity

Filed under: AGI, Singularity — David Wood @ 12:00 am

I recently looked at three questions about the feasibility of significant progress with AI.  I’d like to continue that investigation, by looking at four more questions.

Q4: Given that all software is buggy, won’t this prevent the creation of any viable human-level AI?

Some people with long experience of software development aren’t convinced that we can write software that is both reliable enough and complex enough for human-level AI (or beyond).  To them, sufficiently complex software is inherently too unreliable.

It’s true that the software we use on a day-by-day basis – whether on a desktop computer, on a mobile phone, or via a web server – tends to manifest nasty bugs from time to time.  The more complex the system, the greater the likelihood of debilitating defects in the interactions between different subcomponents.

However, I don’t see this observation as ruling out the development of software that can manifest advanced AI.  That’s for two reasons:

First, different software projects vary in their required quality level.  Users of desktop software have become at least partially tolerant of defects in that software.  As users, we complain, but it’s not the end of the world, and we generally find workarounds.  As a result, manufacturers release software even though there are still bugs in it.  However, for mission-critical software, the quality bar is pushed a lot higher.  Yes, it’s harder to create software with high reliability; but it can be done.

There are research projects underway to bring significantly higher quality software to desktop systems too.  For example, here’s a description of a Microsoft Research project, which is (coincidentally) named “Singularity”:

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.

There would be a certain irony if techniques from the Microsoft Singularity project were used to create a high-reliability AI system that in turn was involved in the Technological Singularity.
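
To make the SIP idea more concrete, here’s a minimal sketch of the discipline that SIPs enforce: no shared memory, and communication only by passing messages between isolated processes.  It uses Python’s multiprocessing module purely as an analogy; it is not the actual Singularity or Sing# APIs, and it omits the static verification that makes real SIPs so cheap and safe.

```python
# Illustrative analogy only: not the Singularity OS or the Sing# language.
# Each worker owns its own object space, shares no memory with the parent,
# and exchanges data only as messages over a channel.
from multiprocessing import Process, Pipe

def isolated_worker(conn):
    """Runs in a separate OS process: no shared state with the caller."""
    while True:
        msg = conn.recv()           # messages are the only way data gets in
        if msg is None:             # sentinel value: shut down cleanly
            break
        conn.send(msg.upper())      # toy "service": return the text upper-cased

if __name__ == "__main__":
    parent_end, child_end = Pipe()
    worker = Process(target=isolated_worker, args=(child_end,))
    worker.start()

    parent_end.send("hello, sip")
    print(parent_end.recv())        # "HELLO, SIP", returned as a message
    parent_end.send(None)
    worker.join()
```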

Second, even if software has defects, that doesn’t (by itself) prevent it from being intelligent.  After all, the human brain itself has many defects – see my blogpost “The human mind as a flawed creation of nature”.  Sometimes we think much better after a good night’s rest!  The point is that AI algorithms can include aspects of fault tolerance.
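
As a concrete (and deliberately simplified) illustration of that last point, one classic fault-tolerance pattern is to run an unreliable component several times and take a majority vote, so that an occasional wrong answer doesn’t propagate.  The unreliable_classifier below is entirely hypothetical; the point is the voting wrapper, not the component.

```python
import random
from collections import Counter

def unreliable_classifier(x):
    """Hypothetical AI component that occasionally gives a wrong answer."""
    correct = "cat" if x < 0.5 else "dog"
    return correct if random.random() > 0.1 else "aardvark"   # ~10% fault rate

def majority_vote(component, x, runs=5):
    """Classic fault-tolerance wrapper: repeat the computation and vote."""
    answers = [component(x) for _ in range(runs)]
    return Counter(answers).most_common(1)[0][0]

print(majority_vote(unreliable_classifier, 0.3))   # almost always "cat"
```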

Q5: Given that we’re still far from understanding the human mind, aren’t we bound to be a long way from creating a viable human-level AI?

It’s often said that the human mind has deeply mysterious elements, such as consciousness, self-awareness, and free will.  Since there’s little consensus about these aspects of the human mind, it’s said to be unlikely that a computer emulation of these features will arrive any time soon.

However, I disagree that we have no understanding of these aspects of the human mind.  There’s a broad consensus among philosophers and practitioners alike that the main operation of the human mind is well explained by one or another variant of “physicalism”.  As the Wikipedia article on the Philosophy of Mind states:

Most modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences…

Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues.

The book I mentioned previously, “Beyond AI” by J Storrs Hall, devotes several chapters to filling in aspects of this explanation.

It’s true that there’s still scope for head-scratching debates on what philosopher David Chalmers calls “the hard problem of consciousness”, which has various formulations:

  • “Why should physical processing give rise to a rich inner life at all?”
  • “How is it that some organisms are subjects of experience?”
  • “Why does awareness of sensory information exist at all?”
  • “Why is there a subjective component to experience?”…

However, none of these questions, by themselves, should prevent the construction of a software system that can process questions posed in natural human language and give high-quality, humanly understandable answers.  When that happens, the system will very probably seek to convince us that it has an inner conscious life similar to our own.  As J Storrs Hall says, we’ll probably believe it.

Q6: Is progress with narrow fields of AI really relevant to the problem of general AI?

Martin Budden comments:

I don’t consider the advances in machine translation over the past decade an advance in AI, I more consider them the result of brute force analysis on huge quantities of text. I wouldn’t consider a car that could safely drive itself along a motorway an advance in AI, rather it would be the integration of a number of existing technologies. I don’t really consider the improvement of an algorithm that does a specific thing (search, navigate, play chess) an advance in AI, since generally such an improvement cannot be used outside its narrow field of application.

My own view is that these advances do help, in the spirit of “divide and conquer”.  I see the human mind as being made up of modules, rather than being some intractable whole.  Improving ability in, for example, translating text, or in speech recognition, will help set the scene for eventual general AI.
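
As a toy illustration of that modular view, here is a sketch (with entirely hypothetical stub functions) of how narrow components might be composed into a broader pipeline, so that an improvement to any one stage benefits the whole:

```python
# Hypothetical narrow components; each stub could later be swapped for a
# better implementation without touching the rest of the pipeline.
def recognise_speech(audio):
    return "what is the capital of france"        # stub: audio -> text

def translate(text, target="en"):
    return text                                   # stub: already English here

def answer(question):
    lookup = {"what is the capital of france": "Paris"}
    return lookup.get(question, "I don't know")   # stub: question -> answer

def assistant(audio):
    """Compose narrow modules into something broader than any one of them."""
    return answer(translate(recognise_speech(audio)))

print(assistant(b"...raw audio bytes..."))        # "Paris"
```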

It’s true that some aspects of the human mind will prove harder to emulate than others – such as the ability to notice and form new concepts.  It may be the case that a theoretical breakthrough with this aspect will enable much faster overall progress, which will be able to leverage the work done on other modules.

Q7: With so many unknowns, isn’t all this speculation about AI futile?

It’s true that no one can predict, with any confidence, the date at which specific breakthrough advances in general AI are likely to happen.  The best that someone can achieve is a distribution of different dates with different probabilities.
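
For example (with made-up numbers, purely to illustrate the form such a forecast might take), the estimate could assign a probability to each decade, from which a cumulative probability of arrival by a given date can be read off:

```python
# Made-up probabilities, for illustration only: the chance that the key
# breakthrough arrives within each decade.
forecast = {2020: 0.05, 2030: 0.15, 2040: 0.25, 2050: 0.25, 2100: 0.20}
# (the remaining 0.10 means "later than 2100, or never")

def probability_by(year):
    """Cumulative probability that the breakthrough has arrived by `year`."""
    return sum(p for y, p in forecast.items() if y <= year)

print(probability_by(2050))   # 0.70
```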

However, I don’t accept the argument that “there have been no fundamental breakthroughs in the last sixty years, so there can’t possibly be any fundamental breakthroughs in (say) the next ten years”.  That would be an invalid extrapolation.

It would be similar to the view expressed in 1903 by the distinguished astronomer and mathematician Simon Newcomb:

“Aerial flight is one of that class of problems with which man can never cope.”

Newcomb was no fool: he had good reasons for his scepticism.  As explained in the Wikipedia article about Newcomb:

In the October 22, 1903 issue of The Independent, Newcomb wrote that even if a man flew he could not stop. “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.” In addition, he had no concept of an airfoil. His “aeroplane” was an inclined “thin flat board.” He therefore concluded that it could never carry the weight of a man. Newcomb was specifically critical of the work of Samuel Pierpont Langley, who claimed that he could build a flying machine powered by a steam engine and whose initial efforts at flight were public failures…

Newcomb, apparently, was unaware of the Wright Brothers efforts whose [early] work was done in relative obscurity.

My point is that there does not seem to be any valid fundamental reason why the functioning of a human mind cannot be emulated in software; we may be just two or three good breakthroughs away from solving the remaining key challenges.  With the close attention of many commercial interests, and with the accumulation of fragments of understanding, the chances of some of these breakthroughs happening sooner rather than later keep improving.


4 Comments

  1. It’s interesting that the focus of the Microsoft Singularity project seems to be on containing errors rather than eliminating them, but I may have misinterpreted the research.

    Whilst I would agree that the existence of bugs doesn’t make a piece of software totally unreliable, the potential for an application or system to crash, or to emit incorrect output, is still there.  That is why safety-critical systems have multiple redundancy, in the hope that the same bug isn’t replicated across all systems, as in aircraft.  I suppose containing a minor error and rebooting that aspect of the logic is a reasonable approach, quite like some automotive systems that continually reboot to ensure a clean slate, but losing a connection in the AI brain could lead to erroneous output, even if the whole system didn’t crash.  Would we consider an AI with such an error trustworthy, or would we think that it had a logical mental disease?

    I guess one question that always rattles around in my mind is “why are we trying to recreate the human mind anyway?” We have billions of those already and they are very suited to the environment that they exist in. Why don’t we see more research on the type of AI that we’ll need for situations where a human is the wrong solution? Perhaps there is tons of this type of research and I miss it because it’s not my field, or perhaps it scares people to think that an alternative intelligence without human values might be dangerous.

    I personally also really don’t think that logic by itself can lead to a system that can evolve human-like imagination, feelings or personality, nor that the human mind can be reduced to being a machine. It has elementary parts, but the constant rebuilding and evolving of information doesn’t really follow any logical rules that can be programmed. The structure of the brain depends on what happens to us during the day and how we interpret it according to the situation. That defies logic most of the time and is constantly evolving and changing. For example, I don’t like some things I liked when I was younger, but some I do. Personal taste is not logical. I don’t fancy a cheese sandwich today, but tomorrow I might if I see one.

    You can build something that appears to be human, but what is the point of that? Why chase a goal that doesn’t actually provide us with more than we have already? Don’t we want an alternative intelligence that complements our own, rather than one that replaces it?

    I’m coming across as very negative on this, which wasn’t my intention. What I don’t want is AI in products so that they have their own personality; what I do want is a better understanding of my own wishes and desires in how a product should interact with me. I don’t think AI is needed to achieve that, but rather better design.

    Paul

    Comment by Paul Beardow — 11 January 2010 @ 2:22 pm

  2. David,

    I didn’t say that the advances in what you call “narrow AI” were not relevant to the problem of general AI, or that these advances would not help with “general AI”; I merely said that I did not consider these to be advances in AI (I consider them more to be advances in computer science or engineering). To use your flying analogy: the advances in the technology of the internal combustion engine towards the end of the nineteenth century were not advances in aeronautics, but they were extremely useful for the development of the airplane. I’m making more than a semantic point here: I think it is unhelpful for the progress of AI to claim advances in related fields to be advances in AI.

    Note also that I’m not saying that I don’t believe that there will be advances in AI. On the contrary I believe, in the course of time, there will be real and significant advances in “general AI”. I just don’t believe that these advances will be made in the next decade.

    Comment by Martin Budden — 11 January 2010 @ 7:53 pm

  3. I thought the idea of AI is to learn; surely if we do it right, the AI system itself can ‘learn’ from its own mistakes and work around them. Wouldn’t that be a fun idea: self-correcting code 🙂

    Comment by Dave — 11 April 2010 @ 1:56 pm

    • Dave,

      Agreed, I do expect AIs to become able to review the operation of their own source code, to notice bugs in it, and to fix them.

      It will be slightly similar to the many “self-reprogramming” methods often employed by humans.

      Of course, once an AI improves itself by fixing bug A, it will be more likely to be able to detect and fix bug B too (where bug B is harder than bug A). And then, in short order, bugs C, D, E… where each bug on the list requires even more intelligence to fix. The result could be a huge jump in the intelligence of the AI in just a short period of time (one afternoon?)
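
      Here’s a toy sketch of that cascade (the numbers are entirely invented): each fixed bug raises the system’s capability, and the raised capability is what unlocks the next, harder bug.

      ```python
      # Toy model with invented numbers: bug difficulties the AI must clear, in order.
      bug_difficulty = {"A": 1.0, "B": 2.0, "C": 4.0, "D": 8.0, "E": 16.0}

      capability = 1.0                     # just enough to fix bug A
      for bug, difficulty in bug_difficulty.items():
          if capability < difficulty:
              print(f"stuck at bug {bug}")
              break
          capability *= 2                  # each fix makes the system more capable
          print(f"fixed bug {bug}; capability is now {capability}")
      ```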

      Comment by David Wood — 11 April 2010 @ 2:09 pm

