dw2

1 March 2021

The imminence of artificial consciousness

Filed under: AGI, books, brain simulation, London Futurists — David Wood @ 10:26 am

I’ve changed my mind about consciousness.

I used to think that, of the two great problems about artificial minds – namely, achieving artificial general intelligence, and achieving artificial consciousness – progress toward the former would be faster than progress toward the latter.

After all, progress in understanding consciousness had seemed particularly slow, whereas enormous numbers of researchers in both academia and industry have been attaining breakthrough after breakthrough with new algorithms in artificial reasoning.

Over the decades, I’d read a number of books by Daniel Dennett and other philosophers who claimed to have shown that consciousness was basically already understood. There’s nothing spectacularly magical or esoteric about consciousness, Dennett maintained. What’s more, we must beware being misled by our own introspective understanding of our consciousness. That inner introspection is subject to distortions – perceptual illusions, akin to the visual illusions that often mislead us about what we think our eyes are seeing.

But I’d found myself at best semi-convinced by such accounts. Despite their clever analyses, I felt there was surely more to the story.

The most famous expression of the idea that consciousness still defies a proper understanding is the formulation by David Chalmers. This is from his watershed 1995 essay “Facing Up to the Problem of Consciousness”:

The really hard problem of consciousness is the problem of experience. When we think and perceive, there is a whir of information-processing, but there is also a subjective aspect… There is something it is like to be a conscious organism. This subjective aspect is experience.

When we see, for example, we experience visual sensations: the felt quality of redness, the experience of dark and light, the quality of depth in a visual field. Other experiences go along with perception in different modalities: the sound of a clarinet, the smell of mothballs. Then there are bodily sensations, from pains to orgasms; mental images that are conjured up internally; the felt quality of emotion, and the experience of a stream of conscious thought. What unites all of these states is that there is something it is like to be in them. All of them are states of experience.

It is undeniable that some organisms are subjects of experience. But the question of how it is that these systems are subjects of experience is perplexing. Why is it that when our cognitive systems engage in visual and auditory information-processing, we have visual or auditory experience: the quality of deep blue, the sensation of middle C? How can we explain why there is something it is like to entertain a mental image, or to experience an emotion?

It is widely agreed that experience arises from a physical basis, but we have no good explanation of why and how it so arises. Why should physical processing give rise to a rich inner life at all? It seems objectively unreasonable that it should, and yet it does.

However, as Wikipedia notes,

The existence of a “hard problem” is controversial. It has been accepted by philosophers of mind such as Joseph Levine, Colin McGinn, and Ned Block and cognitive neuroscientists such as Francisco Varela, Giulio Tononi, and Christof Koch. However, its existence is disputed by philosophers of mind such as Daniel Dennett, Massimo Pigliucci, Thomas Metzinger, Patricia Churchland, and Keith Frankish, and cognitive neuroscientists such as Stanislas Dehaene, Bernard Baars, Anil Seth and Antonio Damasio.

With so many smart people apparently unable to agree, what hope is there for a layperson to have any confidence in answering the question: is consciousness already explained in principle, or do we need some fundamentally new insights?

It’s tempting to say, therefore, that the question should be left to one side. Instead of squandering energy spinning circles of ideas with little prospect of real progress, it would be better to concentrate on numerous practical questions: vaccines for pandemics, climate change, taking the sting out of psychological malware, protecting democracy against latent totalitarianism, and so on.

That practical orientation is the one that I have tried to follow most of the time. But there are four reasons, nevertheless, to keep returning to the question of understanding consciousness. A better understanding of consciousness might:

  1. Help provide therapists and counsellors with new methods to address the growing crisis of mental ill-health
  2. Change our attitudes towards the suffering we inflict, as a society, upon farm animals, fish, and other creatures
  3. Provide confidence on whether copying of memories and other patterns of brain activity, into some kind of silicon storage, could result at some future date in the resurrection of our consciousness – or whether any such reanimation would, instead, be “only a copy” of us
  4. Guide the ways in which systems of artificial intelligence are being created.

On that last point, consider the question whether AI systems will somehow automatically become conscious, as they gain in computational ability. Most AI researchers have been sceptical on that score. Google Maps is not conscious, despite all the profoundly clever things that it can do. Neither is your smartphone. As for the Internet as a whole, opinions are a bit more mixed, but again, the general consensus is that all the electronic processing happening on the Internet is devoid of the kind of subjective inner experience described by David Chalmers.

Yes, lots of software has elements of being self-aware. Such software contains models of itself. But it’s generally thought (and I agree, for what it’s worth) that such internal modelling is far short of subjective inner experience.

One prospect this raises is the dark possibility that humans might be superseded by AIs that are considerably more intelligent than us, but that such AIs would have “no-one at home”, that is, no inner consciousness. In that case, a universe with AIs instead of humans might have much more information processing, but be devoid of conscious feelings. Mega oops.

The discussion at this point is sometimes led astray by the popular notion that any threat from superintelligent AIs to human existence is predicated on these AIs “waking up” or becoming conscious. In that popular narrative, any such waking up might give an AI an additional incentive to preserve itself. Such an AI might adopt destructive human “alpha male” combative attitudes. But as I say, that’s a faulty line of reasoning. AIs might well be motivated to preserve themselves without ever gaining any consciousness. (Look up the concept of “basic AI drives” by Steve Omohundro.) Indeed, a cruise missile that locks onto a target poses a threat to that target, not because the missile is somehow conscious, but because it has enough intelligence to navigate to its target and explode on arrival.

Indeed, AIs can pose threats to people’s employment, without these AIs gaining consciousness. They can simulate emotions without having real internal emotions. They can create artistic masterpieces, using techniques such as GANs (Generative Adversarial Networks), without having any real psychological appreciation of the beauty of these works of art.

For these reasons, I’ve generally urged people to set aside the question of machine consciousness, and to focus instead on the question of machine intelligence. (For example, I presented that argument in Chapter 9 of my book Sustainable Superabundance.) The latter is tangible and poses increasing threats (and opportunities), whereas the former is a discussion that never seems to get off the ground.

But, as I mentioned at the start, I’ve changed my mind. I now think it’s possible we could have machines with synthetic consciousness well before we have machines with general intelligence.

What’s changed my mind is the book by Professor Mark Solms, The Hidden Spring: A Journey to the Source of Consciousness.

Solms is director of neuropsychology in the Neuroscience Institute of the University of Cape Town, honorary lecturer in neurosurgery at the Royal London Hospital School of Medicine, and an honorary fellow of the American College of Psychiatrists. He has spent his entire career investigating the mysteries of consciousness. He achieved renown within his profession for identifying the brain mechanisms of dreaming and for bringing psychoanalytic insights into modern neuroscience. And now his book The Hidden Spring is bringing him renown far beyond his profession. Here’s a selection of the praise it has received:

  • A remarkably bold fusion of ideas from psychoanalysis, psychology, and the frontiers of theoretical neuroscience, that takes aim at the biggest question there is. Solms will challenge your most basic beliefs.
    Matthew Cobb, author of The Idea of the Brain: The Past and Future of Neuroscience
  • At last the emperor has found some clothes! For decades, consciousness has been perceived as an epiphenomenon, little more than an illusion that can’t really make things happen. Solms takes a thrilling new approach to the problem, grounded in modern neurobiology but finding meaning in older ideas going back to Freud. This is an exciting book.
    Nick Lane, author of The Vital Question
  • To say this work is encyclopaedic is to diminish its poetic, psychological and theoretical achievement. This is required reading.
    Susie Orbach, author of In Therapy
  • Intriguing…There is plenty to provoke and fascinate along the way.
    Anil Seth, Times Higher Education
  • Solms’s efforts… have been truly pioneering. This unification is clearly the direction for the future.
    Eric Kandel, Nobel laureate for Physiology and Medicine
  • This treatment of consciousness and artificial sentience should be taken very seriously.
    Karl Friston, scientific director, Wellcome Trust Centre for Neuroimaging
  • Solms’s vital work has never ignored the lived, felt experience of human beings. His ideas look a lot like the future to me.
    Siri Hustvedt, author of The Blazing World
  • Nobody bewitched by these mysteries [of consciousness] can afford to ignore the solution proposed by Mark Solms… Fascinating, wide-ranging and heartfelt.
    Oliver Burkeman, Guardian
  • This is truly a remarkable book. It changes everything.
    Brian Eno

At times, I had to concentrate hard while listening to this book, rewinding the playback multiple times. That’s because the ideas kept sparking new lines of thought in my mind, which ran off in different directions as the narration continued. And although Solms explains his ideas in an engaging manner, I wanted to think through the deeper connections with the various fields that form part of the discussion – including psychoanalysis (Freud features heavily), thermodynamics (Helmholtz, Gibbs, and Friston), evolution, animal instincts, dreams, Bayesian statistics, perceptual illusions, and the philosophy of science.

Alongside the theoretical sections, the book contains plenty of case studies – from Solms’ own patients, and from other clinicians over the decades (actually centuries) – that illuminate the points being made. These studies involve people – or animals – with damage to parts of their brains. The unusual ways in which these subjects behave – and the unusual ways in which they express themselves – provide insight on how consciousness operates. Particularly remarkable are the children born with hydranencephaly – that is, without a cerebral cortex – but who nevertheless appear to experience feelings.

Having spent two weeks making my way through the first three quarters of the book, I took the time yesterday (Sunday) to listen to the final quarter, where several climaxes came one after another – addressing at length the “Hard Problem” ideas of David Chalmers, and the possibility of artificial consciousness.

It’s challenging to summarise such a rich set of ideas in just a few paragraphs, but here are some components:

  • To understand consciousness, the subcortical brain stem (an ancient part of our anatomy) is at least as important as the cognitive architecture of the cortex
  • To understand consciousness, we need to pay attention to feelings as much as to memories and thought processing
  • Likewise, the chemistry of long-range neuromodulators is at least as important as the chemistry of short-range neurotransmitters
  • Consciousness arises from particular kinds of homeostatic systems which are separated from their environment by a partially permeable boundary: a structure known as a “Markov blanket”
  • These systems need to take actions to preserve their own existence, including creating an internal model of their external environment, monitoring differences between incoming sensory signals and what their model predicted these signals would be, and making adjustments so as to prevent these differences from escalating
  • Whereas a great deal of internal processing and decision-making can happen automatically, without conscious thought, some challenges transcend previous programming, and demand greater attention

In short, consciousness arises from particular forms of information processing. (Solms provides good reasons to reject the idea that there is a basic consciousness latent in all information, or, indeed, in all matter.) Whilst more work remains to be done to pin down the exact circumstances in which consciousness arises, this project is looking much more promising now than it did just a few years ago.

This is no idle metaphysics. The ideas can in principle be tested by creating artificial systems that involve particular kinds of Markov blankets, uncertain environments that pose existential threats to the system, diverse categorical needs (akin to the multiple different needs of biologically conscious organisms), and layered feedback loops. Solms sets out a three-stage process whereby such systems could be built and evolved, in a relatively short number of years.
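
To make the mechanism sketched above more concrete, here is a deliberately minimal toy – my own illustration in Python, with invented names and numbers, not Solms’ proposed architecture – of a self-preserving feedback loop of the kind described: an agent behind a “Markov blanket” of sensors and actions, keeping a vital variable near a set point by acting to stop prediction errors from escalating.

    import random

    class ToyHomeostat:
        def __init__(self, set_point=37.0, learning_rate=0.3):
            self.set_point = set_point        # the state the agent "needs" to maintain
            self.predicted = set_point        # internal model: predicted sensory signal
            self.learning_rate = learning_rate

        def step(self, sensed):
            # Prediction error: mismatch between the internal model and the incoming signal
            error = sensed - self.predicted
            # Perception: nudge the internal model towards the evidence
            self.predicted += self.learning_rate * error
            # Action: push back on the world so the error does not escalate,
            # steering the sensed variable towards the set point
            action = -(self.predicted - self.set_point)
            return action, error

    # A crude "external world" that drifts randomly and is nudged by the agent's actions
    world_temperature = 39.0
    agent = ToyHomeostat()
    for t in range(20):
        action, error = agent.step(world_temperature)
        world_temperature += 0.5 * action + random.uniform(-0.2, 0.2)
        print(f"t={t:2d}  temp={world_temperature:5.2f}  prediction_error={error:+.2f}")

Real proposals along these lines involve far richer generative models, multiple competing needs, and layered feedback loops, but the basic cycle – predict, compare, act to reduce the discrepancy – is the one summarised in the bullet points above.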

But wait. All kinds of questions arise. Perhaps the most pressing one is this: If such systems can be built, should we build them?

That “should we” question gets a lot of attention in the closing sections of the book. We might end up with AIs that are conscious slaves, in ways that we don’t have to worry about for our existing AIs. We might create AIs that feel pain beyond anything that any previous conscious being has ever experienced. Equally, we might create AIs that behave very differently from those without consciousness – AIs that are more unpredictable, more adaptable, more resourceful, more creative – and more dangerous.

Solms is doubtful about any global moratorium on such experiments. Now that the ideas are out of the bag, so to speak, there will be many people – in both academia and industry – who are motivated to do additional research in this field.

What next? That’s a question that I’ll be exploring this Saturday, 6th March, when Mark Solms will be speaking to London Futurists. The title of his presentation will be “Towards an artificial consciousness”.

For more details of what I expect will be a fascinating conversation – and to register to take part in the live question and answer portion of the event – follow the links here.

24 August 2012

Duplication stuplication

Filed under: Accenture, Android, brain simulation, Connectivity, cryonics, death, futurist, Symbian — David Wood @ 12:04 am

I had a mixture of feelings when I looked at the display of the Agenda application on my Samsung Note smartphone:

On the face of things, I was going to be very busy at 09:00 that morning – I had five simultaneous meetings to attend!

But they were all the same meeting. And in fact I had already cancelled that meeting. Or, at least, I had tried to cancel that meeting. I had tried to cancel it several times.

The meeting in question – “TPR” – the Technology Planning Review that I chair from time to time inside Accenture Mobility – is a meeting I had organised, on a regularly repeating basis. This particular entry was set to repeat every four weeks. Some time earlier, I had decided that this meeting no longer needed to happen. From my Outlook Calendar on my laptop, I had pressed the button that, ordinarily, would have sent cancellation messages to all attendees. At first, things seemed to go well – the meeting disappeared from sight in my Outlook calendar.

However, a couple of hours later, I noticed it was still there, or had re-appeared. Without giving the matter much thought, I imagined I must have experienced some finger problem, and I repeated the cancellation process.

Some time later, I glanced at my engagements for that day on my smartphone – and my heart sank. The entry was shown no less than nine times, stacked on top of each other. One, two, three, four, five, six, seven, eight, nine. Woops.

(The screenshot above only shows the entry appearing five times. That’s because I deleted four of the occurrences before I had the presence of mind to record the image for posterity.)

To tell the truth, I also had a wry, knowing smile. It was a kind of “aha, this confirms that synchronising agendas can be hard” smile. “Thank goodness there are duplicate entry bugs on Android phones too!”

That uncharitable thought had its roots in many moments of personal embarrassment over the years, whenever I saw examples of duplicated entries on phones running Symbian OS. The software that synchronised agenda information across more than one device – for example, between a PC and a connected Symbian smartphone – got into a confused state on too many occasions. Symbian software had many strengths, but laser accuracy of agenda entry synchronisation was not one of them.

But in this case, there was no Symbian software involved. The bug – whatever it was – could not be blamed on any software (such as Symbian OS) for which I personally had any responsibility.

Nevertheless, I still felt bad. The meeting entry that I had created, and had broadcast to a wide number of my Accenture Mobility colleagues, was evidently misbehaving on their calendars. I had to answer several emails and instant messaging queries: Is this meeting happening or not?

Worse, the same problem applied to every one of the repeating entries in the series. Entries keep appearing in the calendars of lots of my Accenture colleagues, once every four weeks, encouraging them to turn up for a meeting that is no longer taking place.

Whether I tried to cancel all the entries in the series, or just an individual entry, the result was the same. Whether I cancelled them from my smartphone calendar or from Outlook on my laptop, the result was the same. Namely, the entry disappeared for a while, but re-appeared a few hours later.

Today I tried again. Looking ahead to the meeting slot planned for 30th August, I thought I would be smart, and deleted the entry, both from my smartphone calendar, and from Outlook on my laptop, within a few seconds of each other, just in case a defective synchronisation between the two devices was to blame. You guessed it: the result was the same. (Though this time it was about three hours before the entry re-appeared, and I was starting to think I had cracked it after all.)

So what’s going on? I found a clue in an unexpected place – the email folder of Deleted Items in Microsoft Outlook. This showed an email that was unread, but which had somehow moved directly into the Deleted Items folder, without me seeing it.

The entry read as follows:

Microsoft Outlook on behalf of <one of the meeting participants>

One or more problems with this meeting were detected by Exchange 2010.

This meeting is missing from your calendar. You’re the meeting organizer and some attendees still have the meeting on their calendar.

And just as Outlook had silently moved this email into the Deleted Items folder, without drawing my attention to it, Outlook had also silently reinstated the meeting, in my calendar and (it seems) in everyone else’s calendar, without asking me whether or not that was a good idea. Too darned clever.

I still don’t know how to fix this problem. I half-suspect there’s been some kind of database corruption problem – perhaps caused by Microsoft Exchange being caught out by:

  • Very heavy usage from large numbers of employees (100s of 1000s) within one company
  • Changes in policy for how online meetings are defined and operated, in between when the meeting was first created, and when it was due to take place
  • The weird weather we’ve experienced in London this summer
  • Some other who-knows-what strange environmental race conditions.

However, I hear tales of other colleagues experiencing similar issues with repeating entries they’ve created, which provides more evidence of a concrete software defect, rather than a random act of the universe.

Other synchronisation problems

As I said, when I reflected on what was happening, I had a wry smile. Synchronisation of complex data between different replicas is hard, when the data can be altered in more than one place at the same time.

Indeed, it’s sometimes a really hard problem for software to know when to merge apparent duplicates together, and when to leave them separated. I’m reminded of that fact whenever I do a search in the Contacts application on my Android phone. It often lists multiple entries corresponding to a single person. Some of these entries show pictures, but others don’t. At first, I wasn’t sure why there were multiple entries. But closer inspection showed that some details came from my Google mail archives, some from my collection of LinkedIn connections, some from my set of Facebook Friends, and so on. Should the smartphone simply automatically merge all these instances together? Not necessarily. It’s sometimes not clear whether the entries refer to the same person, or to two people with similar names.
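
As a rough illustration of why – a hypothetical sketch of my own, with made-up matching rules rather than a description of how any real Contacts app behaves – consider how little evidence the software sometimes has when deciding whether two entries are the same person:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Contact:
        name: str
        email: Optional[str]
        source: str   # e.g. "gmail", "linkedin", "facebook"

    def probably_same_person(a: Contact, b: Contact) -> bool:
        # Strong evidence: identical e-mail addresses
        if a.email and b.email and a.email.lower() == b.email.lower():
            return True
        # An identical name on its own is weak evidence: two different people
        # can easily share a name, so the cautious answer is "keep them separate"
        return False

    entries = [
        Contact("Ada Lovelace", "ada@example.com", "gmail"),
        Contact("Ada Lovelace", "ada@example.com", "linkedin"),
        Contact("John Smith", None, "facebook"),
        Contact("John Smith", None, "linkedin"),
    ]

    for i in range(len(entries)):
        for j in range(i + 1, len(entries)):
            verdict = "merge" if probably_same_person(entries[i], entries[j]) else "keep separate"
            print(entries[i].name, f"({entries[i].source} vs {entries[j].source}):", verdict)

An identical e-mail address is strong evidence for a merge; an identical name on its own is not – which is exactly why the safe default is to leave apparent duplicates unmerged, at the cost of a cluttered contacts list.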

If that’s a comparatively simple example, let me finish with an example that takes things further afield. It’s not about the duplication and potential re-integration of agenda entries. Nor is it about the duplication and potential re-integration of pieces of contacts info. It’s about the duplication and potential re-integration of human minds.

Yes: the duplication and potential re-integration of human minds.

That’s a topic that came up in a presentation at the World Future 2012 conference I attended in Toronto at the end of July.

The talk was given by John M. Smart, founder and president of the Acceleration Studies Foundation. The conference brochure described the talk as follows:

Chemical Brain Preservation: How to Live “Forever”

About 57 million unique and precious human beings die every year, or 155,000 people every day. The memories and identities in their brains are permanently lost at present, but may not be in the near future.

Chemical brain preservation is a technique that many scientists believe may inexpensively preserve our memories and identity when we die, eventually for less than $10,000 per person in the developed world, and less than $3,000 per person in the developing world. Preserved brains can be stored at room temperature in cemeteries, in contract storage, or even in private homes. Our organization, the Brain Preservation Foundation (brainpreservation.org), is offering a $100,000 prize to the first scientific team to demonstrate that the entire synaptic connectivity of mammalian brains, where neuroscientists believe our memories and identities reside, can be perfectly preserved using these low-cost chemical techniques.

There is growing evidence that chemically preserved brains can be “read” in the future, like a computer hard drive, so that memories, and even the complete identities of the preserved individuals can be restored, using low-cost automated techniques. Amazingly, given the accelerating rate of technological advance, a person whose brain is preserved in 2020 might “return” to the world, most likely in a computer form, as early as 2060, while their loved ones and some of their friends are still alive…

Note: this idea is different from cryonics. Cryonics also involves attempted brain preservation, at an ultra-low temperature, but with a view to re-animating the brain some time in the future, once medical science has advanced enough to repair whatever damage brought the person to the point of death. (Anyone serious about finding out more about cryonics might be interested in attending the forthcoming Alcor-40 conference, in October; this conference marks the 40th anniversary of the founding of the most famous cryonics organisation.)

In contrast, the Brain Preservation Foundation talks about reading the contents of a brain (in the future), and copying that information into a computer, where the person can be re-started. The process of reading the info from the brain is very likely to destroy the brain itself.

There are several very large questions here:

  • Could the data of a brain be read with sufficient level of detail, and recreated in another substrate?
  • Once recreated, could that copy of the brain be coaxed into consciousness?
  • Even if that brain would appear to remember all my experiences, and assert that it is me, would it be any less of a preservation of me than in the case of cryonics itself (assuming that cryonics re-animation could work)?
  • Given a choice between the two means of potential resurrection, which should people choose?

The first two of these questions are scientific, whereas the latter two appear to veer into metaphysics. But for what it’s worth, I would choose the cryonics option.

My concern about the whole program of “brain copying” is triggered when I wonder:

  • What happens if multiple copies of a mind are created? After all, once one copy exists in software, it’s probably just as easy to create many copies.
  • If these copies all get re-animated, are they all the same person?
  • Imagine how one of these copies would feel if told “We’re going to switch you off now, since you are only a redundant back-up; don’t worry, the other copies will be you too”

During the discussion in the meeting in Toronto, John Smart talked about the option to re-integrate different copies of a single mind, resulting in a whole that is somehow better than each individual copy. It sounds an attractive idea in principle. But when I consider the practical difficulties in re-integrating duplicated agenda entries, a wry, uneasy smile comes to my lips. Re-integrating complex minds will be a zillion times more complicated. That project could be the most interesting software development project ever.

31 January 2010

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?”, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet”.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point a finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder, what’s the problem with that?

A 2002 Steven B. Harris article in the Skeptic magazine, “The return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome”, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies, each pursuing an enlightened self-interest, frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers that were programmed to maximise income and profits for their owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance to subsequently push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new  supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself) we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase their speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to be able to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach apes calculus, in order to gain approval from apes for how much thrust to apply in a rocket missile system targeting a rapidly approaching earth-threatening meteorite.  The computers may well decide that there’s no time to try to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.
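
Put as a back-of-envelope calculation – with numbers that are purely illustrative, not estimates I am defending – the shape of the argument is that a small probability multiplied by a drastic outcome can still dominate the decision:

    # Purely illustrative numbers
    p_disaster = 0.10                  # 10% chance the flight explodes
    harm_if_disaster = 1_000_000       # severity of that outcome, in arbitrary "harm units"
    harm_if_you_skip_the_flight = 500  # inconvenience of not boarding

    print("Expected harm if you board:   ", p_disaster * harm_if_disaster)   # 100000.0
    print("Expected harm if you stay put:", harm_if_you_skip_the_flight)     # 500

    # The same shape of argument applies to a mere 1% chance of human-level AI
    # arriving, unprepared-for, within 20 years
    p_agi = 0.01
    harm_if_unprepared = 10_000_000
    print("Expected cost of ignoring it: ", p_agi * harm_if_unprepared)      # 100000.0

The point is not the particular figures, but that “unlikely” and “safe to ignore” are not the same thing.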

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation rather than design as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE), with a low-level copy of a human brain.  The idea is sometimes also called “uploads” since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from a human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.
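
As a concrete, deliberately over-simplified sketch of that kind of virtuous cycle – my own illustration with hypothetical function names, not a description of any real Google system – imagine a search feature where each human click feeds back into the ranking:

    from collections import defaultdict

    click_counts = defaultdict(int)    # feedback gathered from human interaction

    def search(query, documents):
        # Base relevance: naive keyword overlap, boosted by accumulated click feedback
        def score(doc):
            keyword_score = sum(word in doc.lower() for word in query.lower().split())
            return keyword_score + 0.1 * click_counts[doc]
        return sorted(documents, key=score, reverse=True)

    def record_click(doc):
        click_counts[doc] += 1         # the human's choice becomes a training signal

    docs = ["Whole brain emulation roadmap",
            "Brain emulation for beginners",
            "Emulating old game consoles"]
    print(search("brain emulation", docs))
    record_click("Brain emulation for beginners")
    record_click("Brain emulation for beginners")
    print(search("brain emulation", docs))   # the clicked result now ranks higher

Each interaction makes the system slightly more useful to the next person, which in turn attracts more interaction – the feedback loop Spector describes, in miniature.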

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+, UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above.

21 November 2008

Emulating the human brain

Filed under: AGI, brain simulation, UKTA — David Wood @ 7:00 pm

Artificial Intelligence (AI) already does a lot to help me in my life:

  • The real-time route calculation (and re-calculation) capabilities of my TomTom satnav system are extremely handy;
  • The automated language translation functionality inside Google web-search, whilst far from perfect, often allows me to understand at least the gist of webpages written in languages other than English;
  • The intelligent recommendation engine of Amazon frequently brings books to my attention that I am glad to investigate further.

On the other hand, the field of general AI has failed to progress as quickly as some of its supporters over the years had hoped. The Wikipedia article on the History of AI lists some striking examples of significant over-optimism among leading AI researchers:

  • 1958, H. A. Simon and Allen Newell: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.”
  • 1965, H. A. Simon: “machines will be capable, within twenty years, of doing any work a man can do.”
  • 1967, Marvin Minsky: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
  • 1970, Marvin Minsky (in Life Magazine): “In from three to eight years we will have a machine with the general intelligence of an average human being.”

Prospects for fast progress with general AI remain controversial. As we gather more and more silicon power into smartphones and other computers, will this mean these devices become more and more intelligent? Or will they simply be fast rather than generally intelligent?

In this context, one interesting line of analysis is to consider a separate but related question: to what extent will it be possible to create a silicon emulation of the brain itself (rather than to focus on algorithms for intelligence)?

My friend Anders Sandberg, Neuroethics researcher at the Future of Humanity Institute, Oxford University, will be addressing this question in a presentation tomorrow afternoon (Saturday 22nd November) in Central London. The presentation is entitled “Emulating brains: silicon dreams or the next big thing?”

Anders describes his talk as follows:

The idea of creating a faithful copy of a human brain has been a popular philosophical thought experiment and science fiction plot for decades. How close are we to actually doing it, how could it be done, and what would the consequences be? This talk will trace trends in computing, neuroscience, lab automation and microscopy to show how whole brain emulation could become feasible in the mid term future.

The talk is organised by the UKTA. Last weekend, at the Convergence08 “unconference” in Mountain View, California, Anders gave an earlier version of the same talk. George Dvorsky blogged the result:

Convergence08: Anders Sandberg on Whole Brain Emulation

The term ‘whole brain emulation’ sounds more scientific than it does science fiction like, which may bode well for its credibility as a genuine academic discipline and area for inquiry.

Sandberg presented his whole brain emulation roadmap which had a flowchart like quality to it — which he quipped must be scientific because it was filled with arrows.

Simulating memory could be very complex, possibly involving chemical transference in cells or drilling right down to the molecular level. We may even have to go down to the quantum level, but no neuroscientist that Anders knows takes that possibility seriously…

As Anders himself told me afterwards,

…interest was high but time limited – I got a lot of useful feedback and ideas for making the presentation better.

I’m expecting a fascinating discussion.

26 October 2008

The Singularity will go mainstream

Filed under: AGI, brain simulation, cryonics, Moore's Law, robots, Singularity — David Wood @ 1:49 pm

The concept of the coming technological singularity is going to enter mainstream discourse, and won’t go away. It will stop being something that can be dismissed as freaky or outlandish – something that is of interest only to marginal types and radical thinkers. Instead, it’s going to become something that every serious discussion of the future is going to have to contemplate. Writing a long-term business plan – or a long-term political agenda – without covering the singularity as one of the key topics, is increasingly going to become a sign of incompetence. We can imagine the responses, just a few years from now: “Your plan lacks a section on how the onset of the singularity is going to affect the take-up of your product. So I can’t take this proposal seriously”. And: “You’ve analysed five trends that will impact the future of our company, but you haven’t included the singularity – so everything else you say is suspect.”

In short, that’s the main realisation I reached by attending the Singularity Summit 2008 yesterday, in the Montgomery Theater in San Jose. As the day progressed, the evidence mounted up that the arguments in favour of the singularity will be increasingly persuasive, to wider and wider groups of people. Whether or not the singularity will actually happen is a slightly different question, but it’s no longer going to be possible to dismiss the concept of the singularity as irrelevant or implausible.

To back up my assertion, here are some of the highlights of what was a very full day:

Intel’s CTO and Corporate VP Justin Rattner spoke about “Countdown to Singularity: accelerating the pace of technological innovation at Intel”. He described a series of technological breakthroughs that would be likely to keep Moore’s Law operational until at least 2020, and he listed ideas for how it could be extended even beyond that. Rattner clearly has a deep understanding of the technology of semiconductors.

Dharmendra Modha, the manager of IBM’s cognitive computing lab at Almaden, explained how his lab had already utilised IBM super-computers to simulate an entire rat brain, with the simulation running at one tenth of real-time speed. He explained his reasons for expecting that his lab should be able to simulate an entire human brain, running at full speed, by 2018. This was possible as a result of the confluence of “three hard disruptive trends”:

  1. Neuroscience has matured
  2. Supercomputing meets the brain
  3. Nanotechnology meets the brain.

Cynthia Breazeal, Associate Professor of Media Arts and Sciences, MIT, drew spontaneous applause from the audience part-way through her talk, by showing a video of one of her socially responsive robots, Leonardo. The video showed Leonardo acting on beliefs about what various humans themselves believed (including beliefs that Leonardo could deduce were false). As Breazeal explained:

  • Up till recently, robotics has been about robots interacting with things (such as helping to manufacture cars)
  • In her work, robotics is about robots interacting with people in order to do things. Because humans are profoundly social, these robots will also have to be profoundly social – they are being designed to relate to humans in psychological terms. Hence the expressions of emotion on Leonardo’s face (and the other body language).

Marshall Brain, founder of “How Stuff Works”, also spoke about robots, and the trend for them to take over work tasks previously done by humans: McDonald’s waitresses, Wal-Mart shop assistants, vehicle drivers, construction workers, teachers…

James Miller, Associate Professor of Economics, Smith College, explicitly addressed the topic of how increasing belief in the likelihood of an oncoming singularity would change people’s investment decisions. Once people realise that, within (say) 20-30 years, the world could be transformed into something akin to paradise, with much greater lifespans and with abundant opportunities for extremely rich experiences, many will take much greater care than before to seek to live to reach that event. Interest in cryonics is likely to boom – since people can reason their bodies will only need to be vitrified for a short period of time, rather than having to trust their descendants to look after them for unknown hundreds of years. People will shun dangerous activities. They’ll also avoid locking money into long-term investments. And they’ll abstain from lengthy training courses (for example, to master a foreign language) if they believe that technology will shortly render as irrelevant all the sweat of that arduous learning.

Not every speaker was optimistic. Well-known author and science journalist John Horgan gave examples of where the progress of science and technology has been, not exponential, but flat:

  • nuclear fusion
  • ending infectious diseases
  • Richard Nixon’s “war on cancer”
  • gene therapy treatments
  • treating mental illness.

Horgan chided advocates of the singularity for their use of “rhetoric that is more appropriate to religion than science” – thereby risking damaging the standing of science at a time when science needs as much public support as it can get.

Ray Kurzweil, author of “The Singularity is Near”, responded to this by agreeing that not every technology progresses exponentially. However, those that become information sciences do experience significant growth. As medicine and health increasingly become digital information sciences, they are experiencing the same effect. Although in the past I’ve thought that Kurzweil sometimes overstates his case, on this occasion I thought he spoke with clarity and restraint, and with good evidence to back up his claims. He also presented updated versions of the graphs from his book. In the book, these graphs tended to stop around 2002. The slides Kurzweil showed at the summit continued up to 2007. It does appear that the rate of progress with information sciences is continuing to accelerate.

Earlier in the day, science fiction author and former maths and computing science professor Vernor Vinge gave his own explanation for this continuing progress:

Around the world, in many fields of industry, there are hundreds of thousands of people who are bringing the singularity closer, through the improvements they’re bringing about in their own fields of research – such as enhanced human-computer interfaces. They mainly don’t realise they are advancing the singularity – they’re not working to an agreed overriding vision for their work. Instead, they’re doing what they’re doing because of the enormous incremental economic plus of their work.

Under questioning by CNBC editor and reporter Bob Pisani, Vinge said that he sticks with the forecast he made many years ago, that the singularity would (“barring major human disasters”) happen by 2030. Vinge also noted that rapidly improving technology made the future very hard to predict with any certainty. “Classic trendline analysis is seriously doomed.” Planning should therefore focus on scenario evaluation rather than trend lines. Perhaps unsurprisingly, Vinge suggested that more forecasters should read science fiction, where scenarios can be developed and explored. (Since I’m midway through reading and enjoying Vinge’s own most recent novel, “Rainbows End” – set in 2025 – I agree!)

Director of Research at the Singularity Institute, Ben Goertzel, described a staircase of potential applications for the “OpenCog” system of “Artificial General Intelligence” he has been developing with co-workers (partially funded by Google, via the Google Summer of Code):

  • Teaching virtual dogs to dance
  • Teaching virtual parrots to talk
  • Nurturing virtual babies
  • Training virtual scientists that can read vast swathes of academic papers on your behalf
  • And more…

Founder and CSO of Innerspace Foundation, Pete Estep, gave perhaps one of the most thought-provoking presentations. The goal of Innerspace is, in short, to improve brain functioning. In more detail, “To establish bi-directional communication between the mind and external storage devices.” Quoting from the FAQ on the Innerspace site:

The IF [Innerspace Foundation] is dedicated to the improvement of human mind and memory. Even when the brain operates at peak performance learning is slow and arduous, and memory is limited and faulty. Unfortunately, other of the brain’s important functions are similarly challenged in our complex modern world. As we age, these already limited abilities and faculties erode and fail. The IF supports and accelerates basic and applied research and development for improvements in these areas. The long-term goal of the foundation is to establish relatively seamless two-way communication between people and external devices possessing clear data storage and computational advantages over the human brain.

Estep explained that he was a singularity agnostic: “it’s beyond my intellectual powers to decide if a singularity within 20 years is feasible”. However, he emphasised that it is evident to him that “the singularity might be near”. And this changes everything. Throughout history, and extending round the world even today, “there have been too many baseless fantasies and unreasonable rationalisations about the desirability of death”. The probable imminence of the singularity will help people to “escape” from these mind-binds – and to take a more vigorous and proactive stance towards planning and actually building desirable new technology. The singularity that Estep desires is one, not of super-powerful machine intelligence, but one of “AI+BCI: AI combined with a brain-computer interface”. This echoed words from robotics pioneer Hans Moravec that Vernor Vinge had reported earlier in the day:

“It’s not a singularity if you are riding the curve. And I intend to ride the curve.”

On the question of how to proactively improve the chances for beneficial technological development, Peter Diamandis spoke outstandingly well. He’s the founder of the X-Prize Foundation. I confess I hadn’t previously realised anything like the scale and the accomplishment of this Foundation. It was an eye-opener – as, indeed, was the whole day.
