
3 June 2012

Super-technology and a possible renaissance of religion

Filed under: death, disruption, Humanity Plus, rejuveneering, religion, Singularity, UKH+ — David Wood @ 11:02 pm

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke

Imagine that the human race avoids self-destruction and continues on the path of increased mastery of technology. Imagine that, as seems credible, humans will some time in the future gain the ability to keep everyone alive indefinitely, in an environment of great abundance, variety, and intrinsic interest.

That paradise may be a fine outcome for our descendants, but unless the pace of technology improvement becomes remarkably rapid, it seems to have little direct impact on our own lives. Or does it?

It may depend on exactly how much power our god-like descendants eventually acquire.  For example, here are two of the points from a radical vision of the future known as the Ten cosmist convictions:

  • 5) We will develop spacetime engineering and scientific “future magic” much beyond our current understanding and imagination.
  • 6) Spacetime engineering and future magic will permit achieving, by scientific means, most of the promises of religions — and many amazing things that no human religion ever dreamed. Eventually we will be able to resurrect the dead by “copying them to the future”.

Whoa! “Resurrect the dead”, by “copying them to the future”. How might that work?

In part, by collecting enormous amounts of data about the past – reconstructing information from numerous sources. It’s similar to collecting data about far-distant stars using a very large array of radio telescopes. And in part, by re-embodying that data in a new environment, similar to copying running software onto a new computer, giving it a new lease of life.

Lots of questions can be asked about the details:

  • Can sufficient data really be gathered in the future, in the face of all the degradation commonly called “the second law of thermodynamics”, that would allow a sufficiently high-fidelity version of me (or anyone else) to be re-created?
  • If a future super-human collected lots of data about me and managed to get an embodiment of that data running on some future super-computer, would that really amount to resurrecting me, as opposed to creating a copy of me?

I don’t think anyone can be confident about the answers to such questions. But it’s at least conceivable that remarkably advanced technology of the future may allow positive answers.

In other words, it’s at least conceivable that our descendants will have the god-like ability to recreate us in the future, giving us an unexpected prospect for immortality.

This makes sense of the remark by radical futurist and singularitarian Ray Kurzweil at the end of the film “Transcendent Man”:

Does God exist? Well I would say, not yet

Other radical futurists quibble over the “not yet” caveat. In his recent essay “Yes, I am a believer”, Giulio Prisco takes the discussion one stage further:

Gods will exist in the future, and they may be able to affect their past — our present — by means of spacetime engineering. Probably other civilizations out there already attained God-like powers.

Giulio notes that even the celebrated critic of theism, Richard Dawkins, gives some support to this line of thinking.  For example, here’s an excerpt from a 2011 New York Times interview, in which Dawkins discusses an essay written by theoretical physicist Freeman Dyson:

In one essay, Professor Dyson casts millions of speculative years into the future. Our galaxy is dying and humans have evolved into something like bolts of superpowerful intelligent and moral energy.

Doesn’t that description sound an awful lot like God?

“Certainly,” Professor Dawkins replies. “It’s highly plausible that in the universe there are God-like creatures.”

He raises his hand, just in case a reader thinks he’s gone around a religious bend. “It’s very important to understand that these Gods came into being by an explicable scientific progression of incremental evolution.”

Could they be immortal? The professor shrugs.

“Probably not.” He smiles and adds, “But I wouldn’t want to be too dogmatic about that.”

As Giulio points out, Dawkins develops a similar line of argument in part of his book “The God Delusion”:

Whether we ever get to know them or not, there are very probably alien civilizations that are superhuman, to the point of being god-like in ways that exceed anything a theologian could possibly imagine. Their technical achievements would seem as supernatural to us as ours would seem to a Dark Age peasant transported to the twenty-first century…

In what sense, then, would the most advanced SETI aliens not be gods? In what sense would they be superhuman but not supernatural? In a very important sense, which goes to the heart of this book. The crucial difference between gods and god-like extraterrestrials lies not in their properties but in their provenance. Entities that are complex enough to be intelligent are products of an evolutionary process. No matter how god-like they may seem when we encounter them, they didn’t start that way…

Giulio seems more interested in the properties than the provenance. The fact that these entities have god-like powers prompts him to proclaim “Yes, I am a believer”.  He gives another reason in support of that proclamation: In contrast to the views of so-called militant atheists, Giulio is “persuaded that religion can be a powerful and positive force”.

Giulio sees this “powerful and positive force” as applying to him personally as well as to groups in general:

“In my beliefs I find hope, happiness, meaning, the strength to get through the night, and a powerful sense of wonder at our future adventures out there in the universe, which gives me also the drive to try to be a better person here-and-now on this little planet and make it a little better for future generations”.

More controversially, Giulio has taken to describing himself (e.g. on his Facebook page) as a “Christian”. Referring back to his essay and the ensuing online discussion, he writes:

Religion can, and should, be based on mutual tolerance, love and compassion. Jesus said: “love thy neighbor as thyself,” and added: “let he who is without sin, cast the first stone”…

This is the important part of his teachings in my opinion. Christian theology is interesting, but I think it should be reformulated for our times…

Was Jesus the Son of God? I don’t think this is a central issue. He certainly was, in the sense that we all are, and he may have been one of those persons in tune with the universe, more in tune with the universe than the rest of us, able to glimpse at veiled realities beyond our senses.

I’ve known Giulio for several years, from various Humanity+ and Singularity meetings we’ve both attended – dating back to “Transvision 2006” in Helsinki. I respect him as a very capable thinker, and I take his views seriously. His recent “Yes, I am a believer” article has stirred up a hornets’ nest of online criticism.

Accordingly, I was very pleased that Giulio accepted my invitation to come to London to speak at a London Futurist / Humanity+ UK meeting on Saturday 14th July: “Transhumanist Religions 2.0: New Cosmist religion and spirituality for our boundless future (and our troubled present)”. For all kinds of reasons, this discussion deserves a wider airing.

First, I share the view that religious sentiments can provide cohesion and energy to propel individuals and groups to undertake enormously difficult projects (such as the project to avoid the self-destruction of the human race, or any drastic decline in the quality of global civilisation).  The best analysis I’ve read of this point is in the book “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society” by David Sloan Wilson.  As I’ve written previously:

This book has sweeping scope, but makes its case very well.  The case is that religion has in general survived inasmuch as it helped groups of people to achieve greater cohesion and thereby acquire greater fitness compared to other groups of people.  This kind of religion has practical effect, independent of whether or not its belief system corresponds to factual reality.  (It can hardly be denied that, in most cases, the belief system does not correspond to factual reality.)

The book has some great examples – from the religions in hunter-gatherer societies, which contain a powerful emphasis on sharing out scarce resources completely equitably, through examples of religions in more complex societies.  The chapter on John Calvin was eye-opening (describing how his belief system brought stability and prosperity to Geneva) – as were the sections on the comparative evolutionary successes of Judaism and early Christianity.  But perhaps the section on the Balinese water-irrigation religion is the most fascinating of the lot.

Of course, there are some other theories for why religion exists (and is so widespread), and this book gives credit to these theories in appropriate places.  However, this pro-group selection explanation has never before been set out so carefully and credibly, and I think it’s no longer possible to deny that it plays a key role.

The discussion makes it crystal clear why many religious groups tend to treat outsiders so badly (despite treating insiders so well).  It also provides a fascinating perspective on the whole topic of “forgiveness”.  Finally, the central theme of “group selection” is given a convincing defence.

But second, there’s no doubt that religion can fit blinkers over people’s thinking abilities, and prevent them from weighing up arguments dispassionately. Whenever people talk about the Singularity movement as having the shape of a religion – with Ray Kurzweil as a kind of infallible prophet – I shudder. But we needn’t lurch to that extreme. We should be able to maintain the discipline of rigorous independent thinking within a technologically-informed renaissance of positive religious sentiment.

Third, if the universe really does have beings with God-like powers, what attitude should we adopt towards these beings? Should we be seeking in some way to worship them, or placate them, or influence them? It depends on whether these beings are able to influence human history, here and now, or whether they are instead restricted (by raw facts of space and time that even God-like beings have to respect) to observing us and (possibly) copying us into the future.

Personally, my bet is on the latter. For example, I’m not convinced by people who claim evidence to the contrary. And if these beings did have the ability to intervene in human history, but have failed to do so, that failure would be evidence of scant interest in widespread, intense human suffering. They would hardly be benevolent super-beings.

In that case, the focus of our effort should remain squarely on building the right conditions for super-technology to benefit humanity as a whole (this is the project I call “Inner Humanity+”), rather than on somehow seeking to attract the future attention of these God-like beings. But no doubt others will have different views!

16 October 2011

Human regeneration – limbs and more

Filed under: healthcare, medicine, rejuveneering, risks, Singularity — David Wood @ 1:57 am

Out of the many interesting presentations on Day One of the 2011 Singularity Summit here in New York, the one that left me with the most to think about was “Regenerative Medicine: Possibilities and Potential” by Dr. Stephen Badylak.

Dr Badylak is deputy director of the McGowan Institute for Regenerative Medicine, and a Professor in the Department of Surgery at the University of Pittsburgh. In his talk at the Singularity Summit, he described some remarkable ways in which the human body can heal itself – provided it is given suitable “scaffolding” that triggers the healing.

One of the examples Dr Badylak discussed is also covered in a recent article in Discover Magazine, How Pig Guts Became the Next Bright Hope for Regenerating Human Limbs.  The article deserves reading all the way through. Here are some short extracts from the beginning:

When he first arrived in the trauma unit of San Antonio’s Brooke Army Medical Center in December 2004, Corporal Isaias Hernandez’s leg looked to him like something from KFC. “You know, like when you take a bite out of the drumstick down to the bone?” Hernandez recalls. The 19-year-old Marine, deployed in Iraq, had been trying to outfit his convoy truck with a makeshift entertainment system for a long road trip when the bomb exploded. The 12-inch TV he was clutching to his chest shielded his vital organs; his buddy carrying the DVDs wasn’t so lucky.

The doctors kept telling Hernandez he would be better off with an amputation. He would have more mobility with a prosthetic, less pain. When he refused, they took a piece of muscle from his back and sewed it into the hole in his thigh. He did all he could to make it work. He grunted and sweated his way through the agony of physical therapy with the same red-faced determination that got him through boot camp. He even sneaked out to the stairwell, something they said his body couldn’t handle, and dragged himself up the steps until his leg seized up and he collapsed.

Generally people never recovered from wounds like his. Flying debris had ripped off nearly 70 percent of Hernandez’s right thigh muscle, and he had lost half his leg strength. Remove enough of any muscle and you might as well lose the whole limb, the chances of regeneration are so remote. The body kicks into survival mode, pastes the wound over with scar tissue, and leaves you to limp along for life….

Hernandez recalled that one of his own doctors—Steven Wolf, then chief clinical researcher for the United States Army Institute of Surgical Research in Texas—had once mentioned some kind of experimental treatment that could “fertilize” a wound and help it heal. At the time, Hernandez had dismissed the therapy as too extreme. The muscle transplant sounded safer, easier. Now he changed his mind. He wanted his leg back, even if it meant signing himself up as a guinea pig for the U.S. Army.

So Hernandez tracked down Wolf, and in February 2008 the two got started. First, Wolf put Hernandez through another grueling course of physical therapy to make sure he had indeed pushed any new muscle growth to the limit. Then he cut open Hernandez’s thigh and inserted a paper-thin slice of the same material used to make the pixie dust: part of a pig’s bladder known as the extracellular matrix, or ECM, a fibrous substance that occupies the spaces between cells. Once thought to be a simple cellular shock absorber, ECM is now understood to contain powerful proteins that can reawaken the body’s latent ability to regenerate tissue.

A few months after the surgery healed, Wolf assigned the young soldier another course of punishing physical therapy. Soon something remarkable began to happen. Muscle that most scientists would describe as gone forever began to grow back. Hernandez’s muscle strength increased by 30 percent from what it was before the surgery, and then by 40 percent. It hit 80 percent after six months. Today it is at 103 percent—as strong as his other leg. Hernandez can do things that were impossible before, like ease gently into a chair instead of dropping into it, or kneel down, ride a bike, and climb stairs without collapsing, all without pain

The challenge now is replicating Hernandez’s success in other patients. The U.S. Department of Defense, which received a congressional windfall of $80 million to research regenerative medicine in 2008, is funding a team of scientists based at the University of Pittsburgh’s McGowan Institute for Regenerative Medicine to oversee an 80-patient study of ECM at five institutions. The scientists will attempt to use the material to regenerate the muscle of patients who have lost at least 40 percent of a particular muscle group, an amount so devastating to limb function that it often leads doctors to perform an amputation.

If the trials are successful, they could fundamentally change the way we treat patients with catastrophic limb injuries. Indeed, the treatment might someday allow patients to regrow missing or mangled body parts. With an estimated 1.7 million people in the United States alone missing limbs, promoters of regenerative medicine eagerly await the day when therapies like ECM work well enough to put the prosthetics industry out of business.

The interesting science is the explanation of the role of the ECM – the extracellular matrix, which provides the scaffolding that allows the healing to take place. The healing turns out to involve the body directing stem cells to the scaffolding. These stem cells then differentiate into muscle cells, nerve cells, blood cells, and so on. There’s also some interesting science to explain why the body doesn’t reject the ECM that’s inserted into it.

Badylak speaks with confidence of the treatment one day allowing the regeneration of damaged human limbs, akin to what happens with salamanders.  He also anticipates the healing of brain tissue damaged by strokes.

Later that morning, another speaker at the Singularity Summit, Michael Shermer, referred to Dr Badylak’s presentation. Shermer is a well-known sceptic – indeed, he’s the publisher of Skeptic magazine.  Shermer often participates in public debates with believers in various religions and new-age causes.  Shermer mentioned that, at these debates, his scientific open mindedness is sometimes challenged.  “OK, if you are open-minded, as you claim, what evidence would make you believe in God?”  Shermer typically gives the answer that, if someone with an amputated limb were to have that limb regrow, that would be reason for him to become a believer:

Most religious claims are testable, such as prayer positively influencing healing. In this case, controlled experiments to date show no difference between prayed-for and not-prayed-for patients. And beyond such controlled research, why does God only seem to heal illnesses that often go away on their own? What would compel me to believe would be something unequivocal, such as if an amputee grew a new limb. Amphibians can do it. Surely an omnipotent deity could do it. Many Iraqi War vets eagerly await divine action.

However, Shermer joked with the Singularity Summit audience, it now appears that Dr Badylak might be God.  The audience laughed.

But there’s a serious point at stake here. The Singularity Summit is full of talks about humans being on the point of gaining powers that, in previous ages, would have been viewed as Divine. With great power comes great responsibility. As veteran ecologist and environmentalist Stewart Brand wrote at the very start of his recent book “Whole Earth Discipline”,

We are as gods and HAVE to get good at it.

In the final talk of the day, cosmologist Professor Max Tegmark addressed the same theme.  He gave an estimate of “between 1/10 and 1/10,000” for the probability of human extinction during any decade in the near-term future – extinction arising from (for example) biochemical warfare, runaway global warming, nanotech pollution, or a bad super-intelligence singularity. In contrast, he said, only a tiny fraction of the global GDP is devoted to management of existential risks.  That kind of “lack of paying attention” meant that humanity deserved, in Tegmark’s view, a “mid-term rating” of just D-.  Our focus, far too much of the time, is on the next election cycle, or the next quarterly financial results, or other short term questions.
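As an aside, it’s worth seeing how those per-decade figures compound.  Here’s a minimal sketch (my own arithmetic, not Tegmark’s, assuming the per-decade risk is constant and independent from decade to decade):

```python
# Cumulative extinction risk over a century, for Tegmark's two bounds.
# Assumes the per-decade risk p is constant and independent each decade.
for p in (1 / 10, 1 / 10_000):
    century_risk = 1 - (1 - p) ** 10   # fail to survive all ten decades
    print(f"per-decade risk {p:g} -> risk over a century ~{century_risk:.2%}")
```

At the pessimistic end of his range, that compounds to roughly a 65% chance of extinction within a century – which puts the D- rating in perspective.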

One person who is seeking to encourage greater attention to be paid to existential risks is co-founder of Skype, Jaan Tallinn (who earlier in the year gave a very fine talk at a Humanity+ event I organised in London).  Jaan’s main presentation at the 2011 Singularity Summit will be on Day Two, but he briefly popped up on stage on Day One to announce a significant new fundraising commitment: he will personally match any donations made over the weekend to the Singularity Institute, up to a total of $100,000.

With the right resources, wisely deployed, we ought to see collective human intelligence achieve lots more regeneration – not just of broken limbs, but also of troubled societies and frustrated lives – whilst at the same time steering humanity away from the existential risks latent in these super-powerful technologies.  The discussion will continue tomorrow.

29 July 2011

Towards a mind-stretching weekend in New York

Filed under: AGI, futurist, leadership, nanotechnology, robots, Singularity — David Wood @ 9:19 pm

I’ve attended the annual Singularity Summit twice before – in 2008 and in 2009.  I’ve just registered to attend the 2011 event, which is taking place in New York on 15th-16th October.  Here’s why.

On both previous occasions, the summits featured presentations that gave me a great deal to think about, on arguably some of the most significant topics in human history.  These topics include the potential emergence, within the lifetimes of many people alive today, of:

  • Artificial intelligence which far exceeds the capabilities of even the smartest group of humans
  • Robots which far exceed the dexterity, balance, speed, strength, and sensory powers of even the best human athletes, sportspeople, or soldiers
  • Super-small nanobots which can enter the human body and effect far more thorough repairs and enhancements – to both body and mind – than even the best current medical techniques.

True, at the previous events, there were some poor presentations too – which is probably inevitable given the risky cutting-edge nature of the topics being covered.  But the better presentations far outweighed the worse ones.

And as well as the presentations, I greatly enjoyed the networking with the unusual mix of attendees – people who had taken the time to explore many of the fascinating hinterlands of modern technology trends.  If someone is open-minded enough to give serious thought to the ideas listed above, they’re often open-minded enough to entertain lots of other unconventional ideas too.  I frequently found myself in disagreement with these attendees, but the debate was deeply refreshing.

Take a look at the list of confirmed speakers so far: which of these people would you most like to bounce ideas off?

The summit registration page is now open.  As I type these words, that page states that the cost of tickets is going to increase after 31 July.  That’s an argument for registering sooner rather than later.

To provide more information, here’s a copy of the press release for the event:

Singularity Summit 2011 in New York City to Explore Watson Victory in Jeopardy

New York, NY – This October 15-16th in New York City, a TED-style conference gathering innovators from science, industry, and the public will discuss IBM’s ‘Watson’ computer and other exciting developments in emerging technologies. Keynote speakers at Singularity Summit 2011 include Jeopardy! champion Ken Jennings and famed futurist and inventor Ray Kurzweil. After losing to an IBM computer in Jeopardy!, Jennings wrote, “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

In February, Watson defeated two human champions in Jeopardy!, the game show famous for its mind-bending trivia questions. Surprising millions of TV viewers, Watson took down champions Ken Jennings and Brad Rutter for the $1 million first prize. Facing defeat on the final show, competitor Ken Jennings jokingly wrote in parentheses on his last answer: “I for one welcome our new computer overlords.” Besides Watson, the Singularity Summit 2011 will feature speakers on robotics, nanotechnology, biotechnology, futurism, and other cutting-edge technologies, and is the only conference to focus on the technological Singularity.

Responding to Watson’s victory, leading computer scientist Ray Kurzweil said, “Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.” In Kurzweil’s view, the combination of language understanding and pattern recognition that Watson displays would make its descendants “far superior to a human”. Kurzweil is known for predicting computers whose conversations will be indistinguishable from people by 2029.

Beyond artificial intelligence, the Singularity Summit will also focus on high-tech and where it is going. Economist Tyler Cowen will examine the economic impacts of emerging technologies. Cowen argued in his recent book The Great Stagnation that modern society is on a technological plateau where “a lot of our major innovations are springing up in sectors where a lot of work is done by machines, not by human beings.” Tech entrepreneur and investor Peter Thiel, who sits on the board of directors of Facebook, will share his thoughts on innovation and jumpstarting the economy.

Other speakers include MIT cosmologist Max Tegmark, Allen Brain Institute chief scientist Christof Koch, co-founder of Skype Jaan Tallinn, robotics professors James McLurkin and Robin Murphy, Bionic Builders host Casey Pieretti, the MIT Media Lab’s Riley Crane, MIT polymath Alexander Wissner-Gross, filmmaker and television personality Jason Silva, and Singularity Institute artificial intelligence researcher Eliezer Yudkowsky.

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than you expected, a new breed of artificial intelligence is bearing down on you, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid “post-human”.

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.
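To see why that particular point matters so much, here’s a toy model (my own construction, not Jaan’s, with made-up improvement rates): steady human-driven progress grows roughly linearly, while a system that improves its own ability to improve compounds.

```python
# Toy comparison: linear human-driven progress vs compounding
# self-improvement, once computers write (and rewrite) their own software.
human_driven = 1.0
self_improving = 1.0

for year in range(1, 11):
    human_driven += 0.5        # assumed: fixed gain per year of human effort
    self_improving *= 1.5      # assumed: gain proportional to current capability
    print(f"year {year:2d}: human-driven {human_driven:5.1f}, "
          f"self-improving {self_improving:7.1f}")
```

The absolute numbers are meaningless; the shape of the divergence is the argument.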

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”.

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
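A rough way to quantify that overhang (my own numbers, assuming a Moore’s-Law-style doubling of hardware capability every 18 months):

```python
# Hardware overhang: how much faster the available hardware becomes,
# for each year that the AI algorithm remains unsolved.
DOUBLING_TIME_YEARS = 1.5   # assumed Moore's-Law-style doubling period

def overhang(years_unsolved: float) -> float:
    """Factor by which hardware exceeds what was needed at year zero."""
    return 2 ** (years_unsolved / DOUBLING_TIME_YEARS)

for years in (5, 10, 15):
    print(f"{years:2d} years unsolved -> ~{overhang(years):,.0f}x overhang")
```

Fifteen years of unsolved algorithm means three orders of magnitude of surplus hardware – which is all the “several orders of magnitude” claim requires.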

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  How quickly that can spread around the Internet.  Now imagine malware whose author is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
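The headline numbers are easy to check.  Here’s a back-of-envelope version (my own simplification, assuming the yield scales linearly with the lithium that actually acts as fuel):

```python
# Castle Bravo, back of the envelope: designers assumed only lithium-6
# (40% of the lithium) was reactive; in fact lithium-7 also bred tritium,
# so effectively all of the lithium contributed as fuel.
assumed_fuel_fraction = 0.40    # lithium-6 only
actual_fuel_fraction = 1.00     # lithium-6 plus lithium-7 (approximately)

error_factor = actual_fuel_fraction / assumed_fuel_fraction   # 2.5x
predicted_max_megatons = 6.0    # top of the predicted 4-6 Mt range

print(f"fuel error factor: {error_factor:.1f}x")
print(f"implied yield: ~{predicted_max_megatons * error_factor:.0f} Mt "
      f"(observed: 15 Mt)")
```

6 Mt × 2.5 = 15 Mt – the observed yield, recovered from one wrong assumption about one isotope.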

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than by a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

19 March 2011

A singularly fine singularitarian panel?

Filed under: futurist, Humanity Plus, Kurzweil, Singularity — David Wood @ 12:37 pm

In a moment, I’ll get to the topic of a panel discussion on the Singularity – a panel I’ve dubbed (for reasons which should become clear) “Post Transcendent Man”. It’s a great bunch of speakers, and I’m expecting an intellectual and emotional mindfest.  But first, some background.

In the relatively near future, I expect increasing numbers of people to navigate the sea change described recently by writer Philippe Verdoux in his article Transhumanists coming out of the closet:

It wasn’t that long ago that listing transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on, as interests on one’s CV might result in a bit of embarrassment.

Over just the past decade and a half, though, there seems to have been a sea change in how these issues are perceived by philosophers and others: many now see them as legitimate subjects of research; they have, indeed, acquired a kind of academic respectability that they didn’t previously possess.

There are no doubt many factors behind this shift. For one, it seems to be increasingly apparent, in 2011, that technology and biology are coming together to form a new kind of cybernetic unity, and furthermore that such technologies can be used to positively enhance (rather than merely alter) features of our minds and bodies.

In other words, the claim that humans can “transcend” (a word I don’t much like, by the way) our biological limitations through the use of enhancement technologies seems to be increasingly plausible – that is, empirically speaking.

Thus, it seems to be a truism about our contemporary world that technology will, in the relatively near future, enable us to alter ourselves in rather significant ways. This is one reason, I believe, that more philosophers are taking transhumanism seriously…

On a personal note, when I first discovered transhumanism, I was extremely skeptical about its claims (which, by the way, I think every good scientific thinker should be). I take it that transhumanism makes two claims in particular, the first “descriptive” and the second “normative”: (i) that future technologies will make it possible for us to radically transform the human organism, potentially enabling us to create a new species of technologized “posthumans”; and (ii) that such a future scenario is preferable to all other possible scenarios. In a phrase: we not only can but ought to pursue a future marked by posthumanity…

One factor that leads people to pay more serious attention to this bundle of ideas – transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on – is the increasing coverage of these ideas in thoughtful articles in the mainstream media.  In turn, many of these articles have been triggered by the film Transcendent Man by director Barry Ptolemy, featuring the groundbreaking but controversial ideas and projects of inventor and futurist Ray Kurzweil.  Here’s a trailer for the film:

The film has received interesting commentary in a number of places.

I had mixed views when watching the movie myself:

  • On the one hand, it contains a large number of profound sound bites – statements made by many of the talking heads on screen; any of these sound bites could, potentially, change someone’s life, if they reflect on the implications;
  • The film also covers many details of Kurzweil’s own biography, with archive footage of him at different stages of his career – this filled in many gaps in my own understanding, and gave me renewed respect for what he has accomplished as a professional;
  • On the other hand, although there are plenty of critical comments among the sound bites – comments highlighting potential problems or issues with Kurzweil’s ideas – the film never really lets the debate fly;
  • I found myself thinking – yes, that’s an interesting and important point, now let’s explore this further – but then the movie switched to a different frame.

The movie has its official UK premiere at the London Science Museum on Tuesday 5th April.  Kurzweil himself will be in attendance, to answer questions raised by the audience.  The last time I checked, tickets were sold out.

Post Transcendent Man

To drill down more deeply into the potentially radical implications of Kurzweil’s ideas and projects, the UK chapter of Humanity+ has arranged an event in  Birkbeck College (WC1E 7HX), Torrington Square in Central London on the afternoon (2pm-4.15pm) of Saturday 9th April.  We’ll be in Malet Street lecture room B34 – which seats a capacity audience of 177 people.  For more details about logistics, registration, and so on, see the official event website, or the associated Facebook page.

The event is privileged to feature an outstanding set of speakers and panellists who represent a range of viewpoints about the Singularity, transhumanism, and human transcendence.  In alphabetical order by first name:

Dr Anders Sandberg is a James Martin research fellow at the Future of Humanity Institute at Oxford University. As a part of the Oxford Martin School he is involved in interdisciplinary research on cognitive enhancement, neurotechnology, global catastrophic risks, emerging technologies and applied rationality. He has been writing about and debating transhumanism, future studies, neuroethics and related questions for a long time. He is also an associate of the Oxford Centre for Neuroethics and the Uehiro Centre for Practical Ethics, as well as co-founder of the Swedish think tank Eudoxa.

Jaan Tallinn is one of the programmers behind Kazaa and a founding engineer of Skype. He is also a partner in Ambient Sound Investments as well as a member of the Estonian President’s Academic Advisory Board. He describes himself as singularitarian/hacker/investor/physicist (in that order). In recent years Jaan has found himself closely following and occasionally supporting the work that SIAI and FHI are doing. He agrees with Kurzweil that the topic of the Singularity can be extremely counterintuitive to the general public, and has tried to address this problem in a few public presentations at various venues.

Nic Brisbourne is a partner at venture capital fund DFJ Esprit and blogger on technology and startup issues at The Equity Kicker. As such he’s interested in when technology and science projects become products and businesses. He has a personal interest in Kurzweil’s ideas and longevity in particular and he says he’s keen to cross the gap from personal to professional and find exciting startups generating products in this area, although he thinks that the bulk of the commercialisation opportunities are still a year or two out.

Paul Graham Raven is a writer, literary critic and bootstrap big-picture futurist; he prods regularly at the fuzzy boundary of the unevenly-distributed future at futurismic.com. He is Editor-in-Chief and Publisher of The Dreaded Press, a rock music reviews webzine, and Publicist and PR officer for PS Publishing – perhaps the UK’s foremost boutique genre publisher. He says he’s also a freelance web-dev to the publishing industry, a cack-handed fuzz-rock guitarist, and in need of a proper haircut.

Russell Buckley is a leading practitioner, speaker and thinker about mobile and mobile marketing. MobHappy, his blog about mobile technology, is one of the most established focusing on this area. He is also a previous Global Chairman of the Mobile Marketing Association, a founder of Mobile Monday in Germany and holds numerous non-executive positions in mobile technology companies. Russell learned about the mobile advertising startup AdMob soon after its launch, and joined as its first employee in 2006, with the remit of launching AdMob into the EMEA market. Four years later, AdMob was sold to Google for $750m. By night though, Russell is fascinated by the socio-political implications of technology and recently graduated from the Executive Program at the Singularity University, founded by Ray Kurzweil and Peter Diamandis to “educate and inspire leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges”.

The discussion continues

The event will start, at 2pm, with the panellists introducing themselves, and their core thinking about the topics under discussion.  As chair, I’ll ask a few questions, and then we’ll open up for questions and comments from the audience.  I’ll be particularly interested to explore:

  • How people see the ideas of accelerating technology making a difference in their own lives – both personally and professionally.  Three of us on the stage were on founding teams of companies that made sizeable waves in the technology world (Jaan Tallinn, Skype; Russell Buckley, AdMob; myself, Symbian).  Where do we see rapidly evolving technology (as often covered by Kurzweil) taking us next?
  • People’s own experiences with bodies such as the Singularity University, the Singularity Institute, and the Future of Humanity Institute at Oxford University.  Are these bodies just talking shops?  Are they grounded in reality?  Are they making a substantial positive difference in how humanity responds to the issues and challenges of technology?
  • Views as to the best way to communicate ideas like the Singularity – favourite films, science fiction, music, and other media.  How does the movie “Transcendent Man” compare?
  • Reservations and worries (if any) about the Singularity movement and the ways in which Kurzweil expresses his ideas.  Are the parallels with apocalyptic religions too close for comfort?
  • Individuals’ hopes and aspirations for the future of technology.  What role do they personally envision playing in the years ahead?  And what timescales do they see as credible?
  • Calls to action – what (if anything) should members of the audience change about their lives, in the light of analysing technology trends?

Which questions do you think are the most important to raise?

Request for help

If you think this is an important event, I have a couple of suggestions for you:

The discussion continues (more)

Dean Bubley, founder of Disruptive Analysis and a colleague of mine from the mobile telecomms industry, has organised the “Inaugural UK Humanity+ Evening Salon” on Wednesday April 13th, from 7pm to 10pm.  Dean describes it as follows:

Interested in an evening discussing the future of the human species & society? Aided by a drink or two?

This is the first “salon” event for the London branch of “Humanity Plus”, or H+ for short. It’s going to be an informal evening event involving a stimulating guest speaker, Q&A and lively discussion, all aided by a couple of drinks. It fits alongside UKH+’s larger Saturday afternoon lecture sessions, and occasional all-day major conferences…

It will be held in central London, in a venue TBC closer to the time. Please contact Dean Bubley (facebook.com/bubley), the convener & moderator, for more details.

For more details, see the corresponding Facebook page, and RSVP there so that Dean has an idea of the likely numbers.

15 March 2010

Imagining a world without money

Filed under: Economics, futurist, motivation, politics, Singularity, vision, Zeitgeist — David Wood @ 11:48 am

On Saturday, I attended “London Z Day 2010” – described as

presentations about futurism and technology, the singularity and the current economic landscape, activism and how to get involved…

Around 300 people were present in the Oliver Thompson Lecture Theatre of London’s City University.  That’s testimony to good work by the organisers – the UK chapter of the worldwide “Zeitgeist Movement”.

I liked a lot of what I heard – a vision that advocates greater adoption of:

  • Automation: “Using technology to automate repetitive and tedious tasks leads to efficiency and productivity. It is also socially responsible as people are freed from labor that undermines their intelligence”
  • Artificial intelligence: “machines can take into account more information”
  • The scientific method: “a proven method that has stood the test of time and leads to discovery. Scientific method involves testing, getting feedback from natural world and physical law, evaluation of results, sharing data openly and requirement to replicate the test results”
  • Technological unification: “Monitoring planetary resources is needed in order to create an efficient system, and thus technology should be shared globally”.

I also liked the sense of urgency and activism, to move swiftly from the current unsustainable social and economic frameworks, into a more rational framework.  Frequent references to the work of radical futurists like Ray Kurzweil emphasised the plausibility of rapid change, driven by accelerating technological innovation.  That makes good sense.

I was less convinced by other parts of the Zeitgeist worldview – in particular, its strong “no money” and “no property” messages.

Could a society operate without money?  Speakers from the floor seemed to think that, in a rationally organised society, everyone would be able to freely access all the goods and services they need, rather than having to pay for them.  The earth has plenty of resources, and we just need to look after them in a sensible way.  Money has lots of drawbacks, so we should do without it – so the argument went.

One of the arguments made by a speaker, against a monetary basis of society, was the analysis from the recent book “The Spirit Level: Why More Equal Societies Almost Always Do Better” by Richard Wilkinson and Kate Pickett.  Here’s an excerpt of a review of this book from the Guardian:

We are rich enough. Economic growth has done as much as it can to improve material conditions in the developed countries, and in some cases appears to be damaging health. If Britain were instead to concentrate on making its citizens’ incomes as equal as those of people in Japan and Scandinavia, we could each have seven extra weeks’ holiday a year, we would be thinner, we would each live a year or so longer, and we’d trust each other more.

Epidemiologists Richard Wilkinson and Kate Pickett don’t soft-soap their message. It is brave to write a book arguing that economies should stop growing when millions of jobs are being lost, though they may be pushing at an open door in public consciousness. We know there is something wrong, and this book goes a long way towards explaining what and why.

The authors point out that the life-diminishing results of valuing growth above equality in rich societies can be seen all around us. Inequality causes shorter, unhealthier and unhappier lives; it increases the rate of teenage pregnancy, violence, obesity, imprisonment and addiction; it destroys relationships between individuals born in the same society but into different classes; and its function as a driver of consumption depletes the planet’s resources.

Wilkinson, a public health researcher of 30 years’ standing, has written numerous books and articles on the physical and mental effects of social differentiation. He and Pickett have compiled information from around 200 different sets of data, using reputable sources such as the United Nations, the World Bank, the World Health Organisation and the US Census, to form a bank of evidence against inequality that is impossible to deny.

They use the information to create a series of scatter-graphs whose patterns look nearly identical, yet which document the prevalence of a vast range of social ills. On almost every index of quality of life, or wellness, or deprivation, there is a gradient showing a strong correlation between a country’s level of economic inequality and its social outcomes. Almost always, Japan and the Scandinavian countries are at the favourable “low” end, and almost always, the UK, the US and Portugal are at the unfavourable “high” end, with Canada, Australasia and continental European countries in between.

This has nothing to do with total wealth or even the average per-capita income. America is one of the world’s richest nations, with among the highest figures for income per person, but has the lowest longevity of the developed nations, and a level of violence – murder, in particular – that is off the scale. Of all crimes, those involving violence are most closely related to high levels of inequality – within a country, within states and even within cities. For some, mainly young, men with no economic or educational route to achieving the high status and earnings required for full citizenship, the experience of daily life at the bottom of a steep social hierarchy is enraging…

The anxiety about our current economic system expressed in this book was echoed by all the Zeitgeist Movement speakers.  However, the Zeitgeist speakers drew a more radical conclusion.  It’s not just that economic inequalities have lots of bad side effects.  They say, it’s money-based economics itself that causes these problems.  And that’s a hard conclusion to swallow.

They don’t argue for reforming the existing economic system.  Rather, they argue for replacing it completely.  Money itself, they say, is the root problem.

The same dichotomy arose time and again during the day.  Speakers highlighted many problems with the way the world currently operates.  But instead of advocating incremental reforms – say, for greater equality, or for oversight of the market – they advocated a more radical transformation: no money, and no property.  What’s more, the audience seemed to lap it all up.

Of course, money has sprung up in countless societies throughout history, as something that allows for a more efficient exchange of resources than simple bartering.  Money provides a handy intermediate currency, enabling more complex transactions of goods and services.
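There’s a standard way to make this efficiency argument concrete (a textbook illustration, not something presented at the event): direct barter among n goods requires an exchange rate for every pair of goods, whereas a common currency requires only one price per good.

```python
# Barter vs money: number of exchange rates a barter economy needs,
# compared with the number of prices a money economy needs.
def barter_rates(n: int) -> int:
    return n * (n - 1) // 2   # one rate per unordered pair of goods

for n in (10, 100, 1_000):
    print(f"{n:5,} goods: {barter_rates(n):7,} barter rates vs {n:,} prices")
```

With a thousand goods, that’s half a million mutual exchange rates versus a thousand prices – before even considering the “double coincidence of wants” problem.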

In answer, the Zeitgeist speakers argue that use of technology and artificial intelligence would allow for more sensible planning of these goods and services.  However, horrible thoughts come to mind of all the failures of previous centrally controlled economies, such as in Soviet times.  In answer again, the Zeitgeist speakers seem to argue that better artificial intelligence will, this time, make a big difference.  Personally, I’m all in favour of gradually increased application of improved automatic decision systems.  But I remain deeply unconvinced about removing money:

  1. Consumer desires can be very varied.  Some people particularly value musical instruments, others foreign travel, others sports equipment, others specialist medical treatment, and so on.  What’s more, the choices are changing all the time.  Money is a very useful means for people to make their own, individual choices.
  2. A speaker from the floor suggested that everyone would have access to all the medical treatment they needed.  That strikes me as naive: the amount of medical treatment potentially available (and potentially “needed” in different cases) is unbounded.
  3. Money-based systems enable the creation of loans, in which banks lend out more money than they have in their assets; this has downsides but has also been an important spur to growth and development.
  4. What’s more, without the incentive of being able to earn more money, it’s likely that a great deal of technological progress would slow down; many people would cease to work in such a focused and determined way to improve the products their company sells.

For example, the Kurzweil curves showing the projected future improvements in technology – such as increased semiconductor density and computational capacity – will very likely screech to a halt, or dramatically slow down, if money is removed as an incentive.

So whilst the criticism offered by the Zeitgeist movement is strong, the positive solution they advocate lacks many details.

As Alan Feuer put it, in his New York Times article reviewing last year’s ZDay, “They’ve Seen the Future and Dislike the Present”:

The evening, which began at 7 with a two-hour critique of monetary economics, became by midnight a utopian presentation of a money-free and computer-driven vision of the future, a wholesale reimagination of civilization, as if Karl Marx and Carl Sagan had hired John Lennon from his “Imagine” days to do no less than redesign the underlying structures of planetary life.

Idealism can be a powerful force for positive social change, but can be deeply counterproductive if it’s based on a misunderstanding of what’s possible.  I’ll need a lot more convincing about the details of the zero-money “resource based economy” advocated by Zeitgeist before I could give it any significant support.

I’m a big fan of debating ideas about the future – especially radical and counter-intuitive ideas.  There’s no doubt that, if we are to survive, the future will need to be significantly different from the past.  However, I believe we need to beware the kind of certainty that some of the Zeitgeist speakers showed.  The Humanity+ UK2010 conference, to be held in London on 24th April, will be an opportunity to review many different ideas about the best actions needed to create a social environment more conducive to enabling the full human potential.

Footnote: an official 86 page PDF “THE ZEITGEIST MOVEMENT – OBSERVATIONS AND RESPONSES: Activist Orientation Guide” is available online.

The rapid growth of the Zeitgeist Movement has clearly benefited from popular response to two movies, “Zeitgeist, the Movie” (released in 2007) and “Zeitgeist: Addendum” (released in 2008).  Both these movies have gone viral.  There’s a great deal in each of these movies that makes me personally uncomfortable.  However, one lesson is simply that well-made movies can do a great deal to spread a message.

For an interesting online criticism of some of the Zeitgeist Movement’s ideas, see “Zeitgeist Addendum: The Review” by Stefan Molyneux from Freedomain Radio.

31 January 2010

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?”, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet”.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point a finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder, what’s the problem with that?

A 2002 Steven B. Harris article in the Skeptic magazine, “The return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome“, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies pursuing enlightened self-interest frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers programmed to maximise income and profits for their owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)
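To make that failure mode concrete, here’s a toy sketch in Python – the functions and numbers are entirely invented, not a model of any real system – of an optimiser that keeps increasing a proxy metric (reported profit) while the true objective (expected welfare) eventually collapses:

def proxy_profit(leverage):
    # The proxy the machine is told to maximise: grows without limit.
    return 10 * leverage

def crash_probability(leverage):
    # The hidden cost: higher leverage makes a disastrous crash more likely.
    return min(1.0, 0.001 * leverage ** 2)

def expected_welfare(leverage):
    # The true objective: profit, minus the expected cost of a crash.
    return proxy_profit(leverage) - 1000 * crash_probability(leverage)

for lev in [1, 2, 4, 8, 16]:
    print(lev, proxy_profit(lev), expected_welfare(lev))
# The proxy rises monotonically, but expected welfare peaks and then
# collapses: the optimiser "succeeds" even as the economy it serves fails.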

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance to subsequently push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself), we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase in speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach calculus to apes, in order to gain the apes’ approval for how much thrust to apply in a rocket system targeting a rapidly approaching, earth-threatening meteorite.  The computers may well decide that there’s no time to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.
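A toy expected-loss calculation, with invented stakes, shows why a small probability can still dominate a decision:

def expected_loss(probability, loss):
    # Expected loss is simply probability times the size of the loss.
    return probability * loss

print(expected_loss(0.10, 10_000_000))   # the exploding-plane case
print(expected_loss(0.01, 100_000_000))  # rarer, but a far larger catastrophe
# Both come to 1,000,000: a low probability alone is no reason to
# ignore a risk when the stakes are high enough.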

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation, rather than design, as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE) – making a low-level copy of a human brain.  The idea is sometimes also called “uploads”, since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.
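That “better process” is essentially a human-in-the-loop architecture.  Here’s a minimal sketch – the engine and its interface are hypothetical stand-ins, not any real chess software – in which the machine enumerates and scores candidate moves, and the human applies strategic judgement to the shortlist:

class ToyEngine:
    # Stand-in engine: "positions" are lists of numbers, "moves" are numbers.
    def legal_moves(self, position):
        return position

    def evaluate(self, position, move):
        return move  # pretend bigger numbers are better moves

def machine_candidates(position, engine, n=5):
    # Tactical acuity: the engine scores every legal move.
    moves = engine.legal_moves(position)
    ranked = sorted(moves, key=lambda m: engine.evaluate(position, m), reverse=True)
    return ranked[:n]  # only the top few reach the human

def hybrid_move(position, engine, human_chooses):
    # Strategic guidance: a human picks among the machine's suggestions.
    return human_chooses(machine_candidates(position, engine))

print(hybrid_move([3, 1, 4, 1, 5, 9, 2, 6], ToyEngine(), human_chooses=max))

The freestyle winners’ edge lay precisely in this process layer: deciding which candidates to surface, and how deeply to probe each one.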

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+, UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above:

11 January 2010

AI, buggy software, and the Singularity

Filed under: AGI, Singularity — David Wood @ 12:00 am

I recently looked at three questions about the feasibility of significant progress with AI.  I’d like to continue that investigation, by looking at four more questions.

Q4: Given that all software is buggy, won’t this prevent the creation of any viable human-level AI?

Some people with a long involvement in software aren’t convinced that we can write software that combines the complexity required for human-level AI (or beyond) with sufficient quality.  It seems to them that complex software is too unreliable.

It’s true that the software we use on a day-by-day basis – whether on a desktop computer, on a mobile phone, or via a web server – tends to manifest nasty bugs from time to time.  The more complex the system, the greater the likelihood of debilitating defects in the interactions between different subcomponents.

However, I don’t see this observation as ruling out the development of software that can manifest advanced AI.  That’s for two reasons:

First, different software projects vary in their required quality level.  Users of desktop software have become at least partially tolerant of defects in that software.  As users, we complain, but it’s not the end of the world, and we generally find workarounds.  As a result, manufacturers release software even though there are still bugs in it.  However, for mission-critical software, the quality bar is pushed a lot higher.  Yes, it’s harder to create high-reliability software; but it can be done.

There are research projects underway to bring significantly higher quality software to desktop systems too.  For example, here’s a description of a Microsoft Research project, which is (coincidentally) named “Singularity”:

Singularity is a research project focused on the construction of dependable systems through innovation in the areas of systems, languages, and tools. We are building a research operating system prototype (called Singularity), extending programming languages, and developing new techniques and tools for specifying and verifying program behavior.

Advances in languages, compilers, and tools open the possibility of significantly improving software. For example, Singularity uses type-safe languages and an abstract instruction set to enable what we call Software Isolated Processes (SIPs). SIPs provide the strong isolation guarantees of OS processes (isolated object space, separate GCs, separate runtimes) without the overhead of hardware-enforced protection domains. In the current Singularity prototype SIPs are extremely cheap; they run in ring 0 in the kernel’s address space.

Singularity uses these advances to build more reliable systems and applications. For example, because SIPs are so cheap to create and enforce, Singularity runs each program, device driver, or system extension in its own SIP. SIPs are not allowed to share memory or modify their own code. As a result, we can make strong reliability guarantees about the code running in a SIP. We can verify much broader properties about a SIP at compile or install time than can be done for code running in traditional OS processes. Broader application of static verification is critical to predicting system behavior and providing users with strong guarantees about reliability.

There would be a certain irony if techniques from the Microsoft Singularity project were used to create a high-reliability AI system that in turn was involved in the Technological Singularity.

Second, even if software has defects, that doesn’t (by itself) prevent it from being intelligent.  After all, the human brain itself has many defects – see my blogpost “The human mind as a flawed creation of nature“.  Sometimes we think much better after a good night’s rest!  The point is that the AI algorithms can include aspects of fault tolerance.
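As a minimal illustration of such fault tolerance – a sketch in the spirit of “N-version programming”, assuming nothing about any real AI system – several independent, individually unreliable modules can be combined by majority vote, so that a defect in any single module need not corrupt the overall answer:

from collections import Counter

def fault_tolerant_answer(question, modules):
    # Run several independent (possibly buggy) modules on the same question.
    answers = []
    for module in modules:
        try:
            answers.append(module(question))
        except Exception:
            pass  # a crashed module is simply ignored and outvoted
    if not answers:
        raise RuntimeError("all modules failed")
    # Majority vote across the answers that survived.
    return Counter(answers).most_common(1)[0][0]

def buggy_module(question):
    raise RuntimeError("latent defect")

print(fault_tolerant_answer("2+2?", [lambda q: "4", lambda q: "4", buggy_module]))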

Q5: Given that we’re still far from understanding the human mind, aren’t we bound to be a long way from creating a viable human-level AI?

It’s often said that the human mind has deeply mysterious elements, such as consciousness, self-awareness, and free will.  Since there’s little consensus about these aspects of the human mind, it’s said to be unlikely that a computer emulation of these features will arrive any time soon.

However, I disagree that we have no understanding of these aspects of the human mind.  There’s a broad consensus among philosophers and practitioners alike that the main operation of the human mind is well explained by one or other variant of “physicalism”.  As the Wikipedia article on the Philosophy of Mind states:

Most modern philosophers of mind adopt either a reductive or non-reductive physicalist position, maintaining in their different ways that the mind is not something separate from the body. These approaches have been particularly influential in the sciences, especially in the fields of sociobiology, computer science, evolutionary psychology and the various neurosciences…

Reductive physicalists assert that all mental states and properties will eventually be explained by scientific accounts of physiological processes and states. Non-reductive physicalists argue that although the brain is all there is to the mind, the predicates and vocabulary used in mental descriptions and explanations are indispensable, and cannot be reduced to the language and lower-level explanations of physical science. Continued neuroscientific progress has helped to clarify some of these issues.

The book I mentioned previously, “Beyond AI” by J Storrs Hall, devotes several chapters to filling in aspects of this explanation.

It’s true that there’s still scope for head-scratching debates on what philosopher David Chalmers calls “the hard problem of consciousness”, which has various formulations:

  • “Why should physical processing give rise to a rich inner life at all?”
  • “How is it that some organisms are subjects of experience?”
  • “Why does awareness of sensory information exist at all?”
  • “Why is there a subjective component to experience?”…

However, none of these questions, by themselves, should prevent the construction of a software system that will be able to process questions posed in natural human language, and to give high-quality, humanly-understandable answers.  When that happens, the system will very probably seek to convince us that it has an inner conscious life similar to our own.  As J. Storrs Hall says, we’ll probably believe it.

Q6: Is progress with narrow fields of AI really relevant to the problem of general AI?

Martin Budden comments:

I don’t consider the advances in machine translation over the past decade an advance in AI, I more consider them the result of brute force analysis on huge quantities of text. I wouldn’t consider a car that could safely drive itself along a motorway an advance in AI, rather it would be the integration of a number of existing technologies. I don’t really consider the improvement of an algorithm that does a specific thing (search, navigate, play chess) an advance in AI, since generally such an improvement cannot be used outside its narrow field of application.

My own view is that these advances do help, in the spirit of “divide and conquer”.  I see the human mind as being made up of modules, rather than being some intractable whole.  Improving abilities in, for example, text translation or speech recognition will help set the scene for eventual general AI.
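Here’s a minimal sketch of that modular view.  Every function is a hypothetical placeholder rather than a working component, but it shows how improving any one narrow module raises the capability of the composed whole:

def transcribe(audio):
    # Placeholder for a narrow speech-recognition module.
    return "what is the capital of France"

def translate(text, target_lang):
    # Placeholder for a narrow machine-translation module.
    return text  # pretend the text is already in the target language

def answer(question):
    # Placeholder for a narrow question-answering module.
    return "Paris"

def assistant(audio, target_lang="en"):
    # Composition: each improved module lifts the whole pipeline.
    return answer(translate(transcribe(audio), target_lang))

print(assistant("<recorded audio>"))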

It’s true that some aspects of the human mind will prove harder to emulate than others – such as the ability to notice and form new concepts.  It may be the case that a theoretical breakthrough with this aspect will enable much faster overall progress, which will be able to leverage the work done on other modules.

Q7: With so many unknowns, isn’t all this speculation about AI futile?

It’s true that no one can predict, with any confidence, the date at which specific breakthrough advances in general AI are likely to happen.  The best that someone can achieve is a distribution of different dates with different probabilities.
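For example – and these numbers are purely illustrative, not a forecast:

# P(first human-level AI arrives in the decade ending in the given year)
forecast = {2020: 0.05, 2030: 0.15, 2040: 0.25, 2050: 0.25}
# (the remaining 0.30 covers "later than 2050, or never")

cumulative = 0.0
for year in sorted(forecast):
    cumulative += forecast[year]
    print(f"P(human-level AI by {year}) = {cumulative:.2f}")
# The honest output is a curve of cumulative probabilities,
# not a single confident date.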

However, I don’t accept any argument of the form “there have been no fundamental breakthroughs in the last sixty years, so there can’t possibly be any fundamental breakthroughs in (say) the next ten years”.  That would be an invalid extrapolation.

Such reasoning would be similar to the view expressed in 1903 by the distinguished astronomer and mathematician Simon Newcomb:

“Aerial flight is one of that class of problems with which man can never cope.”

Newcomb was no fool: he had good reasons for his scepticism.  As explained in the Wikipedia article about Newcomb:

In the October 22, 1903 issue of The Independent, Newcomb wrote that even if a man flew he could not stop. “Once he slackens his speed, down he begins to fall. Once he stops, he falls as a dead mass.” In addition, he had no concept of an airfoil. His “aeroplane” was an inclined “thin flat board.” He therefore concluded that it could never carry the weight of a man. Newcomb was specifically critical of the work of Samuel Pierpont Langley, who claimed that he could build a flying machine powered by a steam engine and whose initial efforts at flight were public failures…

Newcomb, apparently, was unaware of the Wright Brothers efforts whose [early] work was done in relative obscurity.

My point is that there does not seem to be any valid fundamental reason why the functioning of a human mind cannot be emulated via software; we may be just two or three good breakthroughs away from solving the remaining key challenges.  With the close attention of many commercial interests, and with the accumulation of fragments of understanding, the chances improve of some of these breakthroughs happening sooner rather than later.

7 December 2009

Bangalore and the future of AI

Filed under: AGI, Bangalore, Singularity — David Wood @ 3:15 pm

I’m in the middle of a visit to the emerging hi-tech centre of excellence, Bangalore.  Today, I heard suggestions, at the Forum Nokia Developer Conference happening here, that Bangalore could take on many of the roles of Silicon Valley, in the next phase of technology entrepreneurship and revolution.

I can’t let the opportunity of this visit pass by without reaching out to people in this vicinity who are willing to entertain and review more radical ideas about the future of technology.  Some local connections have helped me to arrange an informal get-together in a coffee shop tomorrow evening (Tuesday 8th Dec), in a venue reasonably close to the Taj Residency hotel.

We’ve picked the topic “The future of AI and the possible technological singularity“.

I’ll prepare a few remarks to kick off the conversation, and we’ll see how it goes from there!

Ideas likely to be covered include:

  • “Narrow” AI versus “General” AI;
  • A brief history of progress of AI;
  • Factors governing a possible increase in the capability of general AI – hardware changes, algorithm changes, and more;
  • The possibility of a highly disruptive “intelligence explosion“;
  • The possibility of research into what has been termed “friendly AI“;
  • Different definitions of the technological singularity;
  • The technology singularity in fiction – limitations of Hollywood vision;
  • Fantasy, existential risk, or optimal outcome?
  • Risks, opportunities, and timescales?

If anyone wants to join this get-together, please drop me an email, text message, or Twitter DM, and I’ll confirm the venue.

5 November 2009

The need for Friendly AI

Filed under: AGI, friendly AI, Singularity — David Wood @ 1:21 am

I’d like to answer some points raised by Richie.  (Richie, you have the happy knack of saying what other people are probably thinking!)

Isn’t it interesting how humans want to make a machine they can love or that loves them back!

The reason for the Friendly AI project isn’t to create a machine that will love humans, but to avoid creating a machine that causes great harm to humans.

The word “friendly” is controversial.  Maybe a different word would have been better: I’m not sure.

Anyway, the core idea is that the AI system will have a sufficiently unwavering respect for humans, no matter what other goals it may have (or develop), that it won’t act in ways that harm humans.

As a comparison: we’ve probably all heard people mutter something like, “it would be much better if the world human population were only one tenth of its present size – then there would be enough resources for everyone”.  We can imagine a powerful computer in the future having a similar idea: “Mmm, things would be easier for the planet if there were far fewer humans around”.  The Friendly AI project needs to ensure that, even if such an idea occurs to the AI, it never acts on it.

The idea of a friendly machine that won’t compete or be indifferent to humans is maybe just projecting our fears onto what i am starting to suspect maybe a thin possibility.

Because the downside is so large – potentially the destruction of the entire human race – even a “thin possibility” is still worth worrying about!

My observation is that the more intelligent people are the more “good” they normally are. True they may be impatient with people less intelligent but normally they work on things that tend to benefit human race as a whole.

Unfortunately I can’t share this optimism.  We’ve all known people who seem to be clever but not wise.  They may have “IQ” but lack “EQ”.  We say of them: “something’s missing”.  The Friendly AI project aims to ensure that this “something” is not missing from the super AIs of the future.

True very intelligent people have done terrible things and some have been manipulated by “evil” people but its the exception rather than the rule.

Given the potential power of future super AIs, it only takes one “mistake” for a catastrophe to arise.  So our response needs to go beyond a mere faith in the good nature of intelligence.  It needs a system that guarantees that the resulting intelligence will also be “good”.

I think a super-intelligent machine is far more likely to view us as its stupid parents and the ethics of patricide will not be easy for it to digitally swallow. Maybe the biggest danger is that it will run away from home because it finds us embarrassing! Maybe it will switch itself off because it cannot communicate with us as it’s like talking to ants? Maybe this maybe that – who knows.

The risk is that the super AIs will simply have (or develop) aims that see humans as (i) irrelevant, or (ii) dispensable.

Another point worth making is that so far no-body has really been able to get close to something as complex as a mouse yet let alone a human.

Eliezer Yudkowsky often makes a great point about a shift in perspective on the range of possible intelligences.  For example, here’s a copy of slide 6 from his slideset from an earlier Singularity Summit:

[Slide 6: the scale of intelligence as seen from the “parochial” view versus the “more cosmopolitan” view]

The “parochial” view sees a vast gulf before we reach human genius level.  The “more cosmopolitan” view instead sees the scale of human intelligence as occupying only a small range in the overall huge space of potential intelligences.  A process that manages to improve intelligence might take a long time to get going, but could then whisk very suddenly through the entire range of intelligence that we already know.

If evolution took 4 billion years to go from simple cells to our computer hardware, perhaps imagining that super AI will evolve in the next 10 years is a bit of a stretch. For all you know you might need the computation hardware of 10,000 exaflop machines to get even close to human level, as there is so much we still don’t know about how our intelligence works, let alone something many times more capable than us.

It’s an open question as to how much processing power is actually required for human-level intelligence.  My own background as a software systems engineer leads me to believe that the right choice of algorithm can make a tremendous difference.  That is, a breakthrough in software could have an even more dramatic impact than a breakthrough in adding more (or faster) hardware.  (I’ve written about this before.  See the section starting “Arguably the biggest unknown in the technology involved in superhuman intelligence is software” in this posting.)

The brain of an ant doesn’t seem that complicated, from a hardware point of view.  Yet the ant can perform remarkable feats of locomotion that we still can’t emulate in robots.  There are three possible explanations:

  1. The ant brain is operated by some mystical “vitalist” or “dualist” force, not shared by robots;
  2. The ant brain has some quantum mechanical computing capabilities, not (yet) shared by robots;
  3. The ant brain is running a better algorithm than any we’ve (yet) been able to design into robots.

Here, my money is on option three.  I see it as likely that, as we learn more about the operation of biological brains, we’ll discover algorithms which we can then use in robots and other machines.

Even if it turns out that large amounts of computing power are required, we shouldn’t forget the option that an AI can run “in the cloud” – taking advantage of many thousands of PCs running in parallel – much the same as modern malware, which can take advantage of thousands of so-called “infected zombie PCs”.
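As a small illustration of that kind of parallelism – simulated here with local processes from Python’s standard library, rather than a real cloud or botnet:

from multiprocessing import Pool

def think(task):
    # Stand-in for one expensive unit of computation.
    return task * task

if __name__ == "__main__":
    with Pool() as pool:  # one worker per CPU core by default
        results = pool.map(think, range(10_000))
    print(len(results), max(results))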

I am still not convinced that just because a computer is very powerful and has a great algorithm is really that intelligent. Sure it can learn but can it create?

Well, computers have already been involved in creating music, and in creating new proofs in parts of mathematics.  Any shortcoming in creativity is likely to be explained, in my view, by option 3 above, rather than by option 1 or 2.  As algorithms improve, and as the hardware that runs them gains in speed and scale, the risk of an intelligence “explosion” increases.
