dw2

3 June 2012

Super-technology and a possible renaissance of religion

Filed under: death, disruption, Humanity Plus, rejuveneering, religion, Singularity, UKH+ — David Wood @ 11:02 pm

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke

Imagine that the human race avoids self-destruction and continues on the path of increased mastery of technology. Imagine that, as seems credible some time in the future, humans will eventually gain the ability to keep everyone alive indefinitely, in an environment of great abundance, variety, and intrinsic interest.

That paradise may be a fine outcome for our descendants, but unless the pace of technology improvement becomes remarkably rapid, it seems to have little direct impact on our own lives. Or does it?

It may depend on exactly how much power our god-like descendants eventually acquire.  For example, here are two of the points from a radical vision of the future known as the Ten cosmist convictions:

  • 5) We will develop spacetime engineering and scientific “future magic” much beyond our current understanding and imagination.
  • 6) Spacetime engineering and future magic will permit achieving, by scientific means, most of the promises of religions — and many amazing things that no human religion ever dreamed. Eventually we will be able to resurrect the dead by “copying them to the future”.

Whoa! “Resurrect the dead”, by “copying them to the future”. How might that work?

In part, by collecting enormous amounts of data about the past – reconstructing information from numerous sources. It’s similar to collecting data about far-distant stars using a very large array of radio telescopes. And in part, by re-embodying that data in a new environment, similar to copying running software onto a new computer, giving it a new lease of life.

Lots of questions can be asked about the details:

  • Can sufficient data really be gathered in the future, in the face of all the degradation commonly called “the second law of thermodynamics”, that would allow a sufficiently high-fidelity version of me (or anyone else) to be re-created?
  • If a future super-human collected lots of data about me and managed to get an embodiment of that data running on some future super-computer, would that really amount to resurrecting me, as opposed to creating a copy of me?

I don’t think anyone can be confident about the answers to such questions. But it’s at least conceivable that remarkably advanced technology of the future may allow positive answers.

In other words, it’s at least conceivable that our descendants will have the god-like ability to recreate us in the future, giving us an unexpected prospect for immortality.

This makes sense of the remark by radical futurist and singularitarian Ray Kurzweil at the end of the film “Transcendent Man“:

Does God exist? Well I would say, not yet

Other radical futurists quibble over the “not yet” caveat. In his recent essay “Yes, I am a believer“, Giulio Prisco takes the discussion one stage further:

Gods will exist in the future, and they may be able to affect their past — our present — by means of spacetime engineering. Probably other civilizations out there already attained God-like powers.

Giulio notes that even the celebrated critic of theism, Richard Dawkins, gives some support to this line of thinking.  For example, here’s an excerpt from a 2011 New York Times interview, in which Dawkins discusses an essay written by theoretical physicist Freeman Dyson:

In one essay, Professor Dyson casts millions of speculative years into the future. Our galaxy is dying and humans have evolved into something like bolts of superpowerful intelligent and moral energy.

Doesn’t that description sound an awful lot like God?

“Certainly,” Professor Dawkins replies. “It’s highly plausible that in the universe there are God-like creatures.”

He raises his hand, just in case a reader thinks he’s gone around a religious bend. “It’s very important to understand that these Gods came into being by an explicable scientific progression of incremental evolution.”

Could they be immortal? The professor shrugs.

“Probably not.” He smiles and adds, “But I wouldn’t want to be too dogmatic about that.”

As Giulio points out, Dawkins develops a similar line of argument in part of his book “The God Delusion”:

Whether we ever get to know them or not, there are very probably alien civilizations that are superhuman, to the point of being god-like in ways that exceed anything a theologian could possibly imagine. Their technical achievements would seem as supernatural to us as ours would seem to a Dark Age peasant transported to the twenty-first century…

In what sense, then, would the most advanced SETI aliens not be gods? In what sense would they be superhuman but not supernatural? In a very important sense, which goes to the heart of this book. The crucial difference between gods and god-like extraterrestrials lies not in their properties but in their provenance. Entities that are complex enough to be intelligent are products of an evolutionary process. No matter how god-like they may seem when we encounter them, they didn’t start that way…

Giulio seems more interested in the properties than the provenance. The fact that these entities have god-like powers prompts him to proclaim “Yes, I am a believer“.  He gives another reason in support of that proclamation: In contrast to the views of so-called militant atheists, Giulio is “persuaded that religion can be a powerful and positive force”.

Giulio sees this “powerful and positive force” as applying to him personally as well as to groups in general:

“In my beliefs I find hope, happiness, meaning, the strength to get through the night, and a powerful sense of wonder at our future adventures out there in the universe, which gives me also the drive to try to be a better person here-and-now on this little planet and make it a little better for future generations”.

More controversially, Giulio has taken to describing himself (e.g. on his Facebook page) as a “Christian”. Referring back to his essay, and to the ensuing online discussion:

Religion can, and should, be based on mutual tolerance, love and compassion. Jesus said: “love thy neighbor as thyself,” and added: “let he who is without sin, cast the first stone”…

This is the important part of his teachings in my opinion. Christian theology is interesting, but I think it should be reformulated for our times…

Was Jesus the Son of God? I don’t think this is a central issue. He certainly was, in the sense that we all are, and he may have been one of those persons in tune with the universe, more in tune with the universe than the rest of us, able to glimpse at veiled realities beyond our senses.

I’ve known Giulio for several years, from various Humanity+ and Singularity meetings we’ve both attended – dating back to “Transvision 2006” in Helsinki. I respect him as a very capable thinker, and I take his views seriously. His recent “Yes, I am a believer” article has stirred up a hornets’ nest of online criticism.

Accordingly, I was very pleased that Giulio accepted my invitation to come to London to speak at a London Futurist / Humanity+ UK meeting on Saturday 14th July: “Transhumanist Religions 2.0: New Cosmist religion and spirituality for our boundless future (and our troubled present)”. For all kinds of reasons, this discussion deserves a wider airing.

First, I share the view that religious sentiments can provide cohesion and energy to propel individuals and groups to undertake enormously difficult projects (such as the project to avoid the self-destruction of the human race, or any drastic decline in the quality of global civilisation).  The best analysis I’ve read of this point is in the book “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society” by David Sloan Wilson.  As I’ve written previously:

This book has sweeping scope, but makes its case very well.  The case is that religion has in general survived inasmuch as it helped groups of people to achieve greater cohesion and thereby acquire greater fitness compared to other groups of people.  This kind of religion has practical effect, independent of whether or not its belief system corresponds to factual reality.  (It can hardly be denied that, in most cases, the belief system does not correspond to factual reality.)

The book has some great examples – from the religions in hunter-gatherer societies, which contain a powerful emphasis on sharing out scarce resources completely equitably, through examples of religions in more complex societies.  The chapter on John Calvin was eye-opening (describing how his belief system brought stability and prosperity to Geneva) – as were the sections on the comparative evolutionary successes of Judaism and early Christianity.  But perhaps the section on the Balinese water-irrigation religion is the most fascinating of the lot.

Of course, there are some other theories for why religion exists (and is so widespread), and this book gives credit to these theories in appropriate places.  However, this pro-group selection explanation has never before been set out so carefully and credibly, and I think it’s no longer possible to deny that it plays a key role.

The discussion makes it crystal clear why many religious groups tend to treat outsiders so badly (despite treating insiders so well).  It also provides a fascinating perspective on the whole topic of “forgiveness”.  Finally, the central theme of “group selection” is given a convincing defence.

But second, there’s no doubt that religion can fit blinkers over people’s thinking abilities, and prevent them from weighing up arguments dispassionately. Whenever people talk about the Singularity movement as having the shape of a religion – with Ray Kurzweil as a kind of infallible prophet – I shudder. But we needn’t lurch to that extreme. We should be able to maintain the discipline of rigorous independent thinking within a technologically-informed renaissance of positive religious sentiment.

Third, if the universe really does have beings with God-like powers, what attitude should we adopt towards these beings? Should we be seeking in some way to worship them, or placate them, or influence them? It depends on whether these beings are able to influence human history, here and now, or whether they are instead restricted (by raw facts of space and time that even God-like beings have to respect) to observing us and (possibly) copying us into the future.

Personally, my bet is on the latter. For one thing, I’m not convinced by people who claim evidence to the contrary. And if these beings did have the ability to intervene in human history, but have failed to do so, that would be evidence of them having scant interest in widespread, intense human suffering. They would hardly be super-beings.

In that case, the focus of our effort should remain squarely on building the right conditions for super-technology to benefit humanity as a whole (this is the project I call “Inner Humanity+“), rather than on somehow seeking to attract the future attention of these God-like beings. But no doubt others will have different views!

15 October 2010

Radically improving nature

Filed under: death, evolution, UKH+ — David Wood @ 10:50 pm

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man – George Bernard Shaw

Changing the world is ambitious.  Changing nature is even more ambitious.

After all, nature is the output of countless generations of refinement by natural selection.  Evolution has found many wonderful solutions.  But natural selection generally only finds local optima.  As I’ve written on a previous occasion:

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

And as I covered in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind”:

The basic claim of the book is that many aspects of the human mind operate in clumsy and suboptimal ways – ways which betray the haphazard and often flawed evolutionary history of the mind….

The framework is, to me, both convincing and illuminating.  It provides a battery of evidence relevant to what might be called “The Nature Delusion” – the pervasive yet often unspoken belief that things crafted by nature are inevitably optimal and incapable of serious improvement.

For these reasons, I applaud thoughtful attempts to improve human nature – whether by education, meditation, diet and smart drugs, silicon co-processors for our biological brains, genetic re-engineering, and so on.  With sufficient overall understanding, we can use the best outputs of human thought to create even better humans.

But what about the rest of nature?  If we can consider creating better humans, what about creating better animals? If the technology of the near future can add 50 points, or more, to our human IQs, could we consider applying similar technological enhancements to dolphins, dogs, parrots, and so on?

There are various motivations for considering this question.  First, there are people who deeply love their pets, and who might wish to enhance the capabilities of their pets, in a manner akin to enhancing the capabilities of their children.  Someone might wonder: if my dog could speak to me, what would it say?

In a way, the experiments to teach chimps sign language already take steps in this direction.  (Some chimps that learned sign language seem in turn to have taught elements of it to their own children.)

A different motivation for altering animal nature is the sheer amount of horrific pain and trauma throughout the animal kingdom.  Truly, it is “nature, red in tooth and claw“.

In his essay “The end of suffering“, British philosopher David Pearce quotes Richard Dawkins from the 1995 book River Out of Eden: A Darwinian View of Life:

During the minute it takes me to compose this sentence, thousands of animals are being eaten alive; others are running for their lives, whimpering with fear; others are being slowly devoured from within by rasping parasites; thousands of all kinds are dying from starvation, thirst and disease. It must be so. If there is ever a time of plenty, this very fact will automatically lead to an increase in population until the natural state of starvation and misery is restored.

But Pearce takes issue with Dawkins:

“It must be so.” Is Richard Dawkins right? Are the cruelties of the food chain an inescapable fact of Nature: no more changeable than, say, Planck’s constant or the Second Law of Thermodynamics? The Transhumanist Declaration expresses our commitment to the “well-being of all sentience”. Yet do these words express merely a pious hope – or an engineering challenge?

My own recent work involves exploring some of the practical steps entailed by compassionate ecosystem redesign – cross-species immunocontraception, genomic rewrites, cultured meat, neurochips, global surveillance and wildlife tracking technologies, and the use of nanorobots for marine ecosystems. Until this century, most conceivable interventions to mitigate the horrors of Nature “red in tooth and claw” would plausibly do more harm than good. Rescue a herbivore [“prey”] and a carnivore [“predator”] starves. And if, for example, we rescue wild elephants dying from hunger or thirst, the resultant population explosion would lead to habitat degradation, Malthusian catastrophe and thus even greater misery. Certainly, the computational power needed to micromanage the ecosystem of a medium-sized wildlife park would be huge by today’s standards. But recall that Nature supports only half a dozen or so “trophic levels”; and only a handful of “keystone predators” in any given habitat. Creating a truly cruelty-free living world may cost several trillion dollars or more. But the problem is computationally tractable within this century – if we acknowledge that wild animal suffering matters.

David’s fan page on Facebook boldly includes the forecast:

“I predict we will abolish suffering throughout the living world”

Unreasonable? Probably. Scientifically credible? Perhaps. Noble? Definitely. Radical? This is about as radical as it gets. Thoughtful? Read David’s own writings and make up your own mind.

Alternatively, if you’re in or nearby London, come along to this month’s UKH+ meeting (tomorrow, Saturday 16th October), where David will be the main speaker.  He wrote the following words to introduce what he’ll be talking about:

The Transhumanist Declaration advocates “the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.” Yet is “the well-being of all sentience” serious science – or just utopian dreaming? What does such a commitment entail? On what kind of realistic timeframe might we command enough computational power to police an entire ecosystem?

In this talk, the speaker wants to review recent progress in understanding the neurobiology of pleasure, pain and our core emotions. Can mastery of our reward circuitry ever deliver socially responsible, intelligent bliss rather than crude wireheading? He also wants to examine and respond to criticisms of the abolitionist project that have been levelled over the past decade – and set out the biggest challenges, as he sees them, to the prospect of a totally cruelty-free world.

10 October 2010

Call for speakers: Humanity+ UK2011

Filed under: Events, Humanity Plus, UKH+ — David Wood @ 2:18 pm

Although I haven’t allocated much time over the last few months to organising Humanity+ activities, I still assist the organisation on an occasional basis.

Earlier today, I issued a “call for speakers” for the January 2011 Humanity+ UK conference that will be taking place on Saturday 29 January 2011, in London’s Conway Hall.

Here’s a summary of the call:

Submissions are requested for talks lasting no more than 20 minutes on the general theme of Making a human difference. Submissions should address one or more of the following sub-themes:

  1. Technology that enhances humans
  2. Existential risks: the biggest human difference
  3. Citizen activism in support of Humanity+
  4. Humanity vs. Humanity+: criticisms and renewal
  5. Roadmapping the new human future.

Submissions need not be lengthy – around the equivalent of one page of A4 material should be sufficient. They should cover:

  • Proposed title of the talk, and which of the above sub-themes apply to it
  • Brief description of the talk
  • Brief description of the speaker
  • An explanation of why the presentation will provide value to the expected audience.

The 20-minute limit on the length of presentations is intended to ensure that speakers focus on communicating their most important messages. It will also allow a larger number of speakers (and, hence, a larger number of points of view) to be considered during the day.

A small number of speakers will also be invited to take part in panel Q&A discussions. These will be decided nearer the time of the conference.

Speaker submissions should be emailed as soon as possible to humanityplusuk AT gmail DOT com.

Speaker slots will be allocated as soon as good submissions are received, and announced on the conference blog. The call for submissions will be closed once there are no available speaking slots left.

Note: at this conference, all speakers will be required to provide slides (e.g. PowerPoint) to accompany their presentation. Speakers who fail to provide their slides to the organisers at least 48 hours before the start of the conference will be removed from the programme.

The organisers also regret that no speaker expenses, fees, or honoraria can be paid. However, speakers will receive free registration for the conference.

Footnote: For background, here’s the site for the corresponding 2010 conference, which attracted an audience of just under 200 people.

11 September 2010

No escape from technology

Filed under: books, evolution, Kurzweil, UKH+ — David Wood @ 1:51 am

We can never escape the bio-technological nexus and get “back to nature” – because we have never lived in nature.

That sentence, from the final chapter of Timothy Taylor’s “The Artificial Ape: How technology changed the course of human evolution“, sums up one of my key takeaways from this fine book.

It’s a book that’s not afraid to criticise giants.  Aspects of Charles Darwin’s thinking are examined and found wanting.  Modern day technology visionary Ray Kurzweil also comes under criticism:

The claims of Ray Kurzweil (that we are approaching a critical moment when biology will be overtaken by artificial constructs) … lack a critical historical – and prehistoric – perspective…

Kurzweil argues that the age of machines is upon us …  and that technology is reaching a point where it can innovate itself, producing ever more complex forms of artificial intelligence.  My argument in this book is that, scary or not, none of this is new.  Not only have we invented technology, from the stone tools to the wheeled wagon, from spectacles to genetic engineering, but that technology, within a framework of some 2 to 3 million years, has, physically and mentally, made us.

Taylor’s book portrays the emergence of humanity as a grand puzzle.  From a narrow evolutionary perspective, humans should not have come into existence.  Our heads are too large: in many cases, too large to pass through the narrow gap in the mother’s pelvis.  Theory suggests, and fossils confirm, that the prehistoric change from walking on all fours to walking upright had the effect of narrowing this gap in the pelvis.  The resulting evolutionary pressures should have resulted in smaller brains.  Yet, over the eons, the brain instead became larger and larger.

That’s just the start of the paradox.  The human baby is astonishingly vulnerable.  Worse, it makes its mother increasingly vulnerable too.  How could “survival of the fittest” select this ridiculously unfit outcome?

Of course, a larger brain has survival upsides as well as survival downsides.  It enables greater sociality, and the creation of sophisticated tools, including weapons.  But Taylor marshals evidence suggesting that the first use of tools by pre-humans long pre-dated the growth in head size.  This leads to the suggestion that two tools, in particular, played vital roles in enabling the emergence of the larger brain:

  • The invention of the sling, made from fur, that enabled mothers to carry their infants hands-free
  • The invention of cooking, with fire, that made it easier for nourishment to be quickly obtained from food.

To briefly elaborate the second point: walking upright means the digestive gut becomes compressed.  It becomes shorter.  There’s less time for nourishment to be extracted from food.  Moreover, a larger head increases the requirements for fast delivery of nourishment.  Again, from a narrow evolutionary point of view, the emergence of big-brained humans makes little sense.  But cooking comes to the rescue.  Cooking, along with the child-carrying sling, are two examples of technology that enable the emergence of humans.

The resulting creatures – us – are weaker, in a purely biological sense, than our evolutionary forebears.  Without our technological aids, we would fare poorly in any contest of survival with other apes.  It is only the combination of technology-plus-nature that makes us stronger.

We’re used to thinking that the development of tools took place in parallel with increasing pre-human intelligence.  Taylor’s argument is that, in a significant way, the former preceded the latter.  Without the technology, the pre-human brain could not expand.

The book uses this kind of thinking to address various other puzzles:

  • For example, the technology-impoverished natives from the tip of South America whom Darwin met on his voyage of discovery on the Beagle had eyesight that was far better than even the keenest-eyed sailor on the ship.  Technological progress went hand-in-hand with a weakening of biological power.
  • Taylor considers the case of the aborigines of Tasmania, who were technologically backward compared to those of mainland Australia: they lacked all clothing, and apparently could not make fire for themselves.  The archeological record indicates that the Tasmanian aborigines actually lost the use of various technologies over the course of several millennia.  Taylor reaches a different conclusion from popular writer Jared Diamond, who seems to take it for granted that this loss of technology made the aborigines weaker.  Taylor suggests that, in many ways, these aborigines became stronger and fitter, in their given environment, as they abandoned their clothing and their fishing tools.

There are many other examples – but I’ll leave it to you to read the book to find out more.  The book also has some fascinating examples of ancient tools.

I think that Taylor’s modifications of Darwin’s ideas are probably right.  What of his modifications of Kurzweil’s ideas?  Is the technological spurt of the present day really “nothing new”?  Well, yes and no.  I believe Kurzweil is correct to point out that the kinds of changes that are likely to be enabled by technology in the relatively near future – perhaps in the lifetime of many people who are already alive – are qualitatively different from anything that has gone before:

  • Technology might extend our lifespans, not just by a percentage, but by orders of magnitude (perhaps indefinitely)
  • Technology might create artificial intelligences that are orders of magnitude more powerful than any intelligence that has existed on this planet so far.

As I’ve already mentioned in my previous blogpost – which I wrote before starting to read Taylor’s book – Timothy Taylor is the guest speaker at the September meeting of the UK chapter of Humanity+.  People who attend will have the chance to hear more details of these provocative theories, and to query them directly with the author.  There will also be an opportunity to purchase signed copies of his book.  I hope to see some of you there!

I’ll give the last words to Dr Taylor:

Technology, especially the baby-carrying sling, allowed us to push back our biological limits, trading in our physical strength for an increasingly retained infantile early helplessness that allowed our brains to expand, forming themselves under increasingly complex artificial conditions…  In terms of brain growth, the high-water mark was passed some 40,000 years ago.  The pressure on that organ has been off ever since we started outsourcing intelligence in the form of external symbolic storage.  That is now so sophisticated through the new world information networking systems that what will emerge in future may no longer be controlled by our own volition…

[Technology] could also destroy our planet.  But there is no back-to-nature solution.  There never has been for the artificial ape.

29 August 2010

Understanding humans better by understanding evolution better

Filed under: collaboration, deception, evolution, RSA, UKH+ — David Wood @ 5:54 am

Many aspects of human life that at first seem weird and hard to explain can make a lot more sense once you see them from the viewpoint of evolution.

It was Richard Dawkins’ book “The Selfish Gene” which first led me to that conclusion, whilst I was still at university.  After “The Selfish Gene”, I read “Sociobiology: the new synthesis“, by E.O. Wilson, which gave other examples.  I realised it was no longer necessary to refer to concepts such as “innate wickedness” or “original sin” to explain why people often did daft things.  Instead, people do things because (in part) of underlying behavioural patterns which tended to make their ancestors more likely to leave successful offspring.

In short, you can deepen your understanding of  humans if you understand evolution.  On the whole, attempts to get humans to change their behaviour will be more likely to succeed if they are grounded in an understanding of the real factors that led humans to tend to behave as they do.

What’s more, you can understand humans better if you understand evolution better.

In a moment, I’ll come to some interesting new ideas about the role played by technology in evolution.  But first, I’ll mention two other ways in which an improved understanding of evolution sheds richer light on the human condition.

1. Evolution often results in sub-optimal solutions

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

I’ve written on this subject before, in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind” – so I won’t say much more about that particular topic right now.  But I can’t resist including a link to a fascinating video in which Richard Dawkins demonstrates the absurdly non-optimal route taken by the laryngeal nerve of the giraffe.  As Dawkins says in the video, this nerve “is a beautiful example of historical legacy, as opposed to design”.  If you haven’t seen this clip before, it’s well worth watching, and thinking about the implications.

2. Evolution can operate at multiple levels

For a full understanding of evolution, you have to realise it can operate at multiple levels:

  • At the level of individual genes
  • At the level of individual organisms
  • At the level of groups of cooperating organisms.

At each level, there are behaviours which exist because they made it more likely for an entity (at that level) to leave descendants.  For example, groups of animals tend to survive as a group, if individuals within that group are willing, from time to time, to sacrifice themselves for the sake of the group.

The notion of group selection is, however, controversial among evolutionary theorists.  Part of the merit of books such as The Selfish Gene was that they showed how altruistic behaviour could be explained, in at least some circumstances, from the point of view of the survival of individual genes.  If individual A sacrifices himself for the sake of individuals B and C within the same group, it may well be that B and C carry many of the same genes as individual A.  This analysis seems to deal with the major theoretical obstacle to the idea of group selection, which is as follows:

  • If individuals A1, A2, A3,… all have an instinct to sacrifice themselves for the sake of their wider group, it may well mean, other things being equal, that this group is initially more resilient than competing groups
  • However, an individual A4 who is individually selfish, within that group, will get the benefit of the success of the group, and the benefit of individual survival
  • So, over time, the group will tend to contain more individuals like the “free-rider” A4, and fewer like A1, A2, and A3
  • Therefore the group will degenerate into selfish behaviour … and this shows that the notion of “group selection” is flawed.

Nevertheless, I’ve been persuaded by writer David Sloan Wilson that the notion of group selection can still apply.  He gives an easy-to-read account of his ideas in his wide-ranging book “Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives“.  In summary:

  • Group selection can apply, provided the group also has mechanisms to reduce free-riding behaviour by individuals
  • For example, people in the group might have strong instincts to condemn and punish people who try to take excess advantage of the generosity of others
  • So long as these mechanisms keep the prevalence of free-riding below a certain threshold, a group can reach a stable situation in which the altruism of the majority continues to benefit the group as a whole.
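Wilson’s stabilising mechanism can also be illustrated with a toy replicator model (every number here is invented for illustration, and the function is my own sketch): altruists pay a personal cost to produce a shared group benefit, while free-riders who skip the cost suffer a sanction proportional to the fraction of altruists available to catch and punish them.

```python
# Toy replicator model with punishment (illustrative numbers only).
# Free-riders skip the cost of altruism, but risk a sanction that scales
# with the fraction of altruists enforcing group norms.
def altruist_fraction_with_punishment(p, cost=0.1, benefit=0.5, penalty=0.3,
                                      generations=100):
    """Iterate within-group selection, with free-riders punished by altruists."""
    for _ in range(generations):
        group_bonus = benefit * p
        w_altruist = 1.0 + group_bonus - cost
        w_freerider = 1.0 + group_bonus - penalty * p   # sanction grows with p
        mean_fitness = p * w_altruist + (1 - p) * w_freerider
        p = p * w_altruist / mean_fitness
    return p

# The tipping point is where cost == penalty * p, i.e. p = 0.1 / 0.3:
print(altruist_fraction_with_punishment(0.8))   # above threshold: altruism persists
print(altruist_fraction_with_punishment(0.2))   # below threshold: altruism collapses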

(To be clear: this kind of altruism generally looks favourably only at others within the same group.  People who are outside your group won’t benefit from it.  An injunction such as “love your neighbour as yourself” applied in practice only to people within your group – not to people outside it.)

To my mind, this makes sense of a great deal of the mental gymnastics that we can observe: people combine elements of surreptitiously trying to benefit themselves (and their own families) whilst seeking to appear to the group as a whole as “good citizens”.  In turn, we are adept at seeing duplicity and hypocrisy in others.  There’s been a long “arms race” in which brains have been selected that are better at playing both sides of this game.

Incidentally, for another book that takes an entertaining and audacious “big picture” view of evolution and group selection, see the barn-storming “The Lucifer Principle: A Scientific Expedition into the Forces of History” by Howard Bloom.

3. The role of technology in evolution

At first sight, technology has little to do with evolution.  Evolution occurred in bygone times, whilst technology is a modern development – right?

Not true. First, evolution is very much a present-day phenomenon (as well as something that has been at work throughout the whole history of life).  Pathogens evolve rapidly, under the pressure of successive regimes of anti-bacterial cocktails.  And there is evidence that biological evolution still occurs for humans.  A 2009 article in Time magazine was entitled “Darwin Lives! Modern Humans Are Still Evolving”.  Here’s a brief extract:

One study, published in PNAS in 2007 and led by John Hawks, an anthropologist at the University of Wisconsin at Madison, found that some 1,800 human gene variations had become widespread in recent generations because of their modern-day evolutionary benefits. Among those genetic changes, discovered by examining more than 3 million DNA variants in 269 individuals: mutations that allow people to digest milk or resist malaria and others that govern brain development.

Second, technology is itself an ancient phenomenon – including creative use of sticks and stones.  Benefits of very early human use of sticks and stones included fire, weapons, and clothing.  What’s more, the advantages of tool use enabled a strange side-effect in human genetic evolution: as we became technologically stronger, we also became biologically weaker.  The Time magazine article mentioned above goes on to state the following:

According to anthropologist Peter McAllister, author of “Manthropology: the Science of Inadequate Modern Man“, the contemporary male has evolved, at least physically, into “the sorriest cohort of masculine Homo sapiens to ever walk the planet.” Thanks to genetic differences, an average Neanderthal woman, McAllister notes, could have whupped Arnold Schwarzenegger at his muscular peak in an arm-wrestling match. And prehistoric Australian Aborigines, who typically built up great strength in their joints and muscles through childhood and adolescence, could have easily beat Usain Bolt in a 100-m dash.

Timothy Taylor, Reader in Archaeology at the University of Bradford and editor-in-chief of the Journal of World Prehistory, tackles this same topic in his recent book “The Artificial Ape: How Technology Changed the Course of Human Evolution“.

Amazon.com describes this book as follows:

A breakthrough theory that tools and technology are the real drivers of human evolution.

Although humans are one of the great apes, along with chimpanzees, gorillas, and orangutans, we are remarkably different from them. Unlike our cousins who subsist on raw food, spend their days and nights outdoors, and wear a thick coat of hair, humans are entirely dependent on artificial things, such as clothing, shelter, and the use of tools, and would die in nature without them. Yet, despite our status as the weakest ape, we are the masters of this planet. Given these inherent deficits, how did humans come out on top?

In this fascinating new account of our origins, leading archaeologist Timothy Taylor proposes a new way of thinking about human evolution through our relationship with objects. Drawing on the latest fossil evidence, Taylor argues that at each step of our species’ development, humans made choices that caused us to assume greater control of our evolution. Our appropriation of objects allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.

Weaving together lively discussions of major discoveries of human skeletons and artifacts with a reexamination of Darwin’s theory of evolution, Taylor takes us on an exciting and challenging journey that begins to answer the fundamental question about our existence: what makes humans unique, and what does that mean for our future?

In an interview in the New Scientist, Timothy Taylor gives more details of his ideas:

Upright female hominins walking the savannah had a real problem: their babies couldn’t cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female – higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn’t survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don’t last: things made from sinew, wood, leather and grasses…

Once you have slings to carry babies, you have broken a glass ceiling – it doesn’t matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb – they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment…

I’ve ordered Taylor’s book from Amazon and I expect it to be waiting for me at my home in the UK once I return from my current trip in Asia.  I’m also looking forward to hosting a discussion meeting on Saturday 11th Sept under the auspices of Humanity+ UK in London, where Timothy Taylor himself will be the main speaker. People on Facebook can register their interest in this meeting by RSVPing here.  There’s no charge to attend.

Another option to see Timothy Taylor lecture in person – for those able to spare time in the middle of the day on a Thursday (9th Sept) – will be at the RSA.  I expect there will be good discussion at both events, but the session at H+UK is longer (two hours, as opposed to just one at the RSA), and I expect more questions there about matters such as the likely role of technology radically re-shaping the future development of humans.

Footnote: of course, the fact that evolution guided our ancestors to behave in certain ways is no reason for us to want to continue to behave in these ways.  But understanding the former is, in my view, very useful background knowledge for being able to devise practical measures to change ourselves.

25 April 2010

Practical magic

Filed under: communications, Events, Humanity Plus, magic, marketing, UKH+ — David Wood @ 10:26 pm

I won’t reveal the content of the tricks.  That would be unfair on the performer.

Our dining group at Soho’s Little Italy restaurant had been pleasantly surprised by the unannounced entrance of a lady magician, before the orders for coffee were taken.  Where were we from, she asked.  Answers followed, hesitatingly: Belgium, Germany, Sweden, New York, London…

The atmosphere changed from guarded politeness to unguarded amazement as the magician blazed her way through some fast-paced sleight of hand with newspapers, water, money, ribbons, and playing cards.  Many of our group of hardened rationalists and technophiles were gasping with astonishment.  How did she do that?

It was a fitting end to a day that had seen a fair share of other kinds of magic.

Despite my nervous forebodings from earlier in the week, the Humanity+ UK2010 event saw a 100% turn-out of speakers, ran (near enough) to time, and covered a vast range of intriguing ideas about forthcoming new technology and the enhancement of humanity.  An audience approaching 200 people in London’s Conway Hall seemed to find much to think about, from what they’d heard.  Here’s a brief sample of online feedback so far:

Awesome conference – all your work paid off and then some!

Great conference today #hplusuk : thank you!

Enjoyed H+ event, esp @anderssandberg preso. Learnt about singularity, AI+, wireheads, future shock, SENS, protocells & more

Most enjoyable conference today. Thanks to the organisers and speakers

A few hours literally day dreaming, blown away by human cleverness.  These people should be allowed to talk on prime time on BBC regularly

Humanity+ today was terrific. I particulary enjoyed the talks from Amon Twyman – Expanding perception and transhumanist art, Natasha Vita-More – DIY Enhancement, Aubrey de Grey’s Life Expansion and Rachel Armstrong’s Living Technology

Great talk @davidorban how the #internetofthings could free us to be human again. Couldn’t agree more. #hplusuk

Love David Pearce, a true visionary! #hplusuk

Behind the scenes, a team of volunteers were ensuring that things ran as smoothly as possible – with a very early start in the morning following a late evening the previous day.  In my professional life over the years I’ve often been responsible for major events, such as the Symbian developer events and smartphone shows, where I had visibility of the amount of work required to make an event a success.  But in all these cases, I had a team of events managers working for me – including first-class professionals such as Amy Graller, Jo Butler, Liza Fox, and Alice Kenny, as well as brand managers, PR managers, and so on.  These teams shielded me from a great deal of the underlying drama of managing events.  In contrast, this time, our entire team were volunteers, and there was no alternative to getting our own hands dirty!  Huge amounts of thanks are due to everyone involved in pulling off this piece of magic.

Needless to say, some things fell short of perfection.  I heard mild-mannered grumbles:

  • That there wasn’t enough time for audience Q&A – and that too many of the questions that were raised from the floor were imprecise or unfocused;
  • That the audio from our experimental live streaming from the event was too choppy – due to shortcomings in the Internet connectivity from the event (something that will need to be fixed before I consider holding another similar event there);
  • That some of the presentations had parts that were too academic for some members of the audience, or assumed more background knowledge than people actually possessed;
  • That there should have been more journalists present, hearing material that deserves wide coverage.

The mail list used by the Humanity+ UK organising team is already reflecting on “what went well” and “what could be improved”.  Provisionally, we have in mind a follow-up event early next year.  We’re open for suggestions!  What scale should we have in mind?  What key objectives?

Because I was rushing around on the day, trying to ensure everything was ready for the next phase of the event, I found myself unable to concentrate for long on the presentations themselves.  (I’ll need to watch the videos of the talks, once they’re available.)  However, a few items successfully penetrated my mental fog.  I was particularly struck by descriptions of potential engineering breakthroughs:

This kind of information appeals to the engineer in me.  It’s akin to “practical magic”.

I was also struck by discussions of flawed societal priorities, covering instances where publications give undue prominence to matters of low importance, to the exclusion of more accurate coverage of technological issues.  For example, Nick Bostrom reported, during his talk “Reducing Existential Risks” that there are more scholarly papers on dung beetle reproduction than on the possibilities of human extinction.  And Aubrey de Grey gave examples of sensationalist headlines even in a normally responsible newspaper, for anti-aging news of little intrinsic value, whilst genuinely promising news receives scant coverage.

What is the solution to this kind of broken prioritisation? The discussion among the final speaker panel of the day helped to distill an answer.  The Humanity+ organisation, along with those who support its aims, need to become better at the discipline of marketing. Once we convey our essential messages more effectively, society as a whole should hear and understand what we are saying, and respond positively.  There’s a great art – and great skill – to the practice of communication.

Some people dislike the term “marketing”, as if it’s a swear word.  But I see it as follows.  In general terms, “marketing” for any organisation means:

  • Deciding on a strategic focus – as opposed to a scattergun approach;
  • Understanding how various news items or other pieces of information or activism might be received by people in the wider community;
  • Finding better ways to convey the chosen key messages;
  • Engaging within the wider community – listening more than talking – and learning in the light of that conversation;
  • Repeating the above steps, with increasingly better understanding and better execution.

At 5pm, we had to hurriedly leave the venue, because it was needed for another function starting at 6pm.  It was hard to move everyone outside the main hall, since there were so many intense group discussions happening.  Eventually, some of us started on a 20 minute walk through central London, from Holborn to Soho, for the post-event dinner at Little Italy.  The food was delicious, the waitresses coped well (and with many friendly smiles) with our many requests, and the conversation was first class.  The magician provided a great interlude.  I left the restaurant, several hours later, with a growing list of suggestions for topics for talks in the normal UKH+ monthly meetings that could bring in a good audience.  Happily, I also have a growing list of names of people who want to provide more active assistance in building an enhanced community of supporters of the aims of Humanity+.

28 March 2010

A video experiment: 20 priorities

Filed under: communications, futurist, Humanity Plus, presentation, UKH+ — David Wood @ 9:38 am


Video: 20 priorities for the coming decade

The video linked above is my attempt to address several different requirements:

  1. To follow up some ideas about the list of priorities I mentioned previously, tentatively named “The Humanity+ Agenda”;
  2. To find an interesting new way to help publicise the forthcoming (April 24th) “Humanity+ UK2010” event;
  3. To experiment with creating videos, to use for communications purposes, as a complement to textual blog posts.

As you can see, it’s based on PowerPoint – a tool I know well.

What I hadn’t previously appreciated about PowerPoint is that you can embed an audio narration, to play back automatically as the slides and animations progress.  So that’s what I decided to do.

First time round, I tried to ad lib remarks, as I progressed through the slides, but that didn’t work well.  Next, I wrote down an entire script, and read from that.  The result is a bit flat and jaded in places, and there are a few too many verbal fluffs for my liking.  When I try this again, I’ll set aside more time, and make myself re-do the narration for a slide each time I fluff a few words.

I also hit some bugs (and quirks) when using the “Record narration” features of PowerPoint.  Some of these seem to be known features, but not all:

  • A few seconds of the narration often get truncated from the end of each slide.  The workaround is to wait three seconds after finishing speaking, before advancing to the next slide;
  • The audio quality for the first slide was very crackly every time, no matter what I tried.  The workaround is to insert an extra “dummy” slide at the beginning, and to discard that slide before publishing;
  • There’s a pair of loud cracks at the start of each slide.  I don’t know any workaround for that;
  • Some of the timing, during playback, is slightly out of synch with what I recorded: animations on screen sometimes happen a few seconds before the accompanying audio stream is ready for them.

I used authorSTREAM as the site to store the presentation.  They offer the following features:

  • Support for playback of presentations containing audio narration;
  • Support for converting the presentation into video format.

The authorSTREAM service looks promising – I expect to use it again!

Footnote: I’ll update this posting shortly, with a copy of the video embedded, rather than linked.  (I still find video embedding to be a bit of a hit-or-miss process…)

16 March 2010

Practical measures for personal longevity

Filed under: aging, supplement, UKH+, UKTA — David Wood @ 12:06 pm

What steps do you take, to enhance your personal longevity?

That’s a question I still struggle to answer.  I believe that the next few decades will see  spectacular advances in science, technology, society, art, and culture, and I’d very much like to participate in these – in some cases as an observer, and in some cases as an engineer and activist.  Rationally, therefore, I should be taking steps to make it more likely that I will remain alive, fit, and healthy, throughout these coming decades.  But what are these steps?

That’s the topic of the UKH+ (Extrobritannia) meeting that will be taking place in London on the afternoon of Sunday 28th March: “Aging and dietary supplements – correcting some myths“.  The speaker will be Michael Price, who has been carrying out independent research for 30 years into questions of life extension and futurism.  The meeting is described as follows on the Extrobritannia meetings blog:

This talk will review where we are (and aren’t) with respect to understanding aging. It will cover theories of aging, and the (largely failed) promises of gerontologists and immortalists, past and present. It will then make some suggestions for what we can do now – including a discussion of which dietary supplements may work, which may not, and why dietary supplements are generally discredited.

The idea of a “pill to make you live longer” is alluring, and often drums up tabloid headlines.  A Google search for “pill to make you live longer” returns more than 900,000 results.  Some websites look more credible than others.  In addition to pills, these websites often talk about “superfoods”.  For example, the Maximum Life Foundation recently published an article “Seven Superfoods That Will Keep You Young” and listed the following:

  1. Whey Protein
  2. Raw, Organic Eggs
  3. Leafy Greens
  4. Broccoli
  5. Blueberries
  6. Chlorella
  7. Garlic, the “Stinking Rose”

The same article continues:

The Most Important Way to Slow Aging

Do you know what the number one way to slow aging in your body is? If you’re like most people, you don’t.

Most people don’t understand the importance of optimizing their insulin levels, as insulin is without a doubt THE major accelerant of aging. Fortunately, you can go a long way toward keeping your insulin levels healthy by reducing or eliminating grains and sugars from your diet.

This one crucial step, combined with nutritional typing and the inclusion of nature’s anti-aging miracle foods in your diet, can dramatically improve your health and longevity.

It is also crucial to include a comprehensive exercise program as that is another lifestyle choice that will radically improve the sensitivity of your insulin receptors and help to optimize your insulin levels.

Theories about superfoods, pills, and other dietary supplements depend in turn on theories of the causes of aging.  Some of these theories remain controversial – and I expect Michael will review the latest findings.  These theories include (to quote from Wikipedia, emphasis added):

  • Telomere theory: Telomeres (structures at the ends of chromosomes) have experimentally been shown to shorten with each successive cell division. Shortened telomeres activate a mechanism that prevents further cell multiplication. This may be an important mechanism of ageing in tissues like bone marrow and the arterial lining where active cell division is necessary. Importantly though, mice lacking telomerase enzyme do not show a dramatically reduced lifespan, as the simplest version of this theory would predict;
  • Free-Radical Theory: The idea that free radicals (unstable and highly reactive organic molecules, also named reactive oxygen species or oxidative stress) create damage that gives rise to symptoms we recognize as ageing.

Given the rich variety of different advice, it may be tempting – especially for people who are still in the first few decades of their lives – to take a different approach to hoping for a long life.  This approach is to trust that technological and medical improvements will happen quickly enough to be usefully applicable to you later in your life.  For example, someone in their twenties today can judge it as likely that significant improvements in anti-aging techniques will be widely available before they reach the age of sixty.

After all, life expectancy continues to rise.  Figures released last year by the UK’s Office for National Statistics (PDF) state that:

  • Life expectancy for males in the UK, at birth, was 73.4 years, in 1991-1993;
  • This figure rose to 77.4 in 2006-2008;
  • That’s a 4.0 year increase in life expectancy over that 15 year period.

People can follow the lead of anti-aging researcher Aubrey de Grey and talk about a future “longevity escape velocity” in which the increase in life expectancy over a 15 year period would be at least 15 years.  That’s an attractive vision, and de Grey makes a persuasive argument that it is credible.  What is far less certain, however, is:

  • The future timescale in which such remedies will become available;
  • Any variability in the performance of these future remedies, which might be influenced by the amount of damage our bodies have accumulated in the meantime.
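The contrast between today’s rate of progress and “escape velocity” can be put in back-of-the-envelope terms.  Suppose each calendar year ages you by one year, while medical progress hands back some years of remaining life expectancy: at the rate quoted above (4 years gained per 15, about 0.27 per year) the clock still runs out, whereas a gain of one year per year or better means it never does.  A minimal sketch, with an invented starting figure of 50 remaining years:

```python
# Back-of-the-envelope model of "longevity escape velocity" (illustrative
# starting figures, not a forecast). Each calendar year ages you by one year,
# while medical progress adds `gain_per_year` years of remaining expectancy.
def years_until_expectancy_exhausted(remaining=50.0, gain_per_year=4.0/15.0,
                                     horizon=500):
    """Count calendar years until remaining life expectancy hits zero
    (capped at `horizon`, which stands in for 'never' here)."""
    years = 0
    while remaining > 0 and years < horizon:
        remaining += gain_per_year - 1.0   # net change per calendar year
        years += 1
    return years

print(years_until_expectancy_exhausted())                   # historical rate
print(years_until_expectancy_exhausted(gain_per_year=1.0))  # escape velocity
```

At the historical rate, the 50 remaining years stretch to 69 calendar years; at or above one year gained per year, the counter never reaches zero and the loop only stops at the artificial horizon.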

These reservations increase the importance of addressing personal longevity issues sooner rather than later.  I’m reminded of the quotation that is attributed to Theodore Roosevelt:

Old age is like everything else. To make a success of it, you’ve got to start young

Finally, I’ll return to the question posed at the start of this article:

What steps do you take, to enhance your personal longevity?

At present, here’s my answer:

  • Have an annual medical checkup, to detect early warning signs of impending trouble;
  • Take (on doctor’s prescription) a statin pill in the evening, to lower cholesterol;
  • Take a collection of pills in the morning, including ginseng, multivitamins, garlic, and ginkgo biloba;
  • Eat “5 a day” portions of fruit and vegetables;
  • Pay attention to gum health, by cleaning between teeth as well as the teeth themselves;
  • Keep fit, by walking, and (increasingly) by spending time on the golf course or golf driving range;
  • Avoid cigarettes and excess alcohol;
  • Avoid dangerous sports.

I may have a different answer, after listening to Michael’s talk at the end of the month.

22 February 2010

Scepticism about the future of politics

Filed under: politics, UKH+, UKTA — David Wood @ 10:30 pm

I ought to have realised in advance that the topic for last Saturday’s UKH+ meeting would prove less popular than usual.

The first comment from the audience, once the speaker opened up for questions, said it all:

Frankly, I’m sceptical.

Recently, UKH+ meetings have attracted audiences of 40-70 people each time, for discussions about various aspects of the future of technology.  This time, we only had 20 people in the room.

The topic for this meeting was:

The future of politics. Can politicians prepare society for the major technology challenges ahead?

Another indication of scepticism about the meeting topic came in a tweet which suggested a different flow of causation:

Seems unlikely. Can technologists prepare politicians for the major technology challenges ahead?

Politicians are held in low regard by the public as a whole, and seem to be held in even lower regard by technology-savvy members of the public.  Even the speaker at the meeting, Darren Reynolds (chair of Burnley Liberal Democrats), accepted that politicians generally lag well behind breakthrough technological developments, such as the creation of the Internet.

So what was the point of organising the meeting?  Why should a group that focuses on potential breakthrough consequences of new technology be concerned about interactions with politicians?

Well, like it or not, politicians have a big influence over what happens in society:

  • They put in place regulatory frameworks, such as those governing new medical treatments and drugs with human enhancement potential;
  • They allocate central funds in favour of different kinds of research and development;
  • They (sometimes) galvanise public change.

Politicians can on occasion even be persuaded to take good decisions – as in the case which was the subject of discussion last time Darren spoke at a UKH+ meeting, back in April 2008:

  • Reasons to support and improve the Human Fertilisation & Embryology bill.

On Saturday, Darren argued that, even though the party political system is far from perfect, and often drives suboptimal behaviour, it can still achieve good outcomes.  Those of us who desire faster development and wise adoption of new technologies (such as nanotechnology, synthetic biology, robotics, and artificial intelligence) need to become more skilled in interacting with politicians.  What’s more, we have to recognise the emotional aspects of political dialogue:

  • It’s insufficient to focus on rational debate about the capabilities of technologies;
  • People make decisions based on their feelings, not just on their rationality;
  • We have to understand the concerns, aspirations, hopes, and fears of different people, and tailor our communications to fit;
  • We also have to understand the power structures within society, and take these into account in our change initiatives too.

That advice made good sense to me, with my background in advocating the merits of various smartphone technologies over the years.  “Politics” – whether involving people who call themselves politicians, or merely as a messy aspect of corporate life – is something we have to learn to deal with.  If we fail to raise techno-progressive issues to the mainstream political agenda, we shouldn’t be surprised if people who are generally techno-conservative or techno-ignorant occupy positions of power in society.  Becoming a skilled influencer is much more than becoming a skilled rationalist.

But there’s another connection between politics and technology.  It’s not just that politicians can influence the evolution and adoption of technology.  It’s that technology can enable improvements in how politics is conducted.  Discussion at the meeting raised good points about this connection:

  • The accelerating decline of old-style printed newspapers, and the rise of online media, alters the style of political discussion;
  • Technology could be used to enable more frequent votes, at lower cost, on matters where the public should be consulted;
  • Wider collaboration, such as used in open source software projects, or for Wikipedia, might enable better decisions to be reached;
  • Over time, more and more decisions could be referred to AI systems, to generate recommendations;
  • In due course, we’ll have to decide whether AIs deserve votes (the slogan “one man, one vote” will need re-thinking).

After the meeting finished, I found an interesting website with ideas along some of the same lines.  The Metagovernment project describes itself as follows:

The mission of the Metagovernment project is to support the development and use of Internet tools which enable the members of any community to fully participate in the governance of that community. We are a global group of people working on various projects which further this goal.

We expect governance software to be adopted first in small communities, and then to spread outward with the potential to gradually replace many institutions of representative democracy with a new kind of social organization called collaborative governance.

We conceive a world where every person, without exception, is able to substantively participate in any governance structure in which they have an interest. We envision governance which is not only more open, free, and democratic; but also which is more effective and less fallible than pre-Internet forms of governance…

I haven’t had time to look into Metagovernment more fully, but it’s potentially a good topic for a future meeting on “The future of politics”.

A different approach was (half-jokingly?) suggested by James Clement on Facebook:

I’m not so sure about ” putting choices in the hands of ordinary people…”  I’ll wait for our AI Overlords to save us!

Footnote: Darren Reynolds has been a pro-technology activist since at least 1998, when he was part of the team that produced the original “Transhumanism Declaration”:

  1. Humanity will be radically changed by technology in the future. We foresee the feasibility of redesigning the human condition, including such parameters as the inevitability of ageing, limitations on human and artificial intellects, unchosen psychology, suffering, and our confinement to the planet earth.
  2. Systematic research should be put into understanding these coming developments and their long-term consequences.
  3. Transhumanists think that by being generally open and embracing of new technology we have a better chance of turning it to our advantage than if we try to ban or prohibit it…

31 January 2010

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?“, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet“.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point the finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder: what’s the problem with that?

A 2002 Steven B. Harris article in the Skeptic magazine, “The return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome“, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies pursuing enlightened self-interest frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers programmed to maximise income and profits for their owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)
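This failure mode – a rule that is sound locally but destructive when magnified – can be caricatured in a few lines of code.  The model, the numbers, and the `crash_probability` relationship below are entirely my own invention, purely for illustration:

```python
# Toy illustration: each automated trader follows the same individually
# rational rule (take on leverage to maximise its own profit); the risk
# only appears when the rule is magnified across many traders.

def crash_probability(total_leverage):
    """Assumed toy relationship: crash risk grows with aggregate leverage."""
    return min(1.0, total_leverage / 100.0)

def simulate(n_traders, leverage_per_trader):
    total = n_traders * leverage_per_trader
    return crash_probability(total)

# One trader behaving this way is essentially harmless:
print(simulate(1, 5))    # 0.05
# The same rule, magnified across fifty automated traders, is not:
print(simulate(50, 5))   # 1.0
```

The point of the sketch is that nothing changes in the rule itself between the two runs – only its degree of amplification.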

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance to subsequently push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself) we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase their speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach apes calculus, in order to gain their approval for how much thrust a rocket should apply to deflect a rapidly approaching, Earth-threatening meteorite.  The computers may well decide that there’s no time to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.
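The reasoning behind the airplane analogy is simple expected value: a small probability multiplied by a large enough loss can outweigh a larger probability of a smaller loss.  A toy calculation makes the point (the units and numbers are purely illustrative, not estimates of anything):

```python
# Toy expected-value comparison: why a low-probability, high-stakes
# outcome can still dominate a decision.

def expected_loss(p_disaster, loss_if_disaster, loss_otherwise=0.0):
    """Expected loss of a two-outcome gamble."""
    return p_disaster * loss_if_disaster + (1 - p_disaster) * loss_otherwise

# The 10%-chance exploding airplane (arbitrary "disutility" units):
plane = expected_loss(0.10, 1_000_000)

# A mere 1% chance of unfriendly human-level AI, with far larger stakes:
ai = expected_loss(0.01, 100_000_000)

print(plane)       # 100000.0
print(ai)          # 1000000.0
print(ai > plane)  # True: the rarer event still dominates
```

So even a sceptic who assigns only a 1% probability to the scenario cannot dismiss it without also arguing that the stakes are small.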

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation rather than design as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE) – making a low-level copy of a human brain.  The idea is sometimes also called “uploads”, since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from a human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.
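The “virtuous cycle” Spector describes – user interactions feeding back into the system’s behaviour – can be caricatured in a few lines.  This is a toy model of my own, bearing no relation to any actual Google system:

```python
# Toy "virtuous cycle": user clicks feed back into ranking, so the
# humans using the system and the system itself improve each other.
from collections import defaultdict

scores = defaultdict(float)   # document -> learned relevance score

def rank(docs):
    """Order documents by learned score, best first (stable for ties)."""
    return sorted(docs, key=lambda d: scores[d], reverse=True)

def record_click(doc):
    """Treat a click as weak evidence that the document was relevant."""
    scores[doc] += 1.0

docs = ["a", "b", "c"]
# Users repeatedly click "c"; the system learns to rank it first.
for _ in range(3):
    record_click("c")
print(rank(docs))  # ['c', 'a', 'b']
```

Even this caricature shows the hybrid character of the loop: the ranking is not designed in, it emerges from human behaviour that the machine aggregates.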

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+, UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above.
