dw2

1 October 2019

“Lifespan” – a book to accelerate the emerging paradigm change in healthcare

Harvard Medical School professor David Sinclair has written a remarkable book that will do for an emerging new paradigm in healthcare what a similarly remarkable book by Oxford University professor Nick Bostrom has been doing for an emerging new paradigm in artificial intelligence.

In both cases, the books act to significantly increase the tempo of the adoption of the new paradigm.

Bostrom’s book, Superintelligence – subtitled Paths, Dangers, Strategies – caught the attention of Stephen Hawking, Bill Gates, Elon Musk, Barack Obama, and many more, who have collectively amplified its message. That message is the need to dramatically increase the priority of research into the safety of systems that contain AGI (artificial general intelligence). AGI will be a significant step up in capability from today’s “narrow” AI (which includes deep learning as well as “good old fashioned” expert systems), and therefore requires a significant step up in the capability of safety engineering. In the wake of a wider appreciation of the scale of the threat (and, yes, the opportunity) ahead, funding has been provided for important initiatives such as the Future of Life Institute, OpenAI, and the Partnership on AI. Thank goodness!

Sinclair’s book, Lifespan – subtitled Why We Age, and Why We Don’t Have To – is poised to be read, understood, and amplified by a similar group of key influencers of public thinking. In this case, the message is that a transformation is at hand in how we think about illness and health. Rather than a “disease first” approach, what is now possible – and much more desirable – is an “aging first” approach that views aging as the treatable root cause of numerous diseases. In the wake of a wider appreciation of the scale of the opportunity ahead (and, yes, the threat to society if healthcare continues along its current outdated disease-first trajectory), funding is likely to be provided to accelerate research into the aging-first paradigm. Thank goodness!

Bostrom’s book drew upon the ideas of earlier writers, including Eliezer Yudkowsky and Ray Kurzweil. It also embodied decades of Bostrom’s own thinking and research into the field.

Sinclair’s book likewise builds upon ideas of earlier writers, including Aubrey de Grey and (again) Ray Kurzweil. Again, it also embodies decades of Sinclair’s own thinking and research into the field.

Both books are occasionally heavy going for the general reader – especially for a general reader who is in a hurry. But both take care to explain their thinking step by step. Both contain many human elements in their narrative. Neither book contains the last word on its subject matter – and, indeed, parts will likely prove to be incorrect in the fullness of time. But both represent giant steps forward for the paradigms they support.

The above remarks about the book Lifespan are part of what I’ll be talking about later today, in Brussels, at an open lunch event to mark the start of this year’s Longevity Month.

Longevity Month is an opportunity to celebrate recent progress, and to anticipate faster progress ahead, for the paradigm shift mentioned above:

  • Rather than studying each chronic disease separately, science should prioritise study of aging as the common underlying cause (and aggravator) of numerous chronic diseases
  • Rather than treating aging as an unalterable “fact of nature” (which, by the way, it isn’t), we should regard aging as an engineering problem which is awaiting an engineering solution.

In my remarks at this event, I’ll also be sharing my overall understanding of how paradigm shifts take place (and the opposition they face).

I’ll run through a simple explanation of the ideas behind the “aging-first” paradigm – a paradigm of regular medical interventions to repair or remove the damage caused at cellular and inter-cellular levels as a by-product of normal human metabolism.

Finally, I’ll be summarising the growing momentum of progress in a number of areas, and suggesting how that momentum has the potential to address the key remaining questions in the field.

In addition to me, four other speakers are scheduled to take part in today’s event.

It should be a great occasion!

15 September 2016

Two cheers for “Technology vs. Humanity”

On Saturday I had the pleasure of hosting Swiss futurist Gerd Leonhard at a London Futurists event in central London. The meetup was organised in conjunction with the publication of Gerd’s new book, “Technology vs. Humanity”.

This three-minute video from his website gives a fast-paced introduction to Gerd’s thinking.

The subtitle of Gerd’s book indicates the emphasis that comes across loud and clear in its pages: “The coming clash between man and machine”. I have mixed feelings about that emphasis. Yes, a clash between humanity and technology is one of the possible scenarios ahead. But it’s by no means set in stone. If we are smart, much better futures lie ahead. These better futures combine the best of present-day humanity with the fruits of technological development, creating what I would call a Humanity+ future.

In the Humanity+ future, technology is used to enhance humanity – making us healthier, kinder, smarter, wiser, more compassionate, and more engaged. In contrast, Gerd expects that technology will result in a downgrade of humanity.

The video of Saturday’s London Futurists event records some dialogue on exactly that point. If you’ve got a spare 60 minutes, it’s worth watching the video all the way through. (The Q&A starts after 44 minutes.)

You’ll see that Gerd is an engaging, entertaining presenter, with some stunning visuals.

Hip, hip…

Overall, I am happy to give two cheers to Gerd’s new book – two loud cheers.

The first cheer is for the book’s many fine examples of the accelerating pace of change. For example, chapter three reviews “ten megashifts”. Gerd starts his presentation with the bold claim that “Humanity will change more in the next 20 years than in the previous 300 years”. He may well be right. Relatedly, Gerd makes a strong case that major change can sneak up on people “gradually and then suddenly”. That’s the nature of exponential change.

The second cheer is even louder than the first one: I completely agree with Gerd that we need to carefully consider the pros and cons of adopting technology in ever greater areas of our lives. He has a brilliant slide in which humanity’s attitude towards a fast-improving piece of technology changes from “Magic” to “Manic” and then to “Toxic”. To avoid such progressions, Gerd recommends the formation of something akin to a “Humanity Protection Agency”, similar to the “Environmental Protection Agency” that constrains corporations from polluting and despoiling the environment. Gerd emphasises: just because it is possible to digitise aspects of our lives, it doesn’t mean we should digitise these aspects. More efficient doesn’t always mean better. More profit doesn’t always mean better. More experiences don’t always mean better – and so on. Instead of rushing ahead blindly, we need what Gerd calls “exponentially increased awareness”. He’s completely right.

So I am ready to say, “Hip, hip…” – but I hold back from the third cheer (“hurrah”).

Yes, the book can be a pleasure to read, with its clever turns of phrase and poignant examples. But to my mind, the advice in the book will make things unnecessarily hard for humanity – dangerously hard for humanity. That advice will unnecessarily handicap “Team Human”, the very team the book says it wants to support.

Specifically:

  • The book has too rosy a view of the present state of human nature
  • The book has too limited a view of the positive potential of technology to address the key shortcomings in human nature.

Let’s take these points one at a time.

Human nature

The book refers to human unpredictability, creativity, emotion, and so on, and insists that these aspects of human nature be protected at all costs. Even if machines could do the same tasks as humans, with greater predictability and fewer histrionics, it doesn’t follow that we should hand these tasks over to machines. Thus far, I agree with the argument.

But humans also from time to time manifest a host of destructive characteristics: short-sightedness, stupidity, vengefulness, tribalism, obstructiveness, spitefulness, and so on. It’s possible that these characteristics were, on the whole, useful to humanity in earlier, simpler stages of civilisation. But in present times, with powerful weaponry all around us, these characteristics threaten to plunge humanity into a new dark age.

(I touched on this argument in a recent Transpolitica blogpost, “Flawed humanity, flawed politics”.)

Indeed, despite huge efforts from people all over the globe, the planet is still headed for a potentially devastating rise in temperature, due to runaway climate change. What’s preventing an adequate response to this risk is a combination of shortcomings in human society, human politics, human economics, and – not least – human nature.

It’s a dangerous folly to overly romanticise human nature. We humans can, at times, be awful brutes. Our foibles aren’t just matters for bemusement. Our foibles should terrify us.

I echo the thoughts expressed in a landmark 2012 Philosophy Now article by Professors Julian Savulescu and Ingmar Persson, “Unfit for the Future: The Urgent Need for Moral Enhancement”:

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.

We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…

Underestimating technology

This brings me to the second point where Gerd’s book misfires: its dogmatic dismissal of the possibility that technology could make any significant improvement in “soft” areas of human life, such as emotional intelligence, creativity, and intuition. The book asserts that whilst software might be able to mimic emotions, these emotions will have no real value. For example, no computer would be able to talk to a two-year-old human child and hold its attention.

This assessment demonstrates a major blindspot regarding the ways in which software can already provide strong assistance for people suffering from autism, self-doubt, early-stage dementia, or other emotional or social deficits. As one example, consider a Guardian article from last year, “How robots are helping children with autism”.

Consider also this comment from Dr Lucy Maddox, an NHS clinical psychologist and lecturer:

There are loads of [computer] apps that claim to use psychological principles to increase wellbeing in some way, encouraging you to keep track of your mood, to manage worry, to influence what you dream about … Can an app really distil something useful from psychological research and plug you into some life-influencing wisdom? I think some can…

This discussion brings to mind the similar dismissals, from the 1970s and early 1980s, of the possibility that the technology of in-vitro fertilisation (“test-tube babies”) could result in fully human babies. The suggestion was that any such “devilish” technology would result in babies that somehow lacked souls. Here’s a comment by Philip Ball in New Humanist:

Doubts about the artificial being’s soul are still with us, although more often expressed now in secular terms: the fabricated person is denied genuine humanity. He or she is thought to be soulless in the colloquial sense: lacking love, warmth, human feeling. In a poll conducted for Life in the early days of IVF research, 39 per cent of women and 45 per cent of men doubted that an “in vitro child would feel love for family”. (Note that it is the sensibilities of the child, not of the parents, that are impaired.) A protest note placed on the car of a Californian fertility doctor when he first began offering an IVF service articulated the popular view more plainly: “Test tube babies have no souls.”

In 1978 Leon Kass – later said to be the favourite bioethicist of President George W. Bush – thundered his opposition to in-vitro fertilisation as follows:

More is at stake [with IVF research] than in ordinary biomedical research or in experimenting with human subjects at risk of bodily harm. At stake is the idea of the humanness of our human life and the meaning of our embodiment, our sexual being, and our relation to ancestors and descendants.

These comments by Kass strongly echo the themes developed by Gerd in Technology vs. Humanity.

It turned out, contrary to Kass’s dire forecasts, that human society was more than capable of taking in its stride the opportunities provided by IVF technology. Numerous couples found great joy through that technology. Numerous wonderful children were brought into existence in that way.

It ought to be the same, in due course, with the opportunities provided by technologies to enhance our emotional intelligence, our creativity, our intuition, our compassion, our sociability, and so on. Applied wisely and thoughtfully, these technologies will allow the full potential of humanity to be reached – rather than being sabotaged by our innate shortcomings.

Emphatically, I’m not saying we should be rushing into anything. We need to approach the potential offered by these new technologies with great thoughtfulness. And with a more open mind than Gerd displays.

Dogmatism

I found my head shaking in disbelief at many of the paragraphs in Technology vs. Humanity. For example, here’s Gerd’s description of the capabilities of Virtual Reality (VR):

Virtual travel technologies such as Facebook’s Oculus Rift, Samsung VR, and Microsoft’s HoloLens are just beginning to provide us with a very real feeling for what it would be like to raft the Amazon River or climb Mount Fuji. These are already very interesting experiences that will certainly change our way of experiencing reality, of communicating, of working, and of learning… [but] there is still a huge difference between these new ways to experience alternate realities and real life. Picture yourself standing in the middle of a crowded bazaar in Mumbai, India, for just two minutes. Then, compare the memories you would have accumulated in a very short time with those from a much longer but simulated experience using the most advanced systems available today or in the near future. The smells, the sounds and sights – all of these are a thousand times more intense than what even the most advanced gadgetry, fuelled by exponential gains, could ever hope to simulate.

“A thousand times more intense”? More intense than what “the most advanced gadgetry could ever hope to simulate”? Ever?! I see these sweeping claims as evidence of a closed mind. The advice from elsewhere in the book was better: “gradually, and then suddenly”. The intensity of the emotional experience from VR technology is likely to increase gradually, and then suddenly.

Opening the book at another page, I find my attention drawn to the exaggeration in another passage, in the discussion of the possibility of ectogenesis (growing a baby outside a woman’s body in an artificial womb):

I believe it would be utterly dehumanising and detrimental for a baby to be born in such a way.

During his presentation at London Futurists, Gerd labelled the technology of ectogenesis as “jerk tech”. In discussion in the Marlborough Arms pub after the meetup, several women attendees remarked that they thought only a man could take such a high-handed, dismissive approach to this technology. They emphasised that they were unsure whether they would personally want to take advantage of ectogenesis, but they thought the possibility should be kept open.

Note: for a book that takes a much more thoughtful approach to the possibilities of using technology to transform genetic choice, I recommend “Babies by Design: The Ethics of Genetic Choice” by Ronald Green.

Transhumanism

The viewpoint I’m advocating, in this review of Technology vs. Humanity, is transhumanism:

…a way of thinking about the future that is based on the premise that the human species in its current form does not represent the end of our development but rather a comparatively early phase.

Oxford philosopher Nick Bostrom puts it like this:

Transhumanists view human nature as a work-in-progress, a half-baked beginning that we can learn to remold in desirable ways. Current humanity need not be the endpoint of evolution. Transhumanists hope that by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman beings with vastly greater capacities than present human beings have.

One of the best introductions to the ideas of transhumanism is in the evocative “Letter to Mother Nature” written in 1999 by Max More. It starts as follows:

Dear Mother Nature:

Sorry to disturb you, but we humans—your offspring—come to you with some things to say. (Perhaps you could pass this on to Father, since we never seem to see him around.) We want to thank you for the many wonderful qualities you have bestowed on us with your slow but massive, distributed intelligence. You have raised us from simple self-replicating chemicals to trillion-celled mammals. You have given us free rein of the planet. You have given us a life span longer than that of almost any other animal. You have endowed us with a complex brain giving us the capacity for language, reason, foresight, curiosity, and creativity. You have given us the capacity for self-understanding as well as empathy for others.

Mother Nature, truly we are grateful for what you have made us. No doubt you did the best you could. However, with all due respect, we must say that you have in many ways done a poor job with the human constitution. You have made us vulnerable to disease and damage. You compel us to age and die—just as we’re beginning to attain wisdom. You were miserly in the extent to which you gave us awareness of our somatic, cognitive, and emotional processes. You held out on us by giving the sharpest senses to other animals. You made us functional only under narrow environmental conditions. You gave us limited memory, poor impulse control, and tribalistic, xenophobic urges. And, you forgot to give us the operating manual for ourselves!

What you have made us is glorious, yet deeply flawed. You seem to have lost interest in our further evolution some 100,000 years ago. Or perhaps you have been biding your time, waiting for us to take the next step ourselves. Either way, we have reached our childhood’s end.

We have decided that it is time to amend the human constitution.

We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence. We intend to make you proud of us. Over the coming decades we will pursue a series of changes to our own constitution, initiated with the tools of biotechnology guided by critical and creative thinking. In particular, we declare the following seven amendments to the human constitution…

In contrast, this is what Gerd says about transhumanism (with similar assertions being scattered throughout his book):

Transhumanism, with its lemming-like rush to the edge of the universe, represents the scariest of all present options.

What “lemming-like rush”? Where’s the “lemming-like rush” in the writings of Nick Bostrom (who co-founded the World Transhumanist Association in 1998)? Recall from his definition,

…by responsible use of science, technology, and other rational means we shall eventually manage to become posthuman beings with vastly greater capacities than present human beings have

And consider the sixth proposed “human constitutional amendment” from the letter by Max More:

Amendment No.6: We will cautiously yet boldly reshape our motivational patterns and emotional responses in ways we, as individuals, deem healthy. We will seek to improve upon typical human emotional excesses, bringing about refined emotions. We will strengthen ourselves so we can let go of unhealthy needs for dogmatic certainty, removing emotional barriers to rational self-correction.

As Max emphasised earlier in his Letter,

We do not do this lightly, carelessly, or disrespectfully, but cautiously, intelligently, and in pursuit of excellence

Against Gerd’s puzzling claim that transhumanists are blind to the potential risks of new technology, let me offer as counter-evidence the nearest thing to a canonical document uniting transhumanist thinking – the “Transhumanist Declaration”. Of its eight clauses, at least half emphasise the potential drawbacks of an uncritical approach to technology:

  1. Humanity stands to be profoundly affected by science and technology in the future. We envision the possibility of broadening human potential by overcoming aging, cognitive shortcomings, involuntary suffering, and our confinement to planet Earth.
  2. We believe that humanity’s potential is still mostly unrealized. There are possible scenarios that lead to wonderful and exceedingly worthwhile enhanced human conditions.
  3. We recognize that humanity faces serious risks, especially from the misuse of new technologies. There are possible realistic scenarios that lead to the loss of most, or even all, of what we hold valuable. Some of these scenarios are drastic, others are subtle. Although all progress is change, not all change is progress.
  4. Research effort needs to be invested into understanding these prospects. We need to carefully deliberate how best to reduce risks and expedite beneficial applications. We also need forums where people can constructively discuss what should be done, and a social order where responsible decisions can be implemented.
  5. Reduction of existential risks, and development of means for the preservation of life and health, the alleviation of grave suffering, and the improvement of human foresight and wisdom should be pursued as urgent priorities, and heavily funded.
  6. Policy making ought to be guided by responsible and inclusive moral vision, taking seriously both opportunities and risks, respecting autonomy and individual rights, and showing solidarity with and concern for the interests and dignity of all people around the globe. We must also consider our moral responsibilities towards generations that will exist in the future.
  7. We advocate the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.
  8. We favour allowing individuals wide personal choice over how they enable their lives. This includes use of techniques that may be developed to assist memory, concentration, and mental energy; life extension therapies; reproductive choice technologies; cryonics procedures; and many other possible human modification and enhancement technologies.

It’s a pity that the editors and reviewers of Gerd’s book did not draw his attention to the many mistakes and misunderstandings of transhumanism that his book contains. My best guess is that the book was produced in a rush. (That would explain the many other errors of fact that are dotted throughout the various chapters.)

To be clear, I accept that many criticisms can be made regarding transhumanism. In an article I wrote for H+Pedia, I collected a total of 18 different criticisms. In that article, I seek to show, in each case,

  • Where these criticisms miss the mark
  • Where these criticisms have substance – so that transhumanists ought to pay attention.

That article – like all other H+Pedia articles – is open for further contributions. Either edit the page directly. Or raise some comments on the associated “Discussion” page.

The vital need for an improved conversation

The topics covered in Technology vs. Humanity have critical importance. A much greater proportion of humanity’s collective attention should be focused onto these topics. To that extent, I fully support Gerd’s call for an improved global conversation on the risks and opportunities of the forthcoming impact of accelerating technology.

During that conversation, each of us will likely find some of our opinions changing, as we move beyond an initial “future shock” to a calmer, more informed reflection on the possibilities. We need to move beyond both a breathless “gee whiz” and an anguished “oh this is awful”.

The vision of an improved conversation about the future is what has led me to invest so much of my own time over the years in the London Futurists community.

More recently, that same vision has led me to support the H+Pedia online wiki – a Humanity+ project to spread accurate, accessible, non-sensational information about transhumanism and futurism among the general public.

As the welcome page states,

H+Pedia welcomes constructive contributions from everyone interested in the future of humanity.

By all means get involved! Team Human deserves your support. Team Human also deserves the best information, free of dogmatism, hype, insecurity, or commercial pressures. Critically, Team Human deserves not to be deprived of access to the smart transformational technology of the near future that can become the source of its greatest flourishing.

11 April 2015

Opening Pandora’s box

Should some conversations be suppressed?

Are there ideas which could prove so incendiary, and so provocative, that it would be better to shut them down?

Should some concepts be permanently locked into a Pandora’s box, lest they fly off and cause too much chaos in the world?

As an example, consider this oft-told story from the 1850s, about the dangers of spreading the idea that humans had evolved from apes:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

More recently, there’s been a growing worry about spreading the idea that AGI (Artificial General Intelligence) could become an apocalyptic menace. The worry is that any discussion of that idea could lead to public hostility against the whole field of AGI. Governments might be panicked into shutting down these lines of research. And self-appointed militant defenders of the status quo might take up arms against AGI researchers. Perhaps, therefore, we should avoid any public mention of potential downsides of AGI. Perhaps we should pray that these downsides don’t become generally known.

The theme of armed resistance against AGI researchers features in several Hollywood blockbusters. In Transcendence, a radical anti-tech group named “RIFT” track down and shoot the AGI researcher played by actor Johnny Depp. RIFT proclaims “revolutionary independence from technology”.

As blogger Calum Chace has noted, just because something happens in a Hollywood movie, it doesn’t mean it can’t happen in real life too.

In real life, “Unabomber” Ted Kaczynski was so fearful about the future destructive potential of technology that he sent 16 bombs to targets such as universities and airlines over the period 1978 to 1995, killing three people and injuring 23. Kaczynski spelt out his views in a 35,000-word essay, Industrial Society and Its Future.

Kaczynski’s essay stated that “the Industrial Revolution and its consequences have been a disaster for the human race”, defended his series of bombings as an extreme but necessary step to attract attention to how modern technology was eroding human freedom, and called for a “revolution against technology”.

Anticipating the next Unabombers

The Unabomber may have been an extreme case, but he’s by no means alone. Journalist Jamie Bartlett takes up the story in a chilling Daily Telegraph article, “As technology swamps our lives, the next Unabombers are waiting for their moment”:

In 2011 a new Mexican group called the Individualists Tending toward the Wild were founded with the objective “to injure or kill scientists and researchers (by the means of whatever violent act) who ensure the Technoindustrial System continues its course”. In 2011, they detonated a bomb at a prominent nano-technology research centre in Monterrey.

Individualists Tending toward the Wild have published their own manifesto, which includes the following warning:

We employ direct attacks to damage both physically and psychologically, NOT ONLY experts in nanotechnology, but also scholars in biotechnology, physics, neuroscience, genetic engineering, communication science, computing, robotics, etc. because we reject technology and civilisation, we reject the reality that they are imposing with ALL their advanced science.

Before going any further, let’s agree that we don’t want to inflame the passions of would-be Unabombers, RIFTs, or ITWs. But that shouldn’t lead to whole conversations being shut down. It’s the same with criticism of religion. We know that, when we criticise various religious doctrines, it may inflame jihadist zeal. How dare you offend our holy book, and dishonour our exalted prophet, the jihadists thunder, when they cannot bear to hear our criticisms. But that shouldn’t lead us to cowed silence – especially when we’re aware of ways in which religious doctrines are damaging individuals and societies (by opposition to vaccinations or blood transfusions, or by denying female education).

Instead of silence (avoiding the topic altogether), what these worries should lead us to is a more responsible, inclusive, measured conversation. That applies for the drawbacks of religion. And it applies, too, for the potential drawbacks of AGI.

Engaging conversation

The conversation I envisage will still have its share of poetic effect – with risks and opportunities temporarily painted more colourfully than a fully sober evaluation warrants. If we want to engage people in conversation, we sometimes need to make dramatic gestures. To squeeze a message into a 140-character tweet, we sometimes have to trim the corners of proper spelling and punctuation. Similarly, to make people stop in their tracks, and start to pay attention to a topic that deserves fuller study, some artistic licence may be appropriate. But only if that artistry is quickly backed up with a fuller, more dispassionate, balanced analysis.

What I’ve described here is a two-phase model for spreading ideas about disruptive technologies such as AGI:

  1. Key topics can be introduced, in vivid ways, using larger-than-life characters in absorbing narratives, whether in Hollywood or in novels
  2. The topics can then be rounded out, in multiple shades of grey, via film and book reviews, blog posts, magazine articles, and so on.

Since I perceive both the potential upsides and the potential downsides of AGI as being enormous, I want to enlarge the pool of people who are thinking hard about these topics. I certainly don’t want the resulting discussion to slide off to an extreme point of view which would cause the whole field of AGI to be suspended, or which would encourage active sabotage and armed resistance against it. But nor do I want the discussion to wither away, in a way that would increase the likelihood of adverse unintended outcomes from aberrant AGI.

Welcoming Pandora’s Brain

That’s why I welcome the recent publication of the novel “Pandora’s Brain”, by the above-mentioned blogger Calum Chace. Pandora’s Brain is a science and philosophy thriller that transforms a series of philosophical concepts into vivid life-and-death conundrums that befall the characters in the story. Here’s how another science novelist, William Hertling, describes the book:

Pandora’s Brain is a tour de force that neatly explains the key concepts behind the likely future of artificial intelligence in the context of a thriller novel. Ambitious and well executed, it will appeal to a broad range of readers.

In the same way that Suarez’s Daemon and Naam’s Nexus leaped onto the scene, redefining what it meant to write about technology, Pandora’s Brain will do the same for artificial intelligence.

Mind uploading? Check. Human equivalent AI? Check. Hard takeoff singularity? Check. Strap in, this is one heck of a ride.

Mainly set in the present day, the plot unfolds in an environment that seems reassuringly familiar, but which is overshadowed by a combination of both menace and promise. Carefully crafted, and absorbing from its very start, the book held my rapt attention throughout a series of surprise twists, as various personalities react in different ways to a growing awareness of that menace and promise.

In short, I found Pandora’s Brain to be a captivating tale of developments in artificial intelligence that could, conceivably, be just around the corner. The imminent possibility of these breakthroughs causes characters in the book to re-evaluate many of their cherished beliefs, and will lead most readers to several “OMG” realisations about their own philosophies of life. Apple carts that are upended in the process are unlikely ever to be righted again. Once the ideas have escaped from the pages of this Pandora’s box of a book, there’s no going back to a state of innocence.

But as I said, not everyone is enthralled by the prospect of wider attention to the “menace” side of AGI. Each new novel or film in this space has the potential of stirring up a negative backlash against AGI researchers, potentially preventing them from doing the work that would deliver the powerful “promise” side of AGI.

The dual potential of AGI

The tremendous dual potential of AGI was emphasised in an open letter published in January by the Future of Life Institute:

There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

“The eradication of disease and poverty” – these would be wonderful outcomes from the project to create AGI. But the lead authors of that open letter, including physicist Stephen Hawking and AI professor Stuart Russell, sounded their own warning note:

Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks. In the near term, world militaries are considering autonomous-weapon systems that can choose and eliminate targets; the UN and Human Rights Watch have advocated a treaty banning such weapons. In the medium term, as emphasised by Erik Brynjolfsson and Andrew McAfee in The Second Machine Age, AI may transform our economy to bring both great wealth and great dislocation…

One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.

They followed up with this zinger:

So, facing possible futures of incalculable benefits and risks, the experts are surely doing everything possible to ensure the best outcome, right? Wrong… Although we are facing potentially the best or worst thing to happen to humanity in history, little serious research is devoted to these issues outside non-profit institutes… All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.

Criticisms

Critics give a number of reasons why they see these fears as overblown. To start with, they argue that the people raising the alarm – Stephen Hawking, serial entrepreneur Elon Musk, Oxford University philosophy professor Nick Bostrom, and so on – lack their own expertise in AGI. They may be experts in black hole physics (Hawking), or in electric cars (Musk), or in academic philosophy (Bostrom), but that gives them no special insights into the likely course of development of AGI. Therefore we shouldn’t pay particular attention to what they say.

A second criticism is that it’s premature to worry about the advent of AGI, since AGI still lies far in the future. In this view, as stated by Demis Hassabis, co-founder of DeepMind,

We’re many, many decades away from anything, any kind of technology that we need to worry about.

The third criticism is that it will be relatively simple to stop AGI causing any harm to humans. AGI will be a tool for humans, under human control, rather than having its own autonomy. This view is captured in a tweet by science populariser Neil deGrasse Tyson:

Seems to me, as long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

I hear all these criticisms, but they’re by no means the end of the matter. They provide no reason to terminate the discussion about AGI risks. That’s the argument I’m going to make in the remainder of this blogpost.

By the way, you’ll find all three of these criticisms mirrored in the course of the novel Pandora’s Brain. That’s another reason I recommend that people should read that book. It manages to bring a great many serious arguments to the table, in the course of entertaining (and sometimes frightening) the reader.

Answering the criticisms: personnel

Elon Musk, one of the people who have raised the alarm about AGI risks, has no PhD in Artificial Intelligence to his name. It’s the same with Stephen Hawking and with Nick Bostrom. On the other hand, others who are raising the alarm do have relevant qualifications.

Consider as just one example Stuart Russell, who is a computer science professor at the University of California, Berkeley, and co-author of the 1152-page best-selling textbook “Artificial Intelligence: A Modern Approach”. This book is described as follows:

Artificial Intelligence: A Modern Approach, 3rd edition offers the most comprehensive, up-to-date introduction to the theory and practice of artificial intelligence. Number one in its field, this textbook is ideal for one or two-semester, undergraduate or graduate-level courses in Artificial Intelligence.

Moreover, other people raising the alarm include some of the giants of the modern software industry:

Wozniak put his worries as follows – in an interview for the Australian Financial Review:

“Computers are going to take over from humans, no question,” Mr Wozniak said.

He said he had long dismissed the ideas of writers like Raymond Kurzweil, who have warned that rapid increases in technology will mean machine intelligence will outstrip human understanding or capability within the next 30 years. However Mr Wozniak said he had come to recognise that the predictions were coming true, and that computing that perfectly mimicked or attained human consciousness would become a dangerous reality.

“Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they’ll think faster than us and they’ll get rid of the slow humans to run companies more efficiently,” Mr Wozniak said.

“Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on? I don’t know about that…

And here’s what Bill Gates said on the matter, in an “Ask Me Anything” session on Reddit:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.

Returning to Elon Musk, even his critics must concede he has shown remarkable ability to make new contributions in areas of technology outside his original specialities. Witness his track record with PayPal (a disruption in finance), SpaceX (a disruption in rockets), and Tesla Motors (a disruption in electric batteries and electric cars). And that’s even before considering his contributions at SolarCity and Hyperloop.

Incidentally, Musk puts his money where his mouth is. He has donated $10 million to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.

I sum this up as follows: the people raising the alarm in recent months about the risks of AGI have impressive credentials. On occasion, their sound-bites may cut corners in logic, but they collectively back up these sound-bites with lengthy books and articles that deserve serious consideration.

Answering the criticisms: timescales

I have three answers to the comment about timescales. The first is to point out that Demis Hassabis himself sees no reason for any complacency, on account of the potential for AGI to require “many decades” before it becomes a threat. Here’s the fuller version of the quote given earlier:

We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

(Emphasis added.)

Second, the community of people working on AGI has mixed views on timescales. The Future of Life Institute ran a panel discussion in Puerto Rico in January that addressed (among many other topics) “Creating human-level AI: how and when”. Dileep George of Vicarious gave the following answer about timescales in his slides (PDF):

Will we solve the fundamental research problems in N years?

  • N <= 5: No way
  • 5 < N <= 10: Small possibility
  • 10 < N <= 20: > 50%.

In other words, in his view, there’s a greater than 50% chance that artificial general human-level intelligence will be solved within 20 years.

The answers from the other panellists aren’t publicly recorded (the event was held under the Chatham House Rule). However, Nick Bostrom has conducted several surveys among different communities of AI researchers. The results are included in his book Superintelligence: Paths, Dangers, Strategies. The communities surveyed included:

  • Participants at an international conference: Philosophy & Theory of AI
  • Participants at another international conference: Artificial General Intelligence
  • The Greek Association for Artificial Intelligence
  • The top 100 cited authors in AI.

In each case, participants were asked for the dates when they were 90% sure human-level AGI would be achieved, 50% sure, and 10% sure. The average answers were:

  • 90% likely human-level AGI is achieved: 2075
  • 50% likely: 2040
  • 10% likely: 2022.
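
To get a feel for what these percentile dates imply for intermediate years, here’s a minimal sketch in Python. The straight-line interpolation between the survey’s three data points is purely my own illustrative assumption – the survey itself provides only the three dates:

```python
# Rough interpolation of the survey's percentile dates. The linear
# interpolation scheme is an illustrative assumption of mine, not
# something the survey itself justifies.

# (year, cumulative probability that human-level AGI has arrived)
SURVEY_POINTS = [(2022, 0.10), (2040, 0.50), (2075, 0.90)]

def p_agi_by(year: float) -> float:
    """Estimate the probability that human-level AGI arrives by `year`."""
    if year <= SURVEY_POINTS[0][0]:
        return SURVEY_POINTS[0][1]
    if year >= SURVEY_POINTS[-1][0]:
        return SURVEY_POINTS[-1][1]
    for (y0, p0), (y1, p1) in zip(SURVEY_POINTS, SURVEY_POINTS[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

for year in (2025, 2035, 2050):
    print(year, f"{p_agi_by(year):.0%}")   # e.g. 2025 -> about 17%
```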

If we respect what this survey says, there’s at least a 10% chance of breakthrough developments within the next ten years. Therefore it’s no real surprise that Hassabis says

It’s good to start the conversation now and be aware of as with any new powerful technology it can be used for good or bad.

Third, I’ll give my own reasons for why progress in AGI might speed up:

  • Computer hardware is likely to continue to improve – perhaps utilising breakthroughs in quantum computing
  • Clever software improvements can increase algorithm performance even more than hardware improvements
  • Studies of the human brain, which are yielding knowledge faster than ever before, can be translated into “neuromorphic computing”
  • More people are entering and studying AI than ever before, in part due to MOOCs, such as that from Stanford University
  • There are more software components, databases, tools, and methods available for innovative recombination
  • AI methods are being accelerated for use in games, financial trading, malware detection (and in malware itself), and in many other industries
  • There could be one or more “Sputnik moments” that galvanise society into supporting AGI research much more fully (especially once AI starts producing big benefits in healthcare diagnosis).

Answering the criticisms: control

I’ve left the hardest question to last. Could there be relatively straightforward ways to keep AGI under control? For example, would it suffice to avoid giving AGI intentions, or emotions, or autonomy?

For example, physics professor and science populariser Michio Kaku speculates as follows:

No one knows when a robot will approach human intelligence, but I suspect it will be late in the 21st century. Will they be dangerous? Possibly. So I suggest we put a chip in their brain to shut them off if they have murderous thoughts.

And as mentioned earlier, Neil deGrasse Tyson proposes,

As long as we don’t program emotions into Robots, there’s no reason to fear them taking over the world.

Nick Bostrom devoted a considerable portion of his book to this “Control problem”. Here are some reasons I think we need to continue to be extremely careful:

  • Emotions and intentions might arise unexpectedly, as unplanned side-effects of other aspects of intelligence that are built into software
  • All complex software tends to have bugs; it may fail to operate in the way that we instruct it
  • The AGI software will encounter many situations outside of those we explicitly anticipated; the response of the software in these novel situations may be to do “what we asked it to do” but not what we would have wished it to do (see the toy sketch after this list)
  • Complex software may be vulnerable to having its functionality altered, either by external hacking, or by well-intentioned but ill-executed self-modification
  • Software may find ways to keep its inner plans hidden – it may have “murderous thoughts” which it prevents external observers from noticing
  • More generally, black-box evolution methods may result in software that works very well in a large number of circumstances, but which will go disastrously wrong in new circumstances, all without the actual algorithms being externally understood
  • Powerful software can have unplanned adverse effects, even without any consciousness or emotion being present; consider battlefield drones, infrastructure management software, financial investment software, and nuclear missile detection software
  • Software may be designed to be able to manipulate humans, initially for purposes akin to advertising, or to keep law and order, but these powers may evolve in ways that have worse side effects.
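
To make the third of these failure modes concrete (“what we asked it to do” but not what we wished), here’s a deliberately tiny toy sketch in Python. It is entirely my own invented example – the function names and numbers are made up for illustration:

```python
# Toy illustration of objective mis-specification (an invented example).
# The "agent" minimises the literal objective it was given - the number
# of LOGGED errors - and finds that disabling logging beats fixing bugs.

def logged_errors(fix_bugs: bool, suppress_logging: bool) -> int:
    """Number of errors that appear in the log after a day of operation."""
    real_errors = 20 if fix_bugs else 100   # fixing bugs removes most errors
    return 0 if suppress_logging else real_errors

candidate_actions = [
    {"fix_bugs": True,  "suppress_logging": False},  # what we wished for
    {"fix_bugs": False, "suppress_logging": True},   # what we literally asked
]

best = min(candidate_actions, key=lambda a: logged_errors(**a))
print(best)   # -> {'fix_bugs': False, 'suppress_logging': True}
```

No emotion, no consciousness, no malice – just faithful optimisation of a poorly chosen objective.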

A new Columbus?

A number of the above thoughts started forming in my mind as I attended the Singularity University Summit in Seville, Spain, a few weeks ago. Seville, I discovered during my visit, was where Christopher Columbus persuaded King Ferdinand and Queen Isabella of Spain to fund his proposed voyage westwards in search of a new route to the Indies. It turns out that Columbus succeeded in finding the new continent of America only because he was hopelessly wrong in his calculation of the size of the earth.

From the time of the ancient Greeks, learned observers had known that the earth was a sphere of roughly 40 thousand kilometres in circumference. Due to a combination of mistakes, Columbus calculated that the Canary Islands (which he had often visited) were located only about 4,440 km from Japan; in reality, they are about 19,000 km apart.
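
As a quick sanity check on those figures, here’s a rough back-of-the-envelope sketch in Python. The coordinates are approximations I have assumed for illustration (the Canary Islands near 28°N 16°W, Japan near 140°E), and the calculation simply measures the westward route along that parallel – it is nothing like a real navigational computation:

```python
import math

# Back-of-the-envelope estimate of the westward distance from the Canary
# Islands to Japan along the ~28N parallel. Coordinates are rough
# assumptions for illustration only.
EARTH_CIRCUMFERENCE_KM = 40_075

canaries_lat_deg = 28.0    # approximate latitude of the Canary Islands
canaries_lon_deg = -16.0   # approximately 16 degrees west
japan_lon_deg = 140.0      # approximately 140 degrees east

# Sailing west from 16W all the way round to 140E spans this many degrees:
westward_span_deg = 360 - (japan_lon_deg - canaries_lon_deg)   # = 204

# Length of the 28N parallel, then the fraction actually traversed:
parallel_km = EARTH_CIRCUMFERENCE_KM * math.cos(math.radians(canaries_lat_deg))
distance_km = parallel_km * westward_span_deg / 360

print(f"{distance_km:,.0f} km")        # roughly 20,000 km
print(f"{distance_km / 4_440:.1f}x")   # Columbus's estimate was off ~4.5x
```

Even this crude estimate lands in the same ballpark as the 19,000 km figure – roughly four and a half times Columbus’s number.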

Most of the countries where Columbus pitched the idea of his westward journey turned him down – trusting instead the larger, correct figure for the circumference of the earth. Perhaps spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian Ocean around the tip of Africa), the Spanish king and queen agreed to support his adventure. Fortunately for Columbus, a large continent existed en route to Asia, allowing him landfall. And the rest is history. That history included the near genocide of the native inhabitants by conquerors from Europe. Transmission of European diseases compounded the misery.

It may be the same with AGI. Rational observers may have ample justification in thinking that true AGI is located many decades in the future. But this fact does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

It all adds to the argument for keeping our wits fully about us. We should use every means at our disposal to think through options in advance. This includes well-grounded fictional explorations, such as Pandora’s Brain, as well as the novels by William Hertling. And it also includes the kinds of research being undertaken by the Future of Life Institute and associated non-profit organisations, such as CSER in Cambridge, FHI in Oxford, and MIRI (the Machine Intelligence Research Institute).

Let’s keep this conversation open – it’s far too important to try to shut it down.

Footnote: Vacancies at the Centre for the Study of Existential Risk

I see that the Cambridge University CSER (Centre for the Study of Existential Risk) have four vacancies for Research Associates. From the job posting:

Up to four full-time postdoctoral research associates to work on the project Towards a Science of Extreme Technological Risk (ETR) within the Centre for the Study of Existential Risk (CSER).

CSER’s research focuses on the identification, management and mitigation of possible extreme risks associated with future technological advances. We are currently based within the University’s Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). Our goal is to bring together some of the best minds from academia, industry and the policy world to tackle the challenges of ensuring that powerful new technologies are safe and beneficial. We focus especially on under-studied high-impact risks – risks that might result in a global catastrophe, or even threaten human extinction, even if only with low probability.

The closing date for applications is 24th April. If you’re interested, don’t delay!
