
26 February 2023

Ostriches and AGI risks: four transformations needed

Filed under: AGI, risks, Singularity, Singularity Principles — David Wood @ 12:48 am

I confess to having been pretty despondent at various times over the last few days.

The context: increased discussions on social media triggered by recent claims about AGI risk – such as I covered in my previous blogpost.

The cause of my despondency: I’ve seen far too many examples of people with scant knowledge expressing themselves with unwarranted pride and self-certainty.

I call these people the AGI ostriches.

It’s impossible for AGI to exist, one of these ostriches squealed. The probability that AGI can exist is zero.

Anyone concerned about AGI risks, another opined, fails to understand anything about AI, and has just got their ideas from Hollywood or 1950s science fiction.

Yet another claimed: Anything that AGI does in the world will be the inscrutable cosmic will of the universe, so we humans shouldn’t try to change its direction.

Just keep your hand by the off switch, thundered another. Any misbehaving AGI can easily be shut down. Problem solved! You didn’t think of that, did you?

Don’t give the robots any legs, shrieked yet another. Problem solved! You didn’t think of that, did you? You fool!

It’s not the ignorance that depressed me. It was the lack of interest shown by the AGI ostriches regarding alternative possibilities.

I had tried to engage some of the ostriches in conversation. Try looking at things this way, I urged. Not interested, came the answer. Discussions on social media never change any minds, so I’m not going to reply to you.

Click on this link to read a helpful analysis, I suggested. No need, came the answer. Nothing you have written could possibly be relevant.

And the ostriches rejoiced in their wilful blinkeredness. There’s no need to look in that direction, they said. Keep wearing the blindfolds!

(Image: generated by the Midjourney AI.)

But my purpose in writing this blogpost isn’t to complain about individual ostriches.

Nor is my purpose to lament the near-fatal flaws in human nature, including our many cognitive biases, our emotional self-sabotage, and our perverse ideological loyalties.

Instead, my remarks will proceed in a different direction. What most needs to change isn’t the ostriches.

It’s the community of people who want to raise awareness of the catastrophic risks of AGI.

That includes me.

On reflection, we’re doing four things wrong. Four transformations are needed, urgently.

Without these changes taking place, it won’t be surprising if the ostriches continue to behave so perversely.

(1) Stop tolerating the Singularity Shadow

When they briefly take off their blindfolds and take a quick peek at the discussions about AGI, ostriches often notice claims that are, in fact, unwarranted.

These claims confuse matters. They are overconfident claims about what can be expected from the advent of AGI, also known as the Technological Singularity. These claims form part of what I call the Singularity Shadow.

There are seven components in the Singularity Shadow:

  • Singularity timescale determinism
  • Singularity outcome determinism
  • Singularity hyping
  • Singularity risk complacency
  • Singularity term overloading
  • Singularity anti-regulation fundamentalism
  • Singularity preoccupation

If you’ve not come across the concept before, here’s a video all about it:

Or you can read this chapter from The Singularity Principles on the concept: “The Singularity Shadow”.

People who (like me) point out the dangers of badly designed AGI often too easily make alliances with people in the Singularity Shadow. After all, both groups of people:

  • Believe that AGI is possible
  • Believe that AGI might happen soon
  • Believe that AGI is likely to cause an unprecedented transformation in the human condition.

But the Singularity Shadow causes far too much trouble. It is time to stop being tolerant of its various confusions, wishful thinking, and distortions.

To be clear, I’m not criticising the concept of the Singularity. Far from it. Indeed, I consider myself a singularitarian, with the meaning I explain here. I look forward to more and more people similarly adopting this same stance.

It’s the distortions of that stance that now need to be countered. We must put our own house in order. Sharply.

Otherwise the ostriches will continue to be confused.

(2) Clarify the credible risk pathways

The AI paperclip maximiser has had its day. It needs to be retired.

Likewise the cancer-solving AI that solves cancer by, perversely, killing everyone on the planet.

Likewise the AI that “rescues” a woman from a burning building by hurling her out of the 20th floor window.

In the past, these thought experiments all helped the discussion about AGI risks, among people who were able to see the connections between these “abstract” examples and more complicated real-world scenarios.

But as more of the general public shows an interest in the possibilities of advanced AI, we urgently need a better set of examples. Explained, not by mathematics, nor by cartoonish simplifications, but in plain everyday language.

I’ve tried to offer some examples, for example in the section “Examples of dangers with uncontrollable AI” in the chapter “The AI Control Problem” of my book The Singularity Principles.

But it seems these scenarios still fail to convince. The ostriches find themselves bemused. Oh, that wouldn’t happen, they say.

So this needs more work. As soon as possible.

I anticipate starting from themes about which even the most empty-headed ostrich occasionally worries:

  1. The prospects of an arms race involving lethal autonomous weapons systems
  2. The risks from malware that runs beyond the control of the people who originally released it
  3. The dangers of geoengineering systems that seek to manipulate the global climate
  4. The “gain of function” research which can create ultra-dangerous pathogens
  5. The side-effects of massive corporations which give priority to incentives such as “increase click-through”
  6. The escalation in hatred stirred up by automated trolls with more ingenious “fake social media”

On top of these starting points, the scenarios I envision mix in AI systems with increasing power and increasing autonomy – AI systems which are, however, incompletely understood by the people who deploy them, and which might manifest terrible bugs in unexpected circumstances. (After all, AIs include software, and software generally contains bugs.)
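
To make theme 5 in the list above concrete, here is a minimal toy sketch of the underlying failure mode, sometimes called Goodhart’s law: an optimiser that can only see a proxy metric (clicks) keeps pushing long after the true goal (user wellbeing) starts to suffer. All the numbers and functions are invented for illustration; this is not a model of any real system.

    # Toy illustration: hill-climbing on a proxy metric parts company with the true goal.
    def true_goal(sensationalism: float) -> float:
        # Wellbeing rises with mild engagement, then falls as content gets extreme.
        return sensationalism * (2.0 - sensationalism)

    def proxy_metric(sensationalism: float) -> float:
        # Click-through, in this toy world, just keeps rising.
        return sensationalism

    level = 0.0
    for _ in range(30):  # naive hill-climbing on the only signal the optimiser sees
        if proxy_metric(level + 0.1) > proxy_metric(level):
            level += 0.1

    print(f"chosen sensationalism:  {level:.1f}")                # pushed to the limit
    print(f"proxy (clicks):         {proxy_metric(level):.2f}")  # maximised
    print(f"true goal (wellbeing):  {true_goal(level):.2f}")     # driven negative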

If there’s not already a prize competition to encourage clearer communication of such risk scenarios, in ways that uphold credibility as well as comprehensibility, there should be!

(3) Clarify credible solution pathways

Even more important than clarifying the AGI risk scenarios is to clarify some credible pathways to managing these risks.

Without seeing such solutions, ostriches fall into a self-reinforcing internal loop of negative thoughts. They think to themselves as follows:

  • Any possible solution to AGI risks seems unlikely to be successful
  • Any possible solution to AGI risks seems likely to have bad consequences in its own right
  • These thoughts are too horrible to contemplate
  • Therefore we had better believe the AGI risks aren’t actually real
  • Therefore anyone who makes AGI risks seem real needs to be silenced, ridiculed, or mocked.

Just as we need better communication of AGI risk scenarios, we need better communication of positive examples that are relevant to potential solutions:

  • Examples of when society collaborated to overcome huge problems which initially seemed impossible
  • Successful actions against the tolerance of drunk drivers, against dangerous features in car design, against the industrial pollutants which caused acid rain, and against the chemicals which depleted the ozone layer
  • Successful actions by governments to limit the powers of corporate monopolies
  • The de-escalation by Ronald Reagan and Mikhail Gorbachev of the terrifying nuclear arms race between the USA and the USSR.

But we also need to make it clearer how AGI risks can be addressed in practice. This includes a better understanding of:

  • Options for AIs that are explainable and interpretable – with the aid of trusted tools built from narrow AI
  • How AI systems can be designed to be free from the unexpected “emergence” of new properties or subgoals
  • How trusted monitoring can be built into key parts of our infrastructure, to provide early warnings of potential AI-induced catastrophic failures (a minimal sketch follows this list)
  • How powerful simulation environments can be created to explore potential catastrophic AI failure modes (and solutions to these issues) in the safety of a virtual model
  • How international agreements can be built up, initially from a “coalition of the willing”, to impose powerful penalties in cases when AI is developed or deployed in ways that violate agreed standards
  • How research into AGI safety can be managed much more effectively, worldwide, than is presently the case.
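
As promised above, here is a deliberately narrow sketch of the trusted-monitoring idea: a small, auditable watchdog that compares a deployed system’s telemetry against an agreed operating envelope and raises early warnings. The metrics and thresholds are invented purely for illustration.

    # Illustrative watchdog: alert when telemetry leaves an agreed envelope.
    from dataclasses import dataclass

    @dataclass
    class Envelope:
        metric: str
        lower: float
        upper: float

    # Assumed example envelopes - real deployments would negotiate their own.
    ENVELOPES = [
        Envelope("power_draw_kw", 0.0, 50.0),
        Envelope("outbound_connections_per_min", 0.0, 200.0),
        Envelope("self_modification_events", 0.0, 0.0),
    ]

    def check(readings: dict) -> list:
        """Return alerts for readings that are missing or outside their envelope."""
        alerts = []
        for env in ENVELOPES:
            value = readings.get(env.metric)
            if value is None:
                alerts.append(f"MISSING: no reading for {env.metric}")
            elif not (env.lower <= value <= env.upper):
                alerts.append(f"ALERT: {env.metric}={value} outside [{env.lower}, {env.upper}]")
        return alerts

    print(check({"power_draw_kw": 48.0,
                 "outbound_connections_per_min": 950.0,
                 "self_modification_events": 1.0}))

The point of keeping the watchdog narrow is that it can itself be verified and audited with today’s tools, unlike the system it watches.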

Again, as needed, significant prizes should be established to accelerate breakthroughs in all these areas.

(4) Divide and conquer

The final transformation needed is to divide up the overall huge problem of AGI safety into more manageable chunks.

What I’ve covered above already suggests a number of vitally important sub-projects.

Specifically, it is surely worth having separate teams tasked with investigating, with the utmost seriousness, a range of potential solutions for the complications that advanced AI brings to each of the following:

  1. The prospects of an arms race involving lethal autonomous weapons systems
  2. The risks from malware that runs beyond the control of the people who originally released it
  3. The dangers of geoengineering systems that seek to manipulate the global climate
  4. The “gain of function” research which can create ultra-dangerous pathogens
  5. The side-effects of massive corporations which give priority to incentives such as “increase click-through”
  6. The escalation in hatred stirred up by automated trolls with more ingenious “fake social media”

(Yes, these are the same six scenarios for catastrophic AI risk that I listed in section (2) earlier.)

Rather than trying to “boil the entire AGI ocean”, these projects each appear to require slightly less boiling.

Once candidate solutions have been developed for one or more of these risk scenarios, the outputs from the different teams can be compared with each other.

What else should be added to the lists above?

8 June 2022

Pre-publication review: The Singularity Principles

Filed under: books, Singularity, Singularity Principles — David Wood @ 9:23 am

I’ve recently been concentrating on finalising the content of my forthcoming new book, The Singularity Principles.

The reasons why I see this book as both timely and necessary are explained in the extract, below, taken from the introduction to the book.

This link provides pointers to the full text of every chapter in the book. (Or use the links in the listing below of the extended table of contents.)

Please get in touch with me if you would prefer to read the pre-publication text in PDF format, rather than on the online HTML pages linked above.

At this stage, I will greatly appreciate any feedback:

  • Aspects of the book that I should consider changing
  • Aspects of the book that you particularly like.

Feedback on any parts of the book will be welcome. It’s by no means necessary for you to read the entire text. (However, I hope you will find it sufficiently interesting that you will end up reading more than you originally planned…)

By the way, it’s a relatively short book, compared to some others I’ve written. The word count is a bit over 50 thousand words. That works out at around 260 pages of fairly large text on 5″x8″ paper.

I will also appreciate any commendations or endorsements, which I can include with the publicity material for the book, to encourage more people to pay attention to it.

The timescale I have in mind: I will release electronic and physical copies of the book some time early next month (July), followed up soon afterward by an audio version.

Therefore, if you’re thinking of dipping into any chapters to provide feedback and/or endorsements, the sooner the better!

Thanks in anticipation!

Preface

This book is dedicated to what may be the most important concept in human history, namely, the Singularity – what it is, what it is not, the steps by which we may reach it, and, crucially, how to make it more likely that we’ll experience a positive singularity rather than a negative singularity.

For now, here’s a simple definition. The Singularity is the emergence of Artificial General Intelligence (AGI), and the associated transformation of the human condition. Spoiler alert: that transformation will be profound. But if we’re not paying attention, it’s likely to be profoundly bad.

Despite the importance of the concept of the Singularity, the subject receives nothing like the attention it deserves. When it is discussed, it often receives scorn or ridicule. Alas, you’ll hear sniggers and see eyes rolling.

That’s because, as I’ll explain, there’s a kind of shadow around the concept – an unhelpful set of distortions that make it harder for people to fully perceive the real opportunities and the real risks that the Singularity brings.

These distortions grow out of a wider confusion – confusion about the complex interplay of forces that are leading society to the adoption of ever-more powerful technologies, including ever-more powerful AI.

It’s my task in this book to dispel the confusion, to untangle the distortions, to highlight practical steps forward, and to attract much more serious attention to the Singularity. The future of humanity is at stake.

Let’s start with the confusion.

Confusion, turbulence, and peril

The 2020s could be called the Decade of Confusion. Never before has so much information washed over everyone, leaving us, all too often, overwhelmed, intimidated, and distracted. Former certainties have dimmed. Long-established alliances have fragmented. Flurries of excitement have pivoted quickly to chaos and disappointment. These are turbulent times.

However, if we could see through the confusion, distraction, and intimidation, what we should notice is that human flourishing is, potentially, poised to soar to unprecedented levels. Fast-changing technologies are on the point of providing a string of remarkable benefits. We are near the threshold of radical improvements to health, nutrition, security, creativity, collaboration, intelligence, awareness, and enlightenment – with these improvements being available to everyone.

Alas, these same fast-changing technologies also threaten multiple sorts of disaster. These technologies are two-edged swords. Unless we wield them with great skill, they are likely to spin out of control. If we remain overwhelmed, intimidated, and distracted, our prospects are poor. Accordingly, these are perilous times.

These dual future possibilities – technology-enabled sustainable superabundance, versus technology-induced catastrophe – have featured in numerous discussions that I have chaired at London Futurists meetups going all the way back to March 2008.

As these discussions have progressed, year by year, I have gradually formulated and refined what I now call the Singularity Principles. These principles are intended:

  • To steer humanity’s relationships with fast-changing technologies,
  • To manage multiple risks of disaster,
  • To enable the attainment of remarkable benefits,
  • And, thereby, to help humanity approach a profoundly positive singularity.

In short, the Singularity Principles are intended to counter today’s widespread confusion, distraction, and intimidation, by providing clarity, credible grounds for hope, and an urgent call to action.

This time it’s different

I first introduced the Singularity Principles, under that name and with the same general format, in the final chapter, “Singularity”, of my 2021 book Vital Foresight: The Case for Active Transhumanism. That chapter is the culmination of a 642-page book. The preceding sixteen chapters of that book set out at some length the challenges and opportunities that these principles need to address.

Since the publication of Vital Foresight, it has become evident to me that the Singularity Principles require a short, focused book of their own. That’s what you now hold in your hands.

The Singularity Principles is by no means the only new book on the subject of the management of powerful disruptive technologies. The public, thankfully, are waking up to the need to understand these technologies better, and numerous authors are responding to that need. As one example, the phrase “Artificial Intelligence” forms part of the title of scores of new books.

I have personally learned many things from some of these recent books. However, to speak frankly, I find myself dissatisfied by the prescriptions these authors have advanced. These authors generally fail to appreciate the full extent of the threats and opportunities ahead. And even if they do see the true scale of these issues, the recommendations these authors propose strike me as being inadequate.

Therefore, I cannot keep silent.

Accordingly, I present in this new book the content of the Singularity Principles, brought up to date in the light of recent debates and new insights. The book also covers:

  • Why the Singularity Principles are sorely needed
  • The source and design of these principles
  • The significance of the term “Singularity”
  • Why there is so much unhelpful confusion about “the Singularity”
  • What’s different about the Singularity Principles, compared to recommendations of other analysts
  • The kinds of outcomes expected if these principles are followed
  • The kinds of outcomes expected if these principles are not followed
  • How you – dear reader – can, and should, become involved, finding your place in a growing coalition
  • How these principles are likely to evolve further
  • How these principles can be put into practice, all around the world – with the help of people like you.

The scope of the Principles

To start with, the Singularity Principles can and should be applied to the anticipation and management of the NBIC technologies that are at the heart of the current, fourth industrial revolution. NBIC – nanotech, biotech, infotech, and cognotech – is a quartet of interlinked technological disruptions which are likely to grow significantly stronger as the 2020s unfold. Each of these four technological disruptions has the potential to fundamentally transform large parts of the human experience.

However, the same set of principles can and should also be applied to the anticipation and management of the core technology that will likely give rise to a fifth industrial revolution, namely the technology of AGI (artificial general intelligence), and the rapid additional improvements in artificial superintelligence that will likely follow fast on the heels of AGI.

The emergence of AGI is known as the technological singularity – or, more briefly, as the Singularity.

In other words, the Singularity Principles apply both:

  • To the longer-term lead-up to the Singularity, from today’s fast-improving NBIC technologies,
  • And to the shorter-term lead-up to the Singularity, as AI gains more general capabilities.

In both cases, anticipation and management of possible outcomes will be of vital importance.

By the way – in case it’s not already clear – please don’t expect a clever novel piece of technology, or some brilliant technical design, to somehow solve, by itself, the challenges posed by NBIC technologies and AGI. These challenges extend far beyond what could be wrestled into submission by some dazzling mathematical wizardry, by the incorporation of an ingenious new piece of silicon at the heart of every computer, or by any other “quick fix”. Indeed, the considerable effort being invested by some organisations in a search for that kind of fix is, arguably, a distraction from a sober assessment of the bigger picture.

Better technology, better product design, better mathematics, and better hardware can all be part of the full solution. But that full solution also needs, critically, to include aspects of organisational design, economic incentives, legal frameworks, and political oversight. That’s the argument I develop in the chapters ahead.

Extended table of contents

For your convenience, here’s a listing of the main section headings for all the chapters in this book.

0. Preface

  • Confusion, turbulence, and peril
  • This time it’s different
  • The scope of the Principles
  • Collective insight
  • The short form of the Principles
  • The four areas covered by the Principles
  • What lies ahead

1. Background: Ten essential observations

  • Tech breakthroughs are unpredictable (both timing and impact)
  • Potential complex interactions make prediction even harder
  • Changes in human attributes complicate tech changes
  • Greater tech power enables more devastating results
  • Different perspectives assess “good” vs. “bad” differently
  • Competition can be hazardous as well as beneficial
  • Some tech failures would be too drastic to allow recovery
  • A history of good results is no guarantee of future success
  • It’s insufficient to rely on good intentions
  • Wishful thinking predisposes blindness to problems

2. Fast-changing technologies: risks and benefits

  • Technology risk factors
  • Prioritising benefits?
  • What about ethics?
  • The transhumanist stance

2.1 Special complications with artificial intelligence

  • Problems with training data
  • The black box nature of AI
  • Interactions between multiple algorithms
  • Self-improving AI
  • Devious AI
  • Four catastrophic error modes
  • The broader perspective

2.2 The AI Control Problem

  • The gorilla problem
  • Examples of dangers with uncontrollable AI
  • Proposed solutions (which don’t work)
  • The impossibility of full verification
  • Emotion misses the point
  • No off switch
  • The ineffectiveness of tripwires
  • Escaping from confinement
  • The ineffectiveness of restrictions
  • No automatic super ethics
  • Issues with hard-wiring ethical principles

2.3 The AI Alignment Problem

  • Asimov’s Three Laws
  • Ethical dilemmas and trade-offs
  • Problems with proxies
  • The gaming of proxies
  • Simple examples of profound problems
  • Humans disagree
  • No automatic super ethics (again)
  • Other options for answers?

2.4 No easy solutions

  • No guarantees from the free market
  • No guarantees from cosmic destiny
  • Planet B?
  • Humans merging with AI?
  • Approaching the Singularity

3. What is the Singularity?

  • Breaking down the definition
  • Four alternative definitions
  • Four possible routes to the Singularity
  • The Singularity and AI self-awareness
  • Singularity timescales
  • Positive and negative singularities
  • Tripwires and canary signals
  • Moving forward

3.1 The Singularitarian Stance

  • AGI is possible
  • AGI could happen within just a few decades
  • Winner takes all
  • The difficulty of controlling AGI
  • Superintelligence and superethics
  • Not the Terminator
  • Opposition to the Singularitarian Stance

3.2 A complication: the Singularity Shadow

  • Singularity timescale determinism
  • Singularity outcome determinism
  • Singularity hyping
  • Singularity risk complacency
  • Singularity term overloading
  • Singularity anti-regulation fundamentalism
  • Singularity preoccupation
  • Looking forward

3.3 Bad reasons to deny the Singularity

  • The denial of death
  • How special is the human mind?
  • A credible positive vision

4. The question of urgency

  • Factors causing AI to improve
  • 15 options on the table
  • The difficulty of measuring progress
  • Learning from Christopher Columbus
  • The possibility of fast take-off

5. The Singularity Principles in depth

5.1 Analysing goals and potential outcomes

  • Question desirability
  • Clarify externalities
  • Require peer reviews
  • Involve multiple perspectives
  • Analyse the whole system
  • Anticipate fat tails

5.2 Desirable characteristics of tech solutions

  • Reject opacity
  • Promote resilience
  • Promote verifiability
  • Promote auditability
  • Clarify risks to users
  • Clarify trade-offs

5.3 Ensuring development takes place responsibly

  • Insist on accountability
  • Penalise disinformation
  • Design for cooperation
  • Analyse via simulations
  • Maintain human oversight

5.4 Evolution and enforcement

  • Build consensus regarding principles
  • Provide incentives to address omissions
  • Halt development if principles are not upheld
  • Consolidate progress via legal frameworks

6. Key success factors

  • Public understanding
  • Persistent urgency
  • Reliable action against noncompliance
  • Public funding
  • International support
  • A sense of inclusion and collaboration

7. Questions arising

7.1 Measuring human flourishing

  • Some example trade-offs
  • Updating the Universal Declaration of Human Rights
  • Constructing an Index of Human and Social Flourishing

7.2 Trustable monitoring

  • Moore’s Law of Mad Scientists
  • Four projects to reduce the dangers of WMDs
  • Detecting mavericks
  • Examples of trustable monitoring
  • Watching the watchers

7.3 Uplifting politics

  • Uplifting regulators
  • The central role of politics
  • Toward superdemocracy
  • Technology improving politics
  • Transcending party politics
  • The prospects for political progress

7.4 Uplifting education

  • Top level areas of the Vital Syllabus
  • Improving the Vital Syllabus

7.5 To AGI or not AGI?

  • Global action against the creation of AGI?
  • Possible alternatives to AGI?
  • A dividing line between AI and AGI?
  • A practical proposal

7.6 Measuring progress toward AGI

  • Aggregating expert opinions
  • Metaculus predictions
  • Alternative canary signals for AGI
  • AI index reports

7.7 Growing a coalition of the willing

  • Risks and actions

Image credit

The draft book cover shown above includes a design by Pixabay member Ebenezer42.

7 February 2022

Options for controlling artificial superintelligence

What are the best options for controlling artificial superintelligence?

Should we confine it in some kind of box (or simulation), to prevent it from roaming freely over the Internet?

Should we hard-wire into its programming a deep respect for humanity?

Should we prevent it from having any sense of agency or ambition?

Should we ensure that, before it takes any action, it always double-checks its plans with human overseers?

Should we create dedicated “narrow” intelligence monitoring systems, to keep a vigilant eye on it?

Should we build in a self-destruct mechanism, just in case it stops responding to human requests?

Should we insist that it shares its greater intelligence with its human overseers (in effect turning them into cyborgs), to avoid humanity being left behind?

More drastically, should we simply prevent any such systems from coming into existence, by forbidding any research that could lead to artificial superintelligence?

Alternatively, should we give up on any attempt at control, and trust that the superintelligence will be thoughtful enough to always “do the right thing”?

Or is there a better solution?

If you have clear views on this question, I’d like to hear from you.

I’m looking for speakers for a forthcoming London Futurists online webinar dedicated to this topic.

I envision three speakers each taking up to 15 minutes to set out their proposals. Once all the proposals are on the table, the real discussion will begin – with the speakers interacting with each other, and responding to questions raised by the live audience.

The date for this event remains to be determined. I will find a date that is suitable for the speakers who have the most interesting ideas to present.

As I said, please get in touch if you have questions or suggestions about this event.

Image credit: the above graphic includes work by Pixabay user Geralt.

PS For some background, here’s a video recording of the London Futurists event from last Saturday, in which Roman Yampolskiy gave several reasons why control of artificial superintelligence will be deeply difficult.

For other useful background material, see the videos on the Singularity page of the Vital Syllabus project.

20 July 2018

Christopher Columbus and the surprising future of AI

Filed under: AGI, predictability, Singularity — David Wood @ 5:49 pm

There are plenty of critics who are sceptical about the future of AI. The topic has been over-hyped, say these critics. According to these critics, we don’t need to be worried about the longer-term repercussions of AI with superhuman capabilities. We’re many decades – perhaps centuries – from anything approaching AGI (artificial general intelligence) with skills in common sense reasoning matching (or surpassing) that of humans. As for AI destroying jobs, that, too, is a false alarm – or so the critics insist. AI will create at least as many jobs as it destroys.

In my previous blog post, Serious question over PwC’s report on the impact of AI on jobs, I offered some counters to these critics. To my mind, this is no time for complacency: AI could accelerate in its capabilities, and take us by surprise. The kinds of breakthroughs that, in a previous era, might have been expected to take many decades, could actually take place in just a few short years. Rather than burying our heads in the sand, denying the possibility of any such acceleration, we need to pay more attention to the trends of technological change and the potential for disruptive new innovations.

The Christopher Columbus angle

Overnight, I’ve been reminded of an argument that I’ve used previously – towards the end of a rather long blogpost. It’s the argument that critics of the future of AI are similar to the critics of Christopher Columbus – the people who said, before his 1492 voyage across the Atlantic in search of a westerly route to Asia, that the effort was bound to be a bad investment.

Bear with me while I retell this analogy.

For years, Columbus tried to drum up support for what most people considered to be a hare-brained scheme. Most observers concluded that Columbus had fallen victim to a significant mistake – he estimated that the distance from the Canary Islands (off the coast of Morocco) to Japan was around 3,700 km, whereas the generally accepted figure was closer to 20,000 km. Indeed, the true size of the sphere of the Earth had been known since the 3rd century BC, due to a calculation by Eratosthenes, based on observations of shadows at different locations.

Accordingly, when Columbus presented his bold proposal to courts around Europe, the learned members of the courts time and again rejected the idea. The effort would be hugely larger than Columbus supposed, they said. It would be a fruitless endeavour.

Columbus, an autodidact, wasn’t completely crazy. He had done a lot of his own research. However, he was misled by a number of factors:

  • Confusion between various ancient units of distance (the “Arabic mile” and the “Roman mile”)
  • How many degrees of longitude the Eurasian landmass occupied (225 degrees versus 150 degrees)
  • A speculative 1474 map, by the Florentine astronomer Toscanelli, which showed a mythical island “Antilla” located to the east of Japan (named as “Cippangu” in the map).

You can read the details in the Wikipedia article on Columbus, which provides numerous additional reference points. The article also contains a copy of Toscanelli’s map, with the true location of the continents of North and South America superimposed for reference.
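
To see how the first of those factors compounded, here is a rough back-of-envelope calculation. Columbus is generally said to have relied on Alfraganus’s figure of 56⅔ miles per degree; the kilometre equivalents below are assumptions on my part – modern estimates of the ancient units vary – but the overall shape of the error is clear enough.

    # Back-of-envelope arithmetic for Columbus's unit confusion (illustrative values).
    ARABIC_MILE_KM = 1.83        # assumed modern estimate of the Arabic mile
    ROMAN_MILE_KM = 1.48         # assumed modern estimate of the Roman mile
    MILES_PER_DEGREE = 56 + 2/3  # Alfraganus's figure for one degree

    # Alfraganus meant Arabic miles - not far from the true 40,075 km circumference.
    print(f"In Arabic miles: {360 * MILES_PER_DEGREE * ARABIC_MILE_KM:,.0f} km")
    # Columbus read the same number as Roman miles, shrinking the globe by ~20%.
    print(f"In Roman miles:  {360 * MILES_PER_DEGREE * ROMAN_MILE_KM:,.0f} km")

Shrink every degree by a fifth, stretch Eurasia across 225 degrees of longitude, and add a mythical island as a staging post, and a crossing of roughly 3,700 km no longer looked absurd.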

No wonder Columbus thought his plan might work after all. Nevertheless, the 1490s equivalents of today’s VCs kept saying “No” to his pitches. Finally, spurred on by competition with the neighbouring Portuguese (who had, just a few years previously, successfully navigated to the Indian ocean around the tip of Africa), the Spanish king and queen agreed to take the risk of supporting his adventure. After stopping in the Canaries to restock, the Nina, the Pinta, and the Santa Maria set off westward. Five weeks later, the crew spotted land, in what we now call the Bahamas. And the rest is history.

But it wasn’t the history expected by Columbus, or by his backers, or by his critics. No-one had foreseen that a huge continent existed in the oceans in between Europe and Japan. None of the ancient writers – either secular or religious – had spoken of such a continent. Nevertheless, once Columbus had found it, the history of the world proceeded in a very different direction – including mass deaths from infectious diseases transmitted from the European sailors, genocide and cultural apocalypse, and enormous trade in both goods and slaves. In due course, it would be the ingenuity and initiative of people subsequently resident in the Americas that propelled humans beyond the Earth’s atmosphere all the way to the moon.

What does this have to do with the future of AI?

Rational critics may have ample justification in thinking that true AGI is located many decades in the future. But that does not deter a multitude of modern-day AGI explorers from setting out, Columbus-like, in search of some dramatic breakthroughs. And who knows what intermediate forms of AI might be discovered, unexpectedly?

Just as the contemporaries of Columbus erred in presuming they already knew all the large features of the earth’s continents (after all: if America really existed, surely God would have written about it in the Bible…), modern-day critics of AI can err in presuming they already know all the large features of the landscape of possible artificial minds.

When contemplating the space of all possible minds, some humility is in order. We cannot foretell in advance what configurations of intelligence are possible. We don’t know what may happen, if separate modules of reasoning are combined in innovative ways. After all, there are many aspects of the human mind which are still poorly understood.

When critics say that it is unlikely that present-day AI mechanisms will take us all the way to AGI, they are very likely correct. But it would be a horrendous error to draw the conclusion that meaningful new continents of AI capability are inevitably still the equivalent of 20,000 km into the distance. The fact is, we simply don’t know. And for that reason, we should keep an open mind.

One day soon, indeed, we might read news of some new “AUI” having been discovered – some Artificial Unexpected Intelligence, which changes history. It won’t be AGI, but it could have all kinds of unexpected consequences.

Beyond the Columbus analogy

Every analogy has its drawbacks. Here are three ways in which the discovery of an AUI could be different from the discovery by Columbus of America:

  1. In the 1490s, there was only one Christopher Columbus. Nowadays, there are scores (perhaps hundreds) of schemes underway to try to devise new models of AI. Many of these are proceeding with significant financial backing.
  2. Whereas the journey across the Atlantic (and, eventually, the Pacific) could be measured by a single variable (latitude), the journey across the vast multidimensional landscape of artificial minds is much less predictable. That’s another reason to keep an open mind.
  3. Discovering an AUI could drastically transform the future of exploration in the landscape of artificial minds. Assisted by AUI, we might get to AGI much quicker than without it. Indeed, in some scenarios, it might take only a few months after we reach AUI for us (now going much faster than before) to reach AGI. Or days. Or hours.

Footnote

If you’re in or near Birmingham on 11th September, I’ll be giving a Funzing talk on how to assess the nature of the risks and opportunities from superhuman AI. For more details, see here.

 

30 January 2014

A brilliant example of communication about science and humanity

(Image: the book cover of Our Mathematical Universe.)

Do you enjoy great detective puzzles? Do you like noticing small anomalies, and turning them into clues to an unexpected explanation? Do you like watching world-class scientists at work, piecing together insights to create new theories, and coping with disappointments when their theories appear to be disproved?

In the book “Our Mathematical Universe”, the mysteries being addressed are some of the very biggest imaginable:

  • What is everything made out of?
  • Where does the universe come from? For example, what made the Big Bang go “bang”?
  • What gives science its authority to speak with so much confidence about matters such as the age and size of the universe?
  • Is it true that the constants of nature appear remarkably “fine-tuned” so as to allow the emergence of life – in a way suggesting a miracle?
  • What does modern physics (including quantum mechanics) have to teach us about mind and consciousness?
  • What are the chances of other intelligent life existing in our galaxy (or even elsewhere in our universe)?
  • What lies in the future of the human race?

The author, Max Tegmark, is a Swedish-born professor of physics at MIT. He’s made a host of significant contributions to the development of cosmology – some of which you can read about in the book. But in his book, he also shows himself in my view to be a first class philosopher and a first class communicator.

Indeed, this may be the best book on the philosophy of physics that I have ever read. It also has important implications for the future of humanity.

There are some very big ideas in the book. It gives reasons for believing that our universe exists alongside no fewer than four different types of parallel universes. The “level 4 multiverse” is probably one of the grandest conceptions in all of philosophy. (What’s more, I’m inclined to think it’s the correct description of reality. At its heart, despite its grandness, it’s actually a very simple theory, which is a big plus in its favour.)

Much of the time, the writing in the book is accessible to people with pre-university level knowledge of science. On occasion, the going gets harder, but readers should be able to skip over these sections. I recommend reading the book all the way through, since the last chapter contains many profound ideas.

I think you’ll like this book if:

  • You have a fondness for pure mathematics
  • You recognise that the scientific explanation of phenomena can be every bit as uplifting as pre-scientific supernatural explanations
  • You are ready to marvel at the ingenuity of scientific investigators going all the way back to the ancient Greeks (including those who first measured the distance from the Earth to the Sun)
  • You are critical of “quantum woo woo” hand-waving that says that quantum mechanics proves that consciousness is somehow a non-local agent (and that minds will survive bodily death)
  • You want to find out more about Hugh Everett, the physicist who first proposed that “the quantum wave function never collapses”
  • You have a hunch that there’s a good answer to the question “why is there something rather than nothing?”
  • You want to see scientists in action, when they are confronted by evidence that their favoured theories are disproved by experiment
  • You’re ready to laugh at the misadventures that a modern cosmologist experiences (including eminent professors falling asleep in the audience of his lectures)
  • You’re interested in the considered viewpoint of a leading scientist about matters of human existential risk, including nuclear wars and the technological singularity.

Even more than all these good reasons, I highlight this book as an example of what the world badly needs: clear, engaging advocacy of the methods of science and reason, as opposed to mysticism and obscurantism.

Footnote: For my own views about the meaning of quantum mechanics, see my earlier blogpost “Schrödinger’s Rabbits”.

22 February 2013

Controversies over singularitarian utopianism

I shouldn’t have been surprised at the controversy that arose.

The cause was an hour-long lecture with 55 slides, ranging far and wide across disruptive near-future scenarios, covering both upside and downside. The basic format of the lecture was: first the good news, and then the bad news. As stated on the opening slide,

Some illustrations of the enormous potential first, then some examples of how adding a high level of ambient stupidity might mean we might make a mess of it.

The speaker was Ian Pearson, described on his company website as “futurologist, conference speaker, regular media guest, strategist and writer”. The website continues, boldly,

Anyone can predict stuff, but only a few get it right…

Ian Pearson has been a full time futurologist since 1991, with a proven track record of over 85% accuracy at the 10 year horizon.

Ian was speaking, on my invitation, at the London Futurists last Saturday. His chosen topic was audacious in scope:

A Singularitarian Utopia Or A New Dark Age?

We’re all familiar with the idea of the singularity, the end-result of rapid acceleration of technology development caused by positive feedback. This will add greatly to human capability, not just via gadgets but also through direct body and mind enhancement, and we’ll mess a lot with other organisms and AIs too. So we’ll have superhumans and super AIs as part of our society.

But this new technology won’t bring a utopia. We all know that some powerful people, governments, companies and terrorists will also add lots of bad things to the mix. The same technology that lets you enhance your senses or expand your mind also allows greatly increased surveillance and control, eventually to the extremes of direct indoctrination and zombification. Taking the forces that already exist, of tribalism, political correctness, secrecy for them and exposure for us, and so on, it’s clear that the far future will be a weird mixture of fantastic capability, spoiled by abuse…

There were around 200 people in the audience, listening as Ian progressed through a series of increasingly mind-stretching technology opportunities. Judging by the comments posted online afterwards, some of the audience deeply appreciated what they heard:

Thank you for a terrific two hours, I have gone away full of ideas; I found the talk extremely interesting indeed…

I really enjoyed this provocative presentation…

Provocative and stimulating…

Very interesting. Thank you for organizing it!…

Amazing and fascinating!…

But not everyone was satisfied. Here’s an extract from one negative comment:

After the first half (a trippy sub-SciFi brainstorm session) my only question was, “What Are You On?”…

Another audience member wrote his own blogpost about the meeting:

A Singularitanian Utopia or a wasted afternoon?

…it was a warmed-over mish-mash of technological cornucopianism, seasoned with Daily Mail-style reactionary harrumphing about ‘political correctness gone mad’.

These are just the starters of negative feedback; I’ll get to others shortly. As I review what was said in the meeting, and look at the spirited ongoing exchange of comments online, some thoughts come to my mind:

  • Big ideas almost inevitably provoke big reactions; this talk had a lot of particularly big ideas
  • In some cases, the negative reactions to the talk arise from misunderstandings, due in part to so much material being covered in the presentation
  • In other cases, I see the criticisms as reactions to the seeming over-confidence of the speaker (“…a proven track record of over 85% accuracy”)
  • In yet other cases, I share the negative reactions the talk generated; my own view of the near-future landscape significantly differs from the one presented on stage
  • In nearly all cases, it’s worth taking the time to progress the discussion further
  • After all, if we get our forecasts of the future wrong, and fail to make adequate preparations for the disruptions ahead, it could make a huge difference to our collective well-being.

So let’s look again at some of the adverse reactions. My aim is to raise them in a way that people who didn’t attend the talk should be able to follow the analysis.

(1) Is imminent transformation of much of human life a realistic scenario? Or are these ideas just science fiction?

The main driver for belief in the possible imminent transformation of human life, enabled by rapidly changing technology, is the observation of progress towards “NBIC” convergence.

Significant improvements are taking place, almost daily, in our capabilities to understand and control atoms (Nano-tech), genes and other areas of life-sciences (Bio-tech), bits (Info-comms-tech), and neurons and other areas of mind (Cogno-tech). Importantly, improvements in these different fields are interacting with each other.

As Ian Pearson described the interactions:

  • Nanotech gives us tiny devices
  • Tiny sensors help neuroscience figure out how the mind works
  • Insights from neuroscience feed into machine intelligence
  • Improving machine intelligence accelerates R&D in every field
  • Biotech and IT advances make body and machine connectable

Will all the individual possible applications of NBIC convergence described by Ian happen in precisely the way he illustrated? Very probably not. The future’s not as predictable as that. But something similar could well happen:

  • Cheaper forms of energy
  • Tissue-cultured meat
  • Space exploration
  • Further miniaturisation of personal computing (wearable computing, and even “active skin”)
  • Smart glasses
  • Augmented reality displays
  • Gel computing
  • IQ and sensory enhancement
  • Dream linking
  • Human-machine convergence
  • Digital immortality: “the under 40s might live forever… but which body would you choose?”

(2) Is a focus on smart cosmetic technology an indulgent distraction from pressing environmental issues?

Here’s one of the comments raised online after the talk:

Unfortunately any respect due was undermined by his contempt for the massive environmental challenges we face.

Trivial contact lens / jewellery technology can hang itself, if our countryside is choked by yoghurt factory fumes.

The reference to jewellery took issue with remarks in the talk such as the following:

Miniaturisation will bring everyday IT down to jewellery size…

Decoration; Social status; Digital bubble; Tribal signalling…

In contrast, the talk positioned greater use of technology as the solution to environmental issues, rather than as something to exacerbate these issues. Smaller (jewellery-sized) devices, created with a greater attention to recyclability, will diminish the environmental footprint. Ian claimed that:

  • We can produce more of everything than people need
  • Improved global land management could feed up to 20 billion people
  • Clean water will be plentiful
  • We will also need less and waste less
  • Long term pollution will decline.

Nevertheless, he acknowledged that there are some short-term problems, ahead of the time when accelerating NBIC convergence can be expected to provide more comprehensive solutions:

  • Energy shortage is a short to mid term problem
  • Real problems are short term.

Where there’s room for real debate is the extent of these shorter-term problems. Discussion on the threats from global warming brought these disagreements into sharp focus.

(3) How should singularitarians regard the threat from global warming?

Towards the end of his talk, Ian showed a pair of scales, weighing up the wins and losses of NBIC technologies and a potential singularity.

The “wins” column included health, growth, wealth, fun, and empowerment.

The “losses” column included control, surveillance, oppression, directionless, and terrorism.

One of the first questions from the floor, during the Q&A period in the meeting, asked why the risk of environmental destruction was not on the list of possible future scenarios. This criticism was echoed by online comments:

The complacency about CO2 going into the atmosphere was scary…

If we risk heading towards an environmental abyss let’s do something about what we do know – fossil fuel burning.

During his talk, I picked up on one of Ian’s comments about not being particularly concerned about the risks of global warming. I asked, what about the risks of adverse positive feedback cycles, such as increasing temperatures triggering the release of vast ancient stores of methane gas from frozen tundra, accelerating the warming cycle further? That could lead to temperature increases that are much more rapid than presently contemplated, along with lots of savage disturbance (storms, droughts, etc).

Ian countered that it was a possibility, but he had the following reservations:

  • He thought these positive feedback loops would only kick into action when baseline temperature rose by around 2 degrees
  • In the meantime, global average temperatures have stopped rising, over the last eleven years
  • He estimates he spends a couple of hours every day, keeping an eye on all sides of the global warming debate
  • There are lots of exaggerations and poor science on both sides of the debate
  • Other factors such as the influence of solar cycles deserve more research.

Here’s my own reaction to these claims:

  • The view that global average temperatures have stopped rising is, among serious scientists, very much a minority position; see e.g. this rebuttal on Carbon Brief
  • Even if there’s only a small probability of a runaway spurt of accelerated global warming in the next 10-15 years, we need to treat that risk very seriously – in the same way that, for example, we would be loath to take a transatlantic flight if we were told there was a 5% chance of the airplane disintegrating mid-flight.
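
Returning to the feedback-cycle mechanism I had raised: a toy calculation makes the point about timescales. The coefficients below are invented – this is nothing like a real climate model – but it shows how a feedback that only switches on above a threshold can dramatically compress the time to a given level of warming.

    # Toy dynamics only: steady warming plus a feedback that activates above a threshold.
    def years_to_reach(target: float, feedback_gain: float,
                       threshold: float = 2.0, trend: float = 0.03) -> int:
        """Years until `target` degrees of warming, given `trend` per year and an
        extra feedback term once warming exceeds `threshold`."""
        temp, year = 0.0, 0
        while temp < target and year < 1000:
            temp += trend
            if temp > threshold:
                temp += feedback_gain * (temp - threshold)  # e.g. methane release
            year += 1
        return year

    print(years_to_reach(4.0, feedback_gain=0.0))  # no feedback: ~134 years
    print(years_to_reach(4.0, feedback_gain=0.2))  # with feedback: ~81 years

In this toy world, the journey from 2 degrees to 4 degrees shrinks from about 67 years to about 14 – which is why even a modest probability of such dynamics deserves serious attention.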

Nevertheless, I did not want the entire meeting to divert into a debate about global warming – “that deserves a full meeting in its own right”, I commented, before moving on to the next question. In retrospect, perhaps that was a mistake, since it may have caused some members of the audience to mentally disengage from the meeting.

(4) Are there distinct right-wing and left-wing approaches to the singularity?

Here’s another comment that was raised online after the talk:

I found the second half of the talk to be very disappointing and very right-wing.

And another:

Someone who lists ‘race equality’ as part of the trend towards ignorance has shown very clearly what wing he is on…

In the second half of his talk, Ian outlined changes in norms of beliefs and values. He talked about the growth of “religion substitutes” via a “random walk of values”:

  • Religious texts used to act as a fixed reference for ethical values
  • Secular society has no fixed reference point so values oscillate quickly.
  • 20 years can yield 180 degree shift
  • e.g. euthanasia, sexuality, abortion, animal rights, genetic modification, nuclear energy, family, policing, teaching, authority…
  • Pressure to conform reinforces relativism at the expense of intellectual rigour

A complicating factor here, Ian stated, was that

People have a strong need to feel they are ‘good’. Some of today’s ideological subscriptions are essentially secular substitutes for religion, and demand same suspension of free thinking and logical reasoning.

A few slides later, he listed examples of “the rise of nonsense beliefs”:

e.g. new age, alternative medicine, alternative science, 21st century piety, political correctness

He also commented that “99% are only well-informed on trivia”, such as fashion, celebrity, TV culture, sport, games, and chat virtual environments.

This analysis culminated with a slide that resonated strongly with me personally: a curve of “anti-knowledge” accelerating and overtaking a curve of “knowledge”:

In pursuit of social compliance, we are told to believe things that are known to be false.

With clever enough spin, people accept them and become worse than ignorant.

So there’s a kind of race between “knowledge” and “anti-knowledge”.

One reason this resonated with me is that it seemed like a different angle on one of my own favourite metaphors for the challenges of the next 15-30 years – the metaphor of a dramatic race:

  • One runner in the race is “increasing rationality, innovation, and collaboration”; if this runner wins, the race ends in a positive singularity
  • The other runner in the race is “increasing complexity, rapidly diminishing resources”; if this runner wins, the race ends in a negative singularity.

In the light of Ian’s analysis, I can see that the second runner is aided by the increase of anti-knowledge: over-attachment to magical, simplistic, ultimately misleading worldviews.

However, it’s one thing to agree that “anti-knowledge” is a significant factor in determining the future; it’s another thing to agree which sets of ideas count as knowledge, and which as anti-knowledge! One of Ian’s slides included the following list of “religion substitutes”:

Animal rights, political correctness, pacifism, vegetarianism, fitness, warmism, environmentalism, anti-capitalism

It’s no wonder that many of the audience felt offended. Why list “warmism” (a belief in human-caused global warming), but not “denialism” (denial of human-caused global warming)? Why list “anti-capitalism” but not “free market fundamentalism”? Why list “pacifism” but not “militarism”?

One online comment made a shrewd observation:

Ian raised my curiosity about ‘false beliefs’ (or nonsense beliefs as Ian calls them) as I ‘believe’ we all inhabit different belief systems – so what is true for one person may be false for another… at that exact moment in time.

And things can change. Once upon a time, it was a nonsense belief that the world was round.

There may be 15% of truth in some nonsense beliefs…or possibly even 85% truth. Taking ‘alternative medicine’ as an example of one of Ian’s nonsense beliefs – what if two of the many reasons it was considered nonsense were that (1) it is outside the world (the system) of science and technology and (2) it cannot be controlled by the pharmaceutical companies (perhaps our high priests of today)?

(5) The role of corporations and politicians in the approach to the singularity

One place where the right-wing / left-wing division becomes more acute is the question of whether anything special needs to be done to control the behaviour of corporations (businesses).

One of Ian’s strong positive recommendations, at the end of his presentation, was that scientists and engineers should become more actively involved in educating the general public about issues of technology. Shortly afterward, the question came from the floor: what about actions to educate or control corporations? Ian replied that he had very little to recommend to corporations, over and above his recommendations to the individuals within these corporations.

My own view is different. From my life inside industry, I’ve seen numerous cases of good people who are significantly constrained in their actions by the company systems and metrics in which they find themselves enmeshed.

Indeed, just as people should be alarmed about the prospects of super-AIs gaining too much power, over and above the humans who created them, we should also be alarmed about the powers that super-corporations are accumulating, over and above the powers and intentions of their employees.

The argument to leave corporations alone finds its roots in ideologies of freedom: government regulation of corporations often has undesirable side-effects. Nevertheless, that’s just an argument for being smarter and more effective in how the regulation works – not an argument to abstain from regulation altogether.

The question of the appropriate forms of collaborative governance remains one of the really hard issues facing anyone concerned about the future. Leaving corporations to find their own best solutions is, in my view, very unlikely to be the optimum approach.

In terms of how “laissez-faire” we should be, in the face of potential apocalypse down the road, I agree with the assessment near the end of Jeremy Green’s blogpost:

Pearson’s closing assertion that in the end our politicians will always wake up and pull us back from the brink of any disaster is belied by many examples of civilisations that did not pull back and went right over the edge to destruction.

Endnote:

After the presentation in Birkbeck College ended, around 40-50 of the audience regrouped in a nearby pub, to continue the discussion. The discussion is also continuing, at a different tempo, in the online pages of the London Futurists meetup. Ian Pearson deserves hearty congratulation for stirring up what has turned out to be an enlightening discussion – even though there’s heat in the comments as well as light!

Evidently, the discussion is far from complete…

20 February 2013

The world’s most eminent sociologist highlights the technological singularity

It’s not every day that the world’s most eminent sociologist reveals himself as having an intense interest in the Technological Singularity, and urges that “Everyone should read the books of Ray Kurzweil”. That’s what happened this evening.

The speaker in question was Lord Anthony Giddens, one of whose many claims to fame is his description as “Tony Blair’s guru”.

His biography states that, “According to Google Scholar, he is the most widely cited sociologist in the world today.”

In support of that claim, a 2009 article in the Times Higher Education supplement notes the following:

Giddens trumps Marx…

A list published today by Times Higher Education reveals the most-cited academic authors of books in the humanities…

As one of the world’s pre-eminent sociologists, Anthony Giddens, the Labour peer and former director of the London School of Economics, will be used to academic accolades.

But even he may be pleased to hear that his books are cited more often than those of iconic thinkers such as Sigmund Freud and Karl Marx.

Lord Giddens, now emeritus professor at LSE and a life fellow at King’s College, Cambridge, is the fifth most-referenced author of books in the humanities, according to the list produced by scientific data analysts Thomson Reuters.

The only living scholar ranked higher is Albert Bandura, the Canadian psychologist and pioneer of social learning theory at Stanford University…

Freud enters the list in 11th place. The American linguist and philosopher Noam Chomsky, who is based at the Massachusetts Institute of Technology and whose political books have a broader readership than some of his peers in the list, is 15th…

Lord Giddens is now 75 years old. Earlier this evening, I saw for myself evidence of his remarkable calibre. He gave an hour-long lecture in front of a packed audience at the London School of Economics, without any notes or slides, and without any hesitation, deviation, or verbal infelicity. Throughout, his remarks bristled with compelling ideas. He was equally competent – and equally fluent – when it came to the question-and-answer portion of the event.

LSE Events

The lecture was entitled “Off the edge of history: the world in the 21st century”. From its description on the LSE website, I had already identified it as relevant to many of the themes that I seek to have discussed in the series of London Futurists meetups that I chair:

The risks we face, and the opportunities we have, in the 21st century are in many respects quite different from those experienced in earlier periods of history. How should we analyse and respond to such a world? What is a rational balance of optimism and pessimism? How can we plan for a future that seems to elude our grasp and in some ways is imponderable?

As the lecture proceeded, I was very pleasantly impressed by the sequence of ideas. I append here a lightly edited copy of the verbatim notes I took on my Psion Series 5mx, supplemented by a few additions from the #LSEGiddens tweet stream. Added afterwards: the LSE has made a podcast available of the talk.

My rough notes from the talk follow… (text in italics is my parenthetical commentary)

This large lecture room is completely full, twenty minutes before the lecture is due to start. I’m glad I arrived early!

Today’s topic is work in progress – he’s writing a book on the same topic, “Off the edge of history”.

  • Note this is a very different thesis from “the end of history”.

His starting point is in the subject of geology – a long way from sociology. He's been working on climate change for the last seven years. It's the first time he has worked so closely with scientists.

Geologists tend to call the present age "the Holocene age" – the last 12,000 years. But the atmospheric chemist Paul Crutzen recommended that we should use a different term for the last 200 years or so – we're now in the Anthropocene age:

  • In this period, human activity strongly influences nature and the environment
  • This re-orients and restructures the world of geology
  • A great deal of what used to be natural, is natural no longer
  • Human beings are invading nature, in a way that has no precedent
  • Even some apparently natural catastrophes, like tsunamis and volcanoes, might be linked to impacts from humans.

We have continuities from previous history (of course), but so many things are different nowadays. One example is the impact of new forms of biological threat. Disease organisms have skipped from animals to human beings. New disease organisms are being synthesised.

There are threats facing us which are in no way extensions of previous threats.

For example, what is the Internet doing to the world? Is it a gigantic new mind? Are you using the mobile phone, or is the mobile phone using you? There’s no parallel from previous periods. Globally connected electronic communications are fundamentally different from what went before.

When you are dealing with risks you’ve never experienced before, you can’t measure them. You’ll only know for sure when it’s too late. We’re on the edge of history because we are dealing with risks we have never faced before.

Just as we are invading nature, we are invading human nature in a way that’s unprecedented.

Do you know about the Singularity? (A smattering of people in the audience raise their hands.) It’s mind-blowing. You should find out about it:

  • It’s based on a mathematical concept
  • It’s accelerating processes of growth, rapidly disappearing to a far off point very different from today.

Everyone should read the books of Ray Kurzweil – who has recently become an Engineering Director at Google.

Kurzweil’s book makes it clear that:

  • Within our lifetimes, human beings will no longer be human beings
  • There are multiple accelerating rates of change in several different disciplines
  • The three main disciplines contributing to the singularity are nanotech, AI, and biotech
  • All are transforming our understanding of the human body and, more importantly, the human mind
  • This is described by the “Law of accelerating returns”
  • Progress is not just linear but geometrical – a contrast made concrete in the short sketch below.
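
As a parenthetical aside of my own – not something from the lecture – here is a toy calculation that makes the linear-versus-geometric contrast vivid. The parameters (a 30-year horizon, capability doubling every 2 years) are arbitrary assumptions of mine, loosely in the spirit of Moore's Law, not figures from Kurzweil or Giddens:

    # Linear vs geometric (exponential) growth - a toy illustration.
    # The 2-year doubling time is an arbitrary assumption.

    def linear(t, rate=1.0):
        """Capability grows by a fixed amount per year."""
        return 1.0 + rate * t

    def geometric(t, doubling_years=2.0):
        """Capability doubles every `doubling_years` years."""
        return 2.0 ** (t / doubling_years)

    for year in (0, 10, 20, 30):
        print(f"year {year:2d}: linear {linear(year):5.1f}x, "
              f"geometric {geometric(year):8.1f}x")

    # year  0: linear   1.0x, geometric      1.0x
    # year 10: linear  11.0x, geometric     32.0x
    # year 20: linear  21.0x, geometric   1024.0x
    # year 30: linear  31.0x, geometric  32768.0x

After 30 years the linear curve has gained a factor of 31, while the geometric curve has gained a factor of more than 30,000 – the kind of gap that makes singularity claims so startling.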

This book opens our minds to multiple possibilities of what it means to be human, as technology penetrates us.

Nanotech is like humans playing God:

  • It’s a level below DNA
  • We can use it to rebuild many parts of the human body, and other artefacts in the world.

Kurzweil states that human beings will develop intelligence which is 100x higher than at present:

  • Because of merging of human bodies with computers
  • Because of the impact of nanotech.

Kurzweil gives this advice: if you are relatively young: live long, in order to live forever:

  • Immortality is no longer a religious concept, it’s now a tangible prospect
  • It could happen in the next 20-40 years.

This is a fantastic expansion of what it means to be human. Importantly, it brings a spread of opportunities and risks.

These were religious notions before. Now we have the real possibility of apocalypse – we’ve had it since the 1950s, when the first thermonuclear weapons were invented. The possibility of immortality has become real too.

We don’t know how to chart these possibilities. None of us know how to fill in that gap.

What science fiction writers were writing 20 years ago is now in the newspapers every day. Reading from the Guardian of a couple of days ago:

Paralysed people could get movement back through thought control

Brain implant could allow people to ‘feel’ the presence of infrared light and one day be used to move artificial limbs

Scientists have moved closer to allowing paralysed people to control artificial limbs with their thoughts following a breakthrough in technology…

…part of a series of sessions on advances in brain-machine interfaces, at which other scientists presented a bionic hand that could connect directly to the nerves in a person’s arm and provide sensory feedback of what they were holding.

Until now, neurological prosthetics have largely been demonstrated as a way to restore a loss of function. Last year, a 58-year-old woman who had become paralysed after a stroke demonstrated that she could use a robotic arm to bring a cup of coffee to her mouth and take a sip, just by thinking about it…

In the future…  it might be possible to use prosthetic devices to restore vision – for example, if a person’s visual cortex had been damaged – by training a different part of the brain to process the information.

Or you could even augment normal brain function in non-invasive ways to deliver the information.

We could learn to detect other sorts of signals that we normally don’t see or experience; the perceptual range could increase.

These things are real; these things are happening. There is a kind of geometric advance.

The social science literature has a big division here, between doomsday thinkers and optimists, with respected thinkers in both camps.

Sir Martin Rees is an example of the first category. He wrote a book called "Our final century":

  • It examines forms of risk that could destroy our society
  • Climate change is a huge existential risk – most people aren’t aware of it
  • Nanotech is another existential risk – grey goo scenario
  • We also have lots of weaponry: drones circulating above the world even as we speak
  • Most previous civilisations have ended in disaster – they subverted themselves
  • For the first time, we have a civilisation on a global scale
  • It could well be our final century.

Optimists include Matt Ridley, a scientist turned businessman, and author of the book "The rational optimist":

  • Over the course of human civilisation there is progress – including progress in culture, and medical advances.

This is a big division. How do we sort this out? His view: it’s not possible to decide. We need to recognise that we live in a “high opportunity, high risk society”:

  • The level of opportunity and level of risk are both much higher than before
  • But risk and opportunity always intertwine
  • “In every risk there’s an opportunity…” and vice versa
  • We must be aware of the twists and tangles of risk and opportunity – their interpenetration.

Studying this area has led him to change some of his views from before:

  • He now sees the goal of sustainability as a harder thing than before
  • Living within our limits makes sense, but we no longer know what our limits are
  • We have to respect limits, but also recognise that limits can be changed.

For example, could we regard a world population of 9 billion people as an opportunity, rather than just a risk?

  • It would lead us to put lots more focus on food innovation, blue sky tech for agriculture, social reform, etc – all good things.

A few points to help us sort things out:

  1. One must never evade risk – we live in a world subject to extreme systemic risk; we mustn't live in denial of risk in our personal lives (like denying the risks of smoking or riding motorcycles) or at a civilisational level
  2. We have to think about the future in a very different way, because the future has become opaque to us; the Enlightenment thought was that we would march in and make sense of history (Marx had similar thoughts), but it turns out that the future is actually opaque – for our personal lives as well as for society (he wonders whether the EU will still exist by the time he finishes his book on the future of the EU!)
  3. We’ll have to learn to backcast rather than forecast – to borrow an idea from the study of climate change. We have to think ahead, and then think back.

This project is the grand task of social sciences in the 21st century.

One more example: the possibility of re-shoring of jobs in the US and EU:

  • 3D printing is an unbelievable technological invention
  • 3D printers can already print shoes
  • A printer in an MIT lab can print whole systems – e.g. in due course a plane which will fly directly out of the computer
  • This will likely produce a revolution in manufacturing – many, many implications.

Final rhetorical question: As we confront this world, should we be pessimists or optimists? This is the same question he used to consider, at the end of the talks he used to give on climate change.

His answer: we should bracket out that opposition; it’s much more important to be rational than either pessimist or optimist:

  • Compare the case of someone with a very serious cancer – they need more than wishful thinking; they need a rational underpinning for their optimism and/or pessimism.

Resounding applause from the audience. Then commence questions and answers.

Q: Are today’s governance structures, at local and national levels, fit to deal with these issues?

A: No. For example, the European Union has proved not to be the vanguard of global governance that we hoped it would be. Climate change is another clear example: twenty years of UN meetings with no useful outcome whatsoever.

Q: Are our human cognitive powers capable to deal with these problems? Is there a role for technology to assist our cognitive powers?

A: Our human powers are facing a pretty difficult challenge. It's human nature to put off what we don't have to do today – like 16-year-olds taking up smoking who can't really see themselves being 40. Maybe a supermind might be more effective.

Q: Although he has given examples where current governance models are failing, are there any bright spots of hope for governance? (The questioner in this case was me.)

A: There are some hopeful signs for economic governance. Surely bankers will not get away with what they've done. There is a movement to address tax havens (an "onslaught") – bringing the money back as well as bringing the jobs back. This will require global co-operation. Nuclear proliferation (Iran, Israel) is as dangerous as climate change. The international community has done quite well with non-proliferation, but it only takes one nuclear war for things to go terribly wrong.

Q: What practical advice would he give to the Prime Minister (or to Ed Miliband)?

A: He supports Ed Miliband trying to restructure capitalism; there are similar moves happening in the US too. However, with global issues like these, any individual prime minister is limited in his influence. For better or for worse, Ray Kurzweil has more influence than any politician!

(Which is a remarkable thing to say, for someone who used to work so closely with Prime Minister Tony Blair…)

16 June 2012

Beyond future shock

Filed under: alienation, books, change, chaos, futurist, Humanity Plus, rejuveneering, robots, Singularity, UKH+ — David Wood @ 3:10 pm

They predicted the “electronic frontier” of the Internet, Prozac, YouTube, cloning, home-schooling, the self-induced paralysis of too many choices, instant celebrities, and the end of blue-collar manufacturing. Not bad for 1970.

That’s the summary, with the benefit of four decades of hindsight, given by Fast Company writer Greg Lindsay, of the forecasts made in the 1970 bestseller “Future Shock” by husband-and-wife authors Alvin and Heidi Toffler.

As Lindsay comments,

Published in 1970, Future Shock made its author Alvin Toffler – a former student radical, welder, newspaper reporter and Fortune editor – a household name. Written with his wife (and uncredited co-author), Heidi Toffler, the book was The World Is Flat of its day, selling 6 million copies and single-handedly inventing futurism…

“Future shock is the shattering stress and disorientation that we induce in individuals by subjecting them to too much change in too short a time”, the pair wrote.

And quoting Deborah Westphal, the managing partner of Toffler Associates, in an interview at an event marking the 40th anniversary of the publication of Future Shock, Lindsay notes the following:

In Future Shock, the Tofflers hammered home the point that technology, culture, and even life itself was evolving too fast for governments, policy-makers and regulators to keep up. Forty years on, that message hasn't changed. "The government needs to understand the dependencies and the convergence of networks through information," says Westphal. "And there still needs to be some studies done around rates of change and the synchronization of these systems. Business, government, and organizational structures need to be looked at and redone. We've built much of the world economy on an industrial model, and that model doesn't work in an information-centric society. That's probably the greatest challenge we still face – understanding the old rules don't apply for the future."

Earlier this week, another book was published that also draws on Future Shock for inspiration. Again, the authors are a husband-and-wife team, Parag and Ayesha Khanna. And again, the book looks set to redefine key aspects of the futurist endeavour.

This new book is entitled "Hybrid Reality: Thriving in the Emerging Human-Technology Civilization". The Khannas refer early on to the insights expressed by the Tofflers in Future Shock:

The Tofflers’ most fundamental insight was that the pace of change has become as important as the content of change… The term Future Shock was thus meant to capture our intense anxiety in the face of technology’s seeming ability to accelerate time. In this sense, technology’s true impact isn’t just physical or economic, but social and psychological as well.

One simple but important example follows:

Technologies such as mobile phones can make us feel empowered, but also make us vulnerable to new pathologies like nomophobia – the fear of being away from one’s mobile phone. Fifty-eight percent of millennials would rather give up their sense of smell than their mobile phone.

As befits the theme of speed, the book is a fast read. I downloaded it onto my Kindle on the day of its publication, and have already read it all the way through twice. It’s short, but condensed. The text contains many striking turns of phrase, loaded with several layers of meaning, which repay several rethinks. That’s the best kind of sound-bite.

Despite its short length, there are too many big themes in the book for me to properly summarise them here. The book portrays an optimistic vision, alongside a series of challenges and risks. As illustrations, let me pick out a selection of phrases, to convey some of the flavour:

The cross-pollination of leading-edge sectors such as information technology, biotechnology, pervasive computing, robotics, neuroscience, and nanotechnology spells the end of certain turf wars over nomenclature. It is neither the “Bio Age” nor the “Nano Age” nor the “Neuro Age”, but the hybrid of all of these at the same time…

Our own relationship to technology is moving beyond the instrumental to the existential. There is an accelerating centripetal dance between what technologies are doing outside us and inside us. Externally, technology no longer simply processes our instructions on a one-way street. Instead, it increasingly provides intelligent feedback. Internally, we are moving beyond using technology only to dominate nature towards making ourselves the template for technology, integrating technologies within ourselves physically. We don’t just use technology; we absorb it

The Hybrid Age is the transition period between the Information Age and the moment of Singularity (when machines surpass human intelligence) that inventor Ray Kurzweil estimates we may reach by 2040 (perhaps sooner). The Hybrid Age is a liminal phase in which we cross the threshold toward a new mode of arranging global society…

You may continue to live your life without understanding the implications of the still-distant Singularity, but you should not underestimate how quickly we are accelerating into the Hybrid Age – nor delay in managing this transition yourself

The dominant paradigm to explain global change in the Hybrid Age will be geotechnology. Technology's role in shaping and reshaping the prevailing order, and accelerating change between orders, forces us to rethink the intellectual hegemony of geopolitics and geoeconomics…

It is geotechnology that is the underlying driver of both: Mastery in the leading technology sectors of any era determines who leads in geoeconomics and dominates in geopolitics…

The shift towards a geotechnology paradigm forces us to jettison centuries of foundational assumptions of geopolitics. The first is our view on scale: “Bigger is better” is no longer necessarily true. Size can be as much a liability as an asset…

We live and die by our Technik, the capacity to harness emerging technologies to improve our circumstances…

We will increasingly differentiate societies on the basis not of their regime type or income, but of their capacity to harness technology. Societies that continuously upgrade their Technik will thrive…

Meeting the grand challenge of improving equity on a crowded planet requires spreading Technik more than it requires spreading democracy

And there’s lots more, applying the above themes to education, healthcare, “better than new” prosthetics, longevity and rejuvenation, 3D printing, digital currencies, personal entrepreneurship and workforce transformation, the diffusion of authority, the rise of smart cities and their empowered “city-zens”, augmented reality and enhanced personal avatars, robots and “avoiding robopocalypse”, and the prospect for a forthcoming “Pax Technologica”.

It makes me breathless just remembering all these themes – and how they time and again circle back on each other.

Footnote: Readers who are in the vicinity of London next Saturday (23rd June) are encouraged to attend the London Futurist / Humanity+ UK event “Hybrid Reality, with Ayesha Khanna”. Click on the links for more information.

3 June 2012

Super-technology and a possible renaissance of religion

Filed under: death, disruption, Humanity Plus, rejuveneering, religion, Singularity, UKH+ — David Wood @ 11:02 pm

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke

Imagine that the human race avoids self-destruction and continues on the path of increased mastery of technology. Imagine that, as seems credible some time in the future, humans will eventually gain the ability to keep everyone alive indefinitely, in an environment of great abundance, variety, and  intrinsic interest.

That paradise may be a fine outcome for our descendants, but unless the pace of technology improvement becomes remarkably rapid, it seems to have little direct impact on our own lives. Or does it?

It may depend on exactly how much power our god-like descendants eventually acquire.  For example, here are two of the points from a radical vision of the future known as the Ten cosmist convictions:

  • 5) We will develop spacetime engineering and scientific “future magic” much beyond our current understanding and imagination.
  • 6) Spacetime engineering and future magic will permit achieving, by scientific means, most of the promises of religions — and many amazing things that no human religion ever dreamed. Eventually we will be able to resurrect the dead by “copying them to the future”.

Whoa! “Resurrect the dead”, by “copying them to the future”. How might that work?

In part, by collecting enormous amounts of data about the past – reconstructing information from numerous sources. It's similar to collecting data about far-distant stars using a very large array of radio telescopes. And in part, by re-embodying that data in a new environment, similar to copying running software onto a new computer, giving it a new lease of life.

Lots of questions can be asked about the details:

  • Can sufficient data really be gathered in the future, in the face of all the degradation summed up as "the second law of thermodynamics", to allow a sufficiently high-fidelity version of me (or anyone else) to be re-created?
  • If a future super-human collected lots of data about me and managed to get an embodiment of that data running on some future super-computer, would that really amount to resurrecting me, as opposed to creating a copy of me?

I don’t think anyone can confident about answers to such questions. But it’s at least conceivable that remarkably advanced technology of the future may allow positive answers.

In other words, it’s at least conceivable that our descendants will have the god-like ability to recreate us in the future, giving us an unexpected prospect for immortality.

This makes sense of the remark by radical futurist and singularitarian Ray Kurzweil at the end of the film "Transcendent Man":

Does God exist? Well I would say, not yet

Other radical futurists quibble over the “not yet” caveat. In his recent essay “Yes, I am a believer“, Giulio Prisco takes the discussion one stage further:

Gods will exist in the future, and they may be able to affect their past — our present — by means of spacetime engineering. Probably other civilizations out there already attained God-like powers.

Giulio notes that even the celebrated critic of theism, Richard Dawkins, gives some support to this line of thinking. For example, here's an excerpt from a 2011 New York Times interview, in which Dawkins discusses an essay written by the theoretical physicist Freeman Dyson:

In one essay, Professor Dyson casts millions of speculative years into the future. Our galaxy is dying and humans have evolved into something like bolts of superpowerful intelligent and moral energy.

Doesn’t that description sound an awful lot like God?

“Certainly,” Professor Dawkins replies. “It’s highly plausible that in the universe there are God-like creatures.”

He raises his hand, just in case a reader thinks he’s gone around a religious bend. “It’s very important to understand that these Gods came into being by an explicable scientific progression of incremental evolution.”

Could they be immortal? The professor shrugs.

“Probably not.” He smiles and adds, “But I wouldn’t want to be too dogmatic about that.”

As Giulio points out, Dawkins develops a similar line of argument in part of his book “The God Delusion”:

Whether we ever get to know them or not, there are very probably alien civilizations that are superhuman, to the point of being god-like in ways that exceed anything a theologian could possibly imagine. Their technical achievements would seem as supernatural to us as ours would seem to a Dark Age peasant transported to the twenty-first century…

In what sense, then, would the most advanced SETI aliens not be gods? In what sense would they be superhuman but not supernatural? In a very important sense, which goes to the heart of this book. The crucial difference between gods and god-like extraterrestrials lies not in their properties but in their provenance. Entities that are complex enough to be intelligent are products of an evolutionary process. No matter how god-like they may seem when we encounter them, they didn’t start that way…

Giulio seems more interested in the properties than the provenance. The fact that these entities have god-like powers prompts him to proclaim “Yes, I am a believer“.  He gives another reason in support of that proclamation: In contrast to the views of so-called militant atheists, Giulio is “persuaded that religion can be a powerful and positive force”.

Giulio sees this “powerful and positive force” as applying to him personally as well as to groups in general:

“In my beliefs I find hope, happiness, meaning, the strength to get through the night, and a powerful sense of wonder at our future adventures out there in the universe, which gives me also the drive to try to be a better person here-and-now on this little planet and make it a little better for future generations”.

More controversially, Giulio has taken to describing himself (e.g. on his Facebook page) as a “Christian”. Referring back to his essay, and to the ensuing online discussion:

Religion can, and should, be based on mutual tolerance, love and compassion. Jesus said: “love thy neighbor as thyself,” and added: “let he who is without sin, cast the first stone”…

This is the important part of his teachings in my opinion. Christian theology is interesting, but I think it should be reformulated for our times…

Was Jesus the Son of God? I don’t think this is a central issue. He certainly was, in the sense that we all are, and he may have been one of those persons in tune with the universe, more in tune with the universe than the rest of us, able to glimpse at veiled realities beyond our senses.

I’ve known Giulio for several years, from various Humanity+ and Singularity meetings we’ve both attended – dating back to “Transvision 2006” in Helsinki. I respect him as a very capable thinker, and I take his views seriously. His recent “Yes, I am a believer” article has stirred up a hornets’ nest of online criticism.

Accordingly, I was very pleased that Giulio accepted my invitation to come to London to speak at a London Futurist / Humanity+ UK meeting on Saturday 14th July: "Transhumanist Religions 2.0: New Cosmist religion and spirituality for our boundless future (and our troubled present)". For all kinds of reasons, this discussion deserves a wider airing.

First, I share the view that religious sentiments can provide cohesion and energy to propel individuals and groups to undertake enormously difficult projects (such as the project to avoid the self-destruction of the human race, or any drastic decline in the quality of global civilisation).  The best analysis I’ve read of this point is in the book “Darwin’s Cathedral: Evolution, Religion, and the Nature of Society” by David Sloan Wilson.  As I’ve written previously:

This book has sweeping scope, but makes its case very well.  The case is that religion has in general survived inasmuch as it helped groups of people to achieve greater cohesion and thereby acquire greater fitness compared to other groups of people.  This kind of religion has practical effect, independent of whether or not its belief system corresponds to factual reality.  (It can hardly be denied that, in most cases, the belief system does not correspond to factual reality.)

The book has some great examples – from the religions in hunter-gatherer societies, which contain a powerful emphasis on sharing out scarce resources completely equitably, through examples of religions in more complex societies.  The chapter on John Calvin was eye-opening (describing how his belief system brought stability and prosperity to Geneva) – as were the sections on the comparative evolutionary successes of Judaism and early Christianity.  But perhaps the section on the Balinese water-irrigation religion is the most fascinating of the lot.

Of course, there are some other theories for why religion exists (and is so widespread), and this book gives credit to these theories in appropriate places.  However, this pro-group selection explanation has never before been set out so carefully and credibly, and I think it’s no longer possible to deny that it plays a key role.

The discussion makes it crystal clear why many religious groups tend to treat outsiders so badly (despite treating insiders so well).  It also provides a fascinating perspective on the whole topic of “forgiveness”.  Finally, the central theme of “group selection” is given a convincing defence.

But second, there’s no doubt that religion can fit blinkers over people’s thinking abilities, and prevent them from weighing up arguments dispassionately. Whenever people talk about the Singularity movement as having the shape of a religion – with Ray Kurzweil as a kind of infallible prophet – I shudder. But we needn’t lurch to that extreme. We should be able to maintain the discipline of rigorous independent thinking within a technologically-informed renaissance of positive religious sentiment.

Third, if the universe really does have beings with God-like powers, what attitude should we adopt towards these beings? Should we be seeking in some way to worship them, or placate them, or influence them? It depends on whether these beings are able to influence human history, here and now, or whether they are instead restricted (by raw facts of space and time that even God-like beings have to respect) to observing us and (possibly) copying us into the future.

Personally, my bet is on the latter. I'm not convinced by people who claim evidence to the contrary. And if these beings did have the ability to intervene in human history, but have failed to do so, that failure would be evidence of scant interest in widespread, intense human suffering – hardly the mark of super-beings.

In that case, the focus of our effort should remain squarely on building the right conditions for super-technology to benefit humanity as a whole (this is the project I call “Inner Humanity+“), rather than on somehow seeking to attract the future attention of these God-like beings. But no doubt others will have different views!

16 October 2011

Human regeneration – limbs and more

Filed under: healthcare, medicine, rejuveneering, risks, Singularity — David Wood @ 1:57 am

Out of the many interesting presentations on Day One of the 2011 Singularity Summit here in New York, the one that left me with the most to think about was “Regenerative Medicine: Possibilities and Potential” by Dr. Stephen Badylak.

Dr Badylak is deputy director of the McGowan Institute for Regenerative Medicine, and a Professor in the Department of Surgery at the University of Pittsburgh. In his talk at the Singularity Summit, he described some remarkable ways in which the human body can heal itself – provided we supply it with suitable "scaffolding" that triggers the healing.

One of the examples Dr Badylak discussed is also covered in a recent article in Discover Magazine, How Pig Guts Became the Next Bright Hope for Regenerating Human Limbs. The article deserves to be read all the way through. Here are some short extracts from the beginning:

When he first arrived in the trauma unit of San Antonio’s Brooke Army Medical Center in December 2004, Corporal Isaias Hernandez’s leg looked to him like something from KFC. “You know, like when you take a bite out of the drumstick down to the bone?” Hernandez recalls. The 19-year-old Marine, deployed in Iraq, had been trying to outfit his convoy truck with a makeshift entertainment system for a long road trip when the bomb exploded. The 12-inch TV he was clutching to his chest shielded his vital organs; his buddy carrying the DVDs wasn’t so lucky.

The doctors kept telling Hernandez he would be better off with an amputation. He would have more mobility with a prosthetic, less pain. When he refused, they took a piece of muscle from his back and sewed it into the hole in his thigh. He did all he could to make it work. He grunted and sweated his way through the agony of physical therapy with the same red-faced determination that got him through boot camp. He even sneaked out to the stairwell, something they said his body couldn’t handle, and dragged himself up the steps until his leg seized up and he collapsed.

Generally people never recovered from wounds like his. Flying debris had ripped off nearly 70 percent of Hernandez’s right thigh muscle, and he had lost half his leg strength. Remove enough of any muscle and you might as well lose the whole limb, the chances of regeneration are so remote. The body kicks into survival mode, pastes the wound over with scar tissue, and leaves you to limp along for life….

Hernandez recalled that one of his own doctors—Steven Wolf, then chief clinical researcher for the United States Army Institute of Surgical Research in Texas—had once mentioned some kind of experimental treatment that could “fertilize” a wound and help it heal. At the time, Hernandez had dismissed the therapy as too extreme. The muscle transplant sounded safer, easier. Now he changed his mind. He wanted his leg back, even if it meant signing himself up as a guinea pig for the U.S. Army.

So Hernandez tracked down Wolf, and in February 2008 the two got started. First, Wolf put Hernandez through another grueling course of physical therapy to make sure he had indeed pushed any new muscle growth to the limit. Then he cut open Hernandez’s thigh and inserted a paper-thin slice of the same material used to make the pixie dust: part of a pig’s bladder known as the extracellular matrix, or ECM, a fibrous substance that occupies the spaces between cells. Once thought to be a simple cellular shock absorber, ECM is now understood to contain powerful proteins that can reawaken the body’s latent ability to regenerate tissue.

A few months after the surgery healed, Wolf assigned the young soldier another course of punishing physical therapy. Soon something remarkable began to happen. Muscle that most scientists would describe as gone forever began to grow back. Hernandez’s muscle strength increased by 30 percent from what it was before the surgery, and then by 40 percent. It hit 80 percent after six months. Today it is at 103 percent—as strong as his other leg. Hernandez can do things that were impossible before, like ease gently into a chair instead of dropping into it, or kneel down, ride a bike, and climb stairs without collapsing, all without pain

The challenge now is replicating Hernandez’s success in other patients. The U.S. Department of Defense, which received a congressional windfall of $80 million to research regenerative medicine in 2008, is funding a team of scientists based at the University of Pittsburgh’s McGowan Institute for Regenerative Medicine to oversee an 80-patient study of ECM at five institutions. The scientists will attempt to use the material to regenerate the muscle of patients who have lost at least 40 percent of a particular muscle group, an amount so devastating to limb function that it often leads doctors to perform an amputation.

If the trials are successful, they could fundamentally change the way we treat patients with catastrophic limb injuries. Indeed, the treatment might someday allow patients to regrow missing or mangled body parts. With an estimated 1.7 million people in the United States alone missing limbs, promoters of regenerative medicine eagerly await the day when therapies like ECM work well enough to put the prosthetics industry out of business.

The interesting science is the explanation of the role of the ECM – the extracellular matrix, which provides the scaffolding that allows the healing to take place. The healing turns out to involve the body directing stem cells to the scaffolding. These stem cells then differentiate into muscle cells, nerve cells, blood cells, and so on. There’s also some interesting science to explain why the body doesn’t reject the ECM that’s inserted into it.

Badylak speaks with confidence of the treatment one day allowing the regeneration of damaged human limbs, akin to what happens with salamanders.  He also anticipates the healing of brain tissue damaged by strokes.

Later that morning, another speaker at the Singularity Summit, Michael Shermer, referred to Dr Badylak's presentation. Shermer is a well-known sceptic – indeed, he's the publisher of Skeptic magazine. Shermer often participates in public debates with believers in various religions and new-age causes. He mentioned that, at these debates, his scientific open-mindedness is sometimes challenged: "OK, if you are open-minded, as you claim, what evidence would make you believe in God?" Shermer typically gives the answer that, if someone with an amputated limb were to have that limb regrow, that would be reason for him to become a believer:

Most religious claims are testable, such as prayer positively influencing healing. In this case, controlled experiments to date show no difference between prayed-for and not-prayed-for patients. And beyond such controlled research, why does God only seem to heal illnesses that often go away on their own? What would compel me to believe would be something unequivocal, such as if an amputee grew a new limb. Amphibians can do it. Surely an omnipotent deity could do it. Many Iraqi War vets eagerly await divine action.

However, Shermer joked with the Singularity Summit audience, it now appears that Dr Badylak might be God.  The audience laughed.

But there’s a serious point at stake here. The Singularity Summit is full of talks about humans being on the point of gaining powers that, in previous ages, would have been viewed as Divine. With great power comes great responsibility. As veteran ecologist and environmentalist Stewart Brand wrote at the very start of his recent book “Whole Earth Discipline“,

We are as gods and HAVE to get good at it.

In the final talk of the day, cosmologist Professor Max Tegmark addressed the same theme.  He gave an estimate of “between 1/10 and 1/10,000” for the probability of human extinction during any decade in the near-term future – extinction arising from (for example) biochemical warfare, runaway global warming, nanotech pollution, or a bad super-intelligence singularity. In contrast, he said, only a tiny fraction of the global GDP is devoted to management of existential risks.  That kind of “lack of paying attention” meant that humanity deserved, in Tegmark’s view, a “mid-term rating” of just D-.  Our focus, far too much of the time, is on the next election cycle, or the next quarterly financial results, or other short term questions.
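
As a parenthetical aside of my own: it's worth spelling out what Tegmark's range implies over a century. The sketch below is a back-of-the-envelope calculation, assuming (as a simplification he didn't spell out) that the per-decade probability stays constant and that the decades are independent:

    # Cumulative survival odds implied by Tegmark's per-decade range -
    # a back-of-the-envelope calculation assuming a constant per-decade
    # extinction probability and independence between decades.

    for p in (1/10, 1/10_000):
        survival_century = (1 - p) ** 10   # 10 decades = 1 century
        print(f"per-decade risk {p:.2%}: "
              f"chance of surviving the century {survival_century:.2%}")

    # per-decade risk 10.00%: chance of surviving the century 34.87%
    # per-decade risk 0.01%: chance of surviving the century 99.90%

At the top of Tegmark's range, the chance of surviving ten decades is barely one in three; at the bottom, it's 99.9%. The sheer width of that range reinforces his complaint about how little analytical attention these risks receive.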

One person who is seeking to encourage greater attention to existential risks is Skype co-founder Jaan Tallinn (who earlier in the year gave a very fine talk at a Humanity+ event I organised in London). Jaan's main presentation at the 2011 Singularity Summit will be on Day Two, but he briefly popped up on stage on Day One to announce a significant new fundraising commitment: he will personally match any donations made over the weekend to the Singularity Institute, up to a total of $100,000.

With the right resources, wisely deployed, we ought to see collective human intelligence achieve lots more regeneration – not just of broken limbs, but also of troubled societies and frustrated lives – whilst at the same time steering humanity away from the existential risks latent in these super-powerful technologies.  The discussion will continue tomorrow.
