20 February 2013

The world’s most eminent sociologist highlights the technological singularity

It’s not every day that the world’s most eminent sociologist reveals himself as having an intense interest in the Technological Singularity, and urges that “Everyone should read the books of Ray Kurzweil”. That’s what happened this evening.

The speaker in question was Lord Anthony Giddens, one of whose many claims to fame is his description as “Tony Blair’s guru”.

His biography states that, “According to Google Scholar, he is the most widely cited sociologist in the world today.”

In support of that claim, a 2009 article in the Times Higher Education supplement notes the following:

Giddens trumps Marx…

A list published today by Times Higher Education reveals the most-cited academic authors of books in the humanities…

As one of the world’s pre-eminent sociologists, Anthony Giddens, the Labour peer and former director of the London School of Economics, will be used to academic accolades.

But even he may be pleased to hear that his books are cited more often than those of iconic thinkers such as Sigmund Freud and Karl Marx.

Lord Giddens, now emeritus professor at LSE and a life fellow at King’s College, Cambridge, is the fifth most-referenced author of books in the humanities, according to the list produced by scientific data analysts Thomson Reuters.

The only living scholar ranked higher is Albert Bandura, the Canadian psychologist and pioneer of social learning theory at Stanford University…

Freud enters the list in 11th place. The American linguist and philosopher Noam Chomsky, who is based at the Massachusetts Institute of Technology and whose political books have a broader readership than some of his peers in the list, is 15th…

Lord Giddens is now 75 years old. Earlier this evening, I saw for myself evidence of his remarkable calibre. He gave an hour-long lecture in front of a packed audience at the London School of Economics, without any notes or slides, and without any hesitation, deviation, or verbal infelicity. Throughout, his remarks bristled with compelling ideas. He was equally competent – and equally fluent – when it came to the question-and-answer portion of the event.

LSE Events

The lecture was entitled “Off the edge of history: the world in the 21st century”. From its description on the LSE website, I had already identified it as relevant to many of the themes that I seek to have discussed in the series of London Futurists meetups that I chair:

The risks we face, and the opportunities we have, in the 21st century are in many respects quite different from those experienced in earlier periods of history. How should we analyse and respond to such a world? What is a rational balance of optimism and pessimism? How can we plan for a future that seems to elude our grasp and in some ways is imponderable?

As the lecture proceeded, I was very pleasantly impressed by the sequence of ideas. I append here a lightly edited copy of the verbatim notes I took on my Psion Series 5mx, supplemented by a few additions from the #LSEGiddens tweet stream. Added afterwards: the LSE has made a podcast of the talk available.

My rough notes from the talk follow… (text in italics is my parenthetical commentary)

This large lecture room is completely full, twenty minutes before the lecture is due to start. I’m glad I arrived early!

Today’s topic is work in progress – he’s writing a book on the same topic, “Off the edge of history”.

  • Note this is a very different thesis from “the end of history”.

His starting point is in the subject of geology – a long way from sociology. He’s been working on climate change for the last seven years. It’s the first time he has worked so closely with scientists.

Geologists tend to call the present age “the Holocene age” – the last 12,000 years. But the atmospheric chemist Paul Crutzen recommended that we should use a different term for the last 200 years or so – we’re now in the Anthropocene age:

  • In this period, human activity strongly influences nature and the environment
  • This re-orients and restructures the world of geology
  • A great deal of what used to be natural, is natural no longer
  • Human beings are invading nature, in a way that has no precedent
  • Even some apparently natural catastrophes, like tsunamis and volcanoes, might be linked to impacts from humans.

We have continuities from previous history (of course), but so many things are different nowadays. One example is the impacts of new forms of biological threat. Disease organisms have skipped from animals to human beings. New disease organisms are being synthesised.

There are threats facing us which are in no way extensions of previous threats.

For example, what is the Internet doing to the world? Is it a gigantic new mind? Are you using the mobile phone, or is the mobile phone using you? There’s no parallel from previous periods. Globally connected electronic communications are fundamentally different from what went before.

When you are dealing with risks you’ve never experienced before, you can’t measure them. You’ll only know for sure when it’s too late. We’re on the edge of history because we are dealing with risks we have never faced before.

Just as we are invading nature, we are invading human nature in a way that’s unprecedented.

Do you know about the Singularity? (A smattering of people in the audience raise their hands.) It’s mind-blowing. You should find out about it:

  • It’s based on a mathematical concept
  • It describes accelerating processes of growth, rapidly receding towards a far-off point very different from today.

Everyone should read the books of Ray Kurzweil – who has recently become an Engineering Director at Google.

Kurzweil’s book makes it clear that:

  • Within our lifetimes, human beings will no longer be human beings
  • There are multiple accelerating rates of change in several different disciplines
  • The three main disciplines contributing to the singularity are nanotech, AI, and biotech
  • All are transforming our understanding of the human body and, more importantly, the human mind
  • This is described by the “Law of accelerating returns”
  • Progress is not just linear but geometrical.
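The linear-versus-geometric contrast in those notes can be made concrete with a small numeric sketch. The starting value and the 2-year doubling period below are purely illustrative assumptions, not figures from the lecture:

```python
# Contrast linear growth with the geometric (exponential) growth
# that Kurzweil's "Law of Accelerating Returns" describes.
# Both start from the same baseline of 1.0; the 2-year doubling
# period is an illustrative assumption.

def linear_growth(start, step_per_year, years):
    """Capability after `years` of constant additive progress."""
    return start + step_per_year * years

def geometric_growth(start, doubling_period_years, years):
    """Capability after `years` of repeated doubling."""
    return start * 2 ** (years / doubling_period_years)

for years in (10, 20, 40):
    lin = linear_growth(1.0, 1.0, years)
    geo = geometric_growth(1.0, 2.0, years)
    print(f"{years:2d} years: linear {lin:6.0f}x, geometric {geo:10.0f}x")
```

Even over a single generation the two curves diverge spectacularly – which is why, on this view, intuitions trained on linear change mislead us about the decades ahead.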

This book opens our minds to multiple possibilities of what it means to be human, as technology penetrates us.

Nanotech is like humans playing God:

  • It’s a level below DNA
  • We can use it to rebuild many parts of the human body, and other artefacts in the world.

Kurzweil states that human beings will develop intelligence which is 100x higher than at present:

  • Because of merging of human bodies with computers
  • Because of the impact of nanotech.

Kurzweil gives this advice: if you are relatively young: live long, in order to live forever:

  • Immortality is no longer a religious concept, it’s now a tangible prospect
  • It could happen in the next 20-40 years.

This is a fantastic expansion of what it means to be human. Importantly, it’s a spread of opportunities and risk.

These were religious notions before. Now we have the real possibility of apocalypse – we’ve had it since the 1950s, when the first thermonuclear weapons were invented. The possibility of immortality has become real too.

We don’t know how to chart these possibilities. None of us know how to fill in that gap.

What science fiction writers were writing 20 years ago is now in the newspapers every day. Reading from the Guardian from a couple of days ago:

Paralysed people could get movement back through thought control

Brain implant could allow people to ‘feel’ the presence of infrared light and one day be used to move artificial limbs

Scientists have moved closer to allowing paralysed people to control artificial limbs with their thoughts following a breakthrough in technology…

…part of a series of sessions on advances in brain-machine interfaces, at which other scientists presented a bionic hand that could connect directly to the nerves in a person’s arm and provide sensory feedback of what they were holding.

Until now, neurological prosthetics have largely been demonstrated as a way to restore a loss of function. Last year, a 58-year-old woman who had become paralysed after a stroke demonstrated that she could use a robotic arm to bring a cup of coffee to her mouth and take a sip, just by thinking about it…

In the future…  it might be possible to use prosthetic devices to restore vision – for example, if a person’s visual cortex had been damaged – by training a different part of the brain to process the information.

Or you could even augment normal brain function in non-invasive ways to deliver the information.

We could learn to detect other sorts of signals that we normally don’t see or experience; the perceptual range could increase.

These things are real; these things are happening. There is a kind of geometric advance.

The literature of social scientists has a big division here, between doomsday thinkers and optimists, with respected thinkers in both camps.

Sir Martin Rees is an example of the first category. He wrote a book called “Our final century”:

  • It examines forms of risk that could destroy our society
  • Climate change is a huge existential risk – most people aren’t aware of it
  • Nanotech is another existential risk – grey goo scenario
  • We also have lots of weaponry: drones circulating above the world even as we speak
  • Most previous civilisations have ended in disaster – they subverted themselves
  • For the first time, we have a civilisation on a global scale
  • It could well be our final century.

Optimists include Matt Ridley, a scientist turned businessman, and author of the book “The rational optimist”:

  • Over the course of human civilisation there is progress – including progress in culture, and medical advances.

This is a big division. How do we sort this out? His view: it’s not possible to decide. We need to recognise that we live in a “high opportunity, high risk society”:

  • The level of opportunity and level of risk are both much higher than before
  • But risk and opportunity always intertwine
  • “In every risk there’s an opportunity…” and vice versa
  • We must be aware of the twists and tangles of risk and opportunity – their interpenetration.

Studying this area has led him to change some of his views from before:

  • He now sees the goal of sustainability as a harder thing than before
  • Living within our limits makes sense, but we no longer know what our limits are
  • We have to respect limits, but also recognise that limits can be changed.

For example, could we regard a world population of 9 billion people as an opportunity, rather than just a risk?

  • It would lead us to put lots more focus on food innovation, blue sky tech for agriculture, social reform, etc – all good things.

A few points to help us sort things out:

  1. One must never ignore risk – we live in a world subject to extreme systemic risk; we mustn’t live in denial of risk in our personal lives (like denying the risks of smoking or riding motorcycles) or at a civilisational level
  2. We have to think about the future in a very different way, because the future has become opaque to us; the Enlightenment thought was that we would march in and make sense of history (Marx had similar thoughts), but it turns out that the future is actually opaque – for our personal lives as well as for society (he wonders whether the EU will still exist by the time he finishes his book on the future of the EU!)
  3. We’ll have to learn to backcast rather than forecast – to borrow an idea from the study of climate change. We have to think ahead, and then think back.

This project is the grand task of social sciences in the 21st century.

One more example: the possibility of re-shoring of jobs in the US and EU:

  • 3D printing is an unbelievable technological invention
  • 3D printers can already print shoes
  • A printer in an MIT lab can print whole systems – eg in due course a plane which will fly directly out of the computer
  • This will likely produce a revolution in manufacturing – many, many implications.

Final rhetorical question: As we confront this world, should we be pessimists or optimists? This is the same question he used to consider, at the end of the talks he used to give on climate change.

His answer: we should bracket out that opposition; it’s much more important to be rational than either pessimist or optimist:

  • Compare the case of someone with very serious cancer – they need more than wishful thinking. They need a rational underpinning for their optimism and/or pessimism.

Resounding applause from the audience. Then commence questions and answers.

Q: Are today’s governance structures, at local and national levels, fit to deal with these issues?

A: No. For example, the European Union has proved not to be the vanguard of global governance that we hoped it would be. Climate change is another clear example: twenty years of UN meetings with no useful outcome whatsoever.

Q: Are our human cognitive powers capable to deal with these problems? Is there a role for technology to assist our cognitive powers?

A: Our human powers are facing a pretty difficult challenge. It’s human nature to put off what we don’t have to do today – like 16-year-olds taking up smoking who can’t really see themselves being 40. Maybe a supermind would be more effective.

Q: Although he has given examples where current governance models are failing, are there any bright spots of hope for governance? (The questioner in this case was me.)

A: There are some hopeful signs for economic governance. Surely bankers will not get away with what they’ve done. Movement to address tax havens (“onslaught”) – bring the money back as well as bringing the jobs back. Will require global co-operation. Nuclear proliferation (Iran, Israel) is as dangerous as climate change. The international community has done quite well with non-proliferation, but it only takes one nuclear war for things to go terribly wrong.

Q: What practical advice would he give to the Prime Minister (or to Ed Miliband)?

A: He supports Ed Miliband trying to restructure capitalism; there are similar moves happening in the US too. However, with global issues like these, any individual prime minister is limited in his influence. For better or for worse, Ray Kurzweil has more influence than any politician!

(Which is a remarkable thing to say, for someone who used to work so closely with Prime Minister Tony Blair…)

8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than we expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid “post-human”:

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”:

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
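Jaan’s overhang point lends itself to a back-of-envelope sketch. The two-year doubling period below is the conventional Moore’s-Law rule of thumb, assumed here for illustration rather than taken from his slides:

```python
import math

# Back-of-envelope sketch of a "hardware overhang": while the key
# AI algorithm remains undiscovered, hardware keeps doubling, so
# the first working algorithm inherits far more compute than the
# minimum that would have sufficed at year zero. The 2-year
# doubling period is an assumed rule of thumb.

def hardware_overhang(years_unsolved, doubling_period_years=2.0):
    """Factor by which available compute exceeds the year-zero baseline."""
    return 2 ** (years_unsolved / doubling_period_years)

for years in (10, 20, 30):
    factor = hardware_overhang(years)
    print(f"{years} years unsolved -> ~{factor:,.0f}x "
          f"({math.log10(factor):.1f} orders of magnitude)")
```

On these assumptions, two unsolved decades alone give the first working algorithm roughly three orders of magnitude more compute than it strictly needs – the “several orders of magnitude” Jaan describes.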

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  How quickly that can spread around the Internet.  Now imagine that the author of that malware is 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than by a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

19 March 2011

A singularly fine singularitarian panel?

Filed under: futurist, Humanity Plus, Kurzweil, Singularity — David Wood @ 12:37 pm

In a moment, I’ll get to the topic of a panel discussion on the Singularity – a panel I’ve dubbed (for reasons which should become clear) “Post Transcendent Man”. It’s a great bunch of speakers, and I’m expecting an intellectual and emotional mindfest.  But first, some background.

In the relatively near future, I expect increasing numbers of people to navigate the sea change described recently by writer Philippe Verdoux in his article Transhumanists coming out of the closet:

It wasn’t that long ago that listing transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on, as interests on one’s CV might result in a bit of embarrassment.

Over just the past decade and a half, though, there seems to have been a sea change in how these issues are perceived by philosophers and others: many now see them as legitimate subjects of research; they have, indeed, acquired a kind of academic respectability that they didn’t previously possess.

There are no doubt many factors behind this shift. For one, it seems to be increasingly apparent, in 2011, that technology and biology are coming together to form a new kind of cybernetic unity, and furthermore that such technologies can be used to positively enhance (rather than merely alter) features of our minds and bodies.

In other words, the claim that humans can “transcend” (a word I don’t much like, by the way) our biological limitations through the use of enhancement technologies seems to be increasingly plausible – that is, empirically speaking.

Thus, it seems to be a truism about our contemporary world that technology will, in the relatively near future, enable us to alter ourselves in rather significant ways. This is one reason, I believe, that more philosophers are taking transhumanism seriously…

On a personal note, when I first discovered transhumanism, I was extremely skeptical about its claims (which, by the way, I think every good scientific thinker should be). I take it that transhumanism makes two claims in particular, the first “descriptive” and the second “normative”: (i) that future technologies will make it possible for us to radically transform the human organism, potentially enabling us to create a new species of technologized “posthumans”; and (ii) that such a future scenario is preferable to all other possible scenarios. In a phrase: we not only can but ought to pursue a future marked by posthumanity…

One factor that leads people to pay more serious attention to this bundle of ideas – transhumanism, human enhancement, the Singularity, technology-driven evolution, existential risks, and so on – is the increasing coverage of these ideas in thoughtful articles in the mainstream media.  In turn, many of these articles have been triggered by the film Transcendent Man by director Barry Ptolemy, featuring the groundbreaking but controversial ideas and projects of inventor and futurist Ray Kurzweil.  Here’s a trailer for the film:

The film has received interesting commentary in, among other places:

I had mixed views when watching the movie myself:

  • On the one hand, it contains a large number of profound sound bites – statements made by many of the talking heads on screen; any of these sound bites could, potentially, change someone’s life, if they reflect on the implications;
  • The film also covers many details of Kurzweil’s own biography, with archive footage of him at different stages of his career – this filled in many gaps in my own understanding, and gave me renewed respect for what he has accomplished as a professional;
  • On the other hand, although there are plenty of critical comments among the sound bites – comments highlighting potential problems or issues with Kurzweil’s ideas – the film never really lets the debate fly;
  • I found myself thinking – yes, that’s an interesting and important point, now let’s explore this further – but then the movie switched to a different frame.

The movie has its official UK premiere at the London Science Museum on Tuesday 5th April.  Kurzweil himself will be in attendance, to answer questions raised by the audience.  The last time I checked, tickets were sold out.

Post Transcendent Man

To drill down more deeply into the potentially radical implications of Kurzweil’s ideas and projects, the UK chapter of Humanity+ has arranged an event in  Birkbeck College (WC1E 7HX), Torrington Square in Central London on the afternoon (2pm-4.15pm) of Saturday 9th April.  We’ll be in Malet Street lecture room B34 – which seats a capacity audience of 177 people.  For more details about logistics, registration, and so on, see the official event website, or the associated Facebook page.

The event is privileged to feature an outstanding set of speakers and panellists who represent a range of viewpoints about the Singularity, transhumanism, and human transcendence.  In alphabetical order by first name:

Dr Anders Sandberg is a James Martin research fellow at the Future of Humanity Institute at Oxford University. As a part of the Oxford Martin School he is involved in interdisciplinary research on cognitive enhancement, neurotechnology, global catastrophic risks, emerging technologies and applied rationality. He has been writing about and debating transhumanism, future studies, neuroethics and related questions for a long time. He is also an associate of the Oxford Centre for Neuroethics and the Uehiro Centre for Practical Ethics, as well as co-founder of the Swedish think tank Eudoxa.

Jaan Tallinn is one of the programmers behind Kazaa and a founding engineer of Skype. He is also a partner in Ambient Sound Investments as well as a member of the Estonian President’s Academic Advisory Board. He describes himself as singularitarian/hacker/investor/physicist (in that order). In recent years Jaan has found himself closely following and occasionally supporting the work that SIAI and FHI are doing. He agrees with Kurzweil that the topic of the Singularity can be extremely counterintuitive to the general public, and has tried to address this problem in a few public presentations at various venues.

Nic Brisbourne is a partner at venture capital fund DFJ Esprit and blogger on technology and startup issues at The Equity Kicker. As such he’s interested in when technology and science projects become products and businesses. He has a personal interest in Kurzweil’s ideas and longevity in particular and he says he’s keen to cross the gap from personal to professional and find exciting startups generating products in this area, although he thinks that the bulk of the commercialisation opportunities are still a year or two out.

Paul Graham Raven is a writer, literary critic and bootstrap big-picture futurist; he prods regularly at the fuzzy boundary of the unevenly-distributed future at futurismic.com. He is Editor-in-Chief and Publisher of The Dreaded Press, a rock music reviews webzine, and Publicist and PR officer for PS Publishing – perhaps the UK’s foremost boutique genre publisher. He says he’s also a freelance web-dev to the publishing industry, a cack-handed fuzz-rock guitarist, and in need of a proper haircut.

Russell Buckley is a leading practitioner, speaker and thinker about mobile and mobile marketing. MobHappy, his blog about mobile technology, is one of the most established focusing on this area. He is also a previous Global Chairman of the Mobile Marketing Association, a founder of Mobile Monday in Germany and holds numerous non-executive positions in mobile technology companies. Russell learned about mobile advertising startup, AdMob, soon after its launch, and joined as its first employee in 2006, with the remit of launching AdMob into the EMEA market. Four years later, AdMob was sold to Google for $750m. By night though, Russell is fascinated by the socio-political implications of technology and recently graduated from the Executive Program at the Singularity University, founded by Ray Kurzweil and Peter Diamandis to “educate and inspire leaders who strive to understand and facilitate the development of exponentially advancing technologies in order to address humanity’s grand challenges”.

The discussion continues

The event will start, at 2pm, with the panellists introducing themselves, and their core thinking about the topics under discussion.  As chair, I’ll ask a few questions, and then we’ll open up for questions and comments from the audience.  I’ll be particularly interested to explore:

  • How people see the ideas of accelerating technology making a difference in their own lives – both personally and professionally.  Three of us on the stage were on founding teams of companies that made sizeable waves in the technology world (Jaan Tallinn, Skype; Russell Buckley, AdMob; myself, Symbian).  Where do we see rapidly evolving technology (as often covered by Kurzweil) taking us next?
  • People’s own experiences with bodies such as the Singularity University, the Singularity Institute, and the Future of Humanity Institute at Oxford University.  Are these bodies just talking shops?  Are they grounded in reality?  Are they making a substantial positive difference in how humanity responds to the issues and challenges of technology?
  • Views as to the best way to communicate ideas like the Singularity – favourite films, science fiction, music, and other media.  How does the movie “Transcendent Man” compare?
  • Reservations and worries (if any) about the Singularity movement and the ways in which Kurzweil expresses his ideas.  Are the parallels with apocalyptic religions too close for comfort?
  • Individuals’ hopes and aspirations for the future of technology.  What role do they personally envision playing in the years ahead?  And what timescales do they see as credible?
  • Calls to action – what (if anything) should members of the audience change about their lives, in the light of analysing technology trends?

Which questions do you think are the most important to raise?

Request for help

If you think this is an important event, I have a couple of suggestions for you:

The discussion continues (more)

Dean Bubley, founder of Disruptive Analysis and a colleague of mine from the mobile telecoms industry, has organised the “Inaugural UK Humanity+ Evening Salon” on Wednesday April 13th, from 7pm to 10pm.  Dean describes it as follows:

Interested in an evening discussing the future of the human species & society? Aided by a drink or two?

This is the first “salon” event for the London branch of “Humanity Plus”, or H+ for short. It’s going to be an informal evening event involving a stimulating guest speaker, Q&A and lively discussion, all aided by a couple of drinks. It fits alongside UKH+’s larger Saturday afternoon lecture sessions, and occasional all-day major conferences…

It will be held in central London, in a venue TBC closer to the time. Please contact Dean Bubley (facebook.com/bubley), the convener & moderator, for more details.

For more details, see the corresponding Facebook page, and RSVP there so that Dean has an idea of the likely numbers.

11 September 2010

No escape from technology

Filed under: books, evolution, Kurzweil, UKH+ — David Wood @ 1:51 am

We can never escape the bio-technological nexus and get “back to nature” – because we have never lived in nature.

That sentence, from the final chapter of Timothy Taylor’s “The Artificial Ape: How technology changed the course of human evolution”, sums up one of my key takeaways from this fine book.

It’s a book that’s not afraid to criticise giants.  Aspects of Charles Darwin’s thinking are examined and found wanting.  Modern day technology visionary Ray Kurzweil also comes under criticism:

The claims of Ray Kurzweil (that we are approaching a critical moment when biology will be overtaken by artificial constructs) … lack a critical historical – and prehistoric – perspective…

Kurzweil argues that the age of machines is upon us …  and that technology is reaching a point where it can innovate itself, producing ever more complex forms of artificial intelligence.  My argument in this book is that, scary or not, none of this is new.  Not only have we invented technology, from the stone tools to the wheeled wagon, from spectacles to genetic engineering, but that technology, within a framework of some 2 to 3 million years, has, physically and mentally, made us.

Taylor’s book portrays the emergence of humanity as a grand puzzle.  From a narrow evolutionary perspective, humans should not have come into existence.  Our heads are too large – in many cases, too large to pass through the narrow gap in the mother’s pelvis.  Theory suggests, and fossils confirm, that the prehistoric change from walking on all fours to walking upright had the effect of narrowing this gap in the pelvis.  The resulting evolutionary pressures should have produced smaller brains.  Yet, over subsequent eons, the brain instead became larger and larger.

That’s just the start of the paradox.  The human baby is astonishingly vulnerable.  Worse, it makes its mother increasingly vulnerable too.  How could “survival of the fittest” select this ridiculously unfit outcome?

Of course, a larger brain has survival upsides as well as survival downsides.  It enables greater sociality, and the creation of sophisticated tools, including weapons.  But Taylor marshals evidence suggesting that the first use of tools by pre-humans long pre-dated the growth in head size.  This leads to the suggestion that two tools, in particular, played vital roles in enabling the emergence of the larger brain:

  • The invention of the sling, made from fur, which enabled mothers to carry their infants hands-free
  • The invention of cooking, with fire, that made it easier for nourishment to be quickly obtained from food.

To briefly elaborate the second point: walking upright means the digestive gut becomes compressed.  It becomes shorter.  There’s less time for nourishment to be extracted from food.  Moreover, a larger head increases the requirements for fast delivery of nourishment.  Again, from a narrow evolutionary point of view, the emergence of big-brained humans makes little sense.  But cooking comes to the rescue.  Cooking and the child-carrying sling are two examples of technology that enabled the emergence of humans.

The resulting creatures – us – are weaker in a pure biological sense than our evolutionary forebears.  Without our technological aids, we would fare poorly in any contest of survival with other apes.  It is only the combination of technology-plus-nature that makes us stronger.

We’re used to thinking that the development of tools took place in parallel with increasing pre-human intelligence.  Taylor’s argument is that, in a significant way, the former preceded the latter.  Without the technology, the pre-human brain could not expand.

The book uses this kind of thinking to address various other puzzles:

  • For example, the technology-impoverished natives from the tip of South America whom Darwin met on his voyage of discovery on the Beagle had eyesight far better than even the keenest-eyed sailor on the ship.  Technological progress went hand-in-hand with a weakening of biological power.
  • Taylor considers the case of the aborigines of Tasmania, who were technologically backward compared to those of mainland Australia: they lacked all clothing, and apparently could not make fire for themselves.  The archaeological record indicates that the Tasmanian aborigines actually lost the use of various technologies over the course of several millennia.  Taylor reaches a different conclusion from popular writer Jared Diamond, who seems to take it for granted that this loss of technology made the aborigines weaker.  Taylor suggests that, in many ways, these aborigines became stronger and fitter, in their given environment, as they abandoned their clothing and their fishing tools.

There are many other examples – but I’ll leave it to you to read the book to find out more.  The book also has some fascinating examples of ancient tools.

I think that Taylor’s modifications of Darwin’s ideas are probably right.  What of his modifications of Kurzweil’s ideas?  Is the technological spurt of the present day really “nothing new”?  Well, yes and no.  I believe Kurzweil is correct to point out that the kinds of changes that are likely to be enabled by technology in the relatively near future – perhaps in the lifetime of many people who are already alive – are qualitatively different from anything that has gone before:

  • Technology might extend our lifespans, not just by a percentage, but by orders of magnitude (perhaps indefinitely)
  • Technology might create artificial intelligences that are orders of magnitude more powerful than any intelligence that has existed on this planet so far.

As I’ve already mentioned in my previous blogpost – which I wrote before starting to read Taylor’s book – Timothy Taylor is the guest speaker at the September meeting of the UK chapter of Humanity+.  People who attend will have the chance to hear more details of these provocative theories, and to query them directly with the author.  There will also be an opportunity to purchase signed copies of his book.  I hope to see some of you there!

I’ll give the last words to Dr Taylor:

Technology, especially the baby-carrying sling, allowed us to push back our biological limits, trading in our physical strength for an increasingly retained infantile early helplessness that allowed our brains to expand, forming themselves under increasingly complex artificial conditions…  In terms of brain growth, the high-water mark was passed some 40,000 years ago.  The pressure on that organ has been off ever since we started outsourcing intelligence in the form of external symbolic storage.  That is now so sophisticated through the new world information networking systems that what will emerge in future may no longer be controlled by our own volition…

[Technology] could also destroy our planet.  But there is no back-to-nature solution.  There never has been for the artificial ape.

27 February 2010

Achieving a 130-fold improvement in 40 years

Filed under: books, Economics, green, Kurzweil, RSA, solar energy, sustainability — David Wood @ 3:23 pm

One reason I like London so much is the quality of debate and discussion that takes place, at least three times most weeks, at the RSA.

The full name of this organisation is “the Royal Society for the encouragement of Arts, Manufactures and Commerce”.  It’s been holding meetings since 1754.  Early participants included Benjamin Franklin, Samuel Johnson, and Richard Arkwright.

Recently, there have been several RSA meetings addressing the need for significant reform of how the global economy operates.  Otherwise, these speakers imply, the future will be much bleaker than the present.

On Wednesday, Professor Tim Jackson of the University of Surrey led a debate on the question “Is Prosperity Without Growth Possible?”  Professor Jackson recently authored the book “Prosperity Without Growth: Economics for a Finite Planet“.  The book contains an extended version of his remarks at the debate.

I find myself in agreement with a great deal of what the book says:

  • Continuous economic growth is a shallow and, by itself, dangerous goal;
  • Beyond an initial level, greater wealth has only a weak correlation with greater prosperity;
  • Greater affluence can bring malaise – especially in countries with significant internal inequalities;
  • Consumers frequently find themselves spending money they don’t have, to buy new goods they don’t really need;
  • The recent economic crisis provides us with an important opportunity to reflect on the operation of economics;
  • “Business as usual” is not a sustainable answer;
  • There is an imperative to consider whether society can operate without its existing commitment to regular GDP growth.

What makes this book stand out is its recognition of the enormous practical problems in stopping growth.  Both growth and de-growth face significant perils.  As the start of chapter 12 of the book states:

Society is faced with a profound dilemma.  To resist growth is to risk economic and social collapse.  To pursue it relentlessly is to endanger the ecosystems on which we depend for long-term survival.

For the most part, this dilemma goes unrecognised in mainstream policy…  When reality begins to impinge on the collective consciousness, the best suggestion to hand is that we can somehow ‘decouple’ growth from its material impacts…

The sheer scale of this task is rarely acknowledged.  In a world of 9 billion people all aspiring to western lifestyles, the carbon intensity of every dollar of output must be at least 130 times lower in 2050 than it is today…

Never mind that no-one knows what such an economy looks like.  Never mind that decoupling isn’t happening on anything like that scale.  Never mind that all our institutions and incentive structures continually point in the wrong direction.  The dilemma, once recognised, looms so dangerously over our future that we are desperate to believe in miracles.  Technology will save us.  Capitalism is good at technology…

This delusional strategy has reached its limits.  Simplistic assumptions that capitalism’s propensity for efficiency will stabilise the climate and solve the problem of resource scarcity are almost literally bankrupt.  We now stand in urgent need of a clearer vision, braver policy-making, something more robust in the way of a strategy with which to confront the dilemma of growth.

The starting point must be to unravel the forces that keep us in damaging denial.  Nature and structure conspire together here.  The profit motive stimulates a continual search for newer, better or cheaper products and services.  Our own relentless search for novelty and social status locks us into an iron cage of consumerism.  Affluence itself has betrayed us.

Affluence breeds – and indeed relies on – the continual production and reproduction of consumer novelty.  But relentless novelty reinforces anxiety and weakens our ability to protect long-term social goals.  In doing so it ends up undermining our own well-being and the well-being of those around us.  Somewhere along the way, we lose the shared prosperity we sought in the first place.

None of this is inevitable.  We can’t change ecological limits.  We can’t alter human nature.  But we can and do create and recreate the social world. Its norms are our norms.  Its visions are our visions.  Its structures and institutions shape and are shaped by those norms and visions.  This is where transformation is needed…

As I said, I find myself in agreement with a great deal of what the book says.  The questions raised in the book deserve a wide hearing.  Society needs higher overarching goals than merely increasing our GDP.  Society needs to focus on new priorities, which take into account the finite nature of the resources available to us, and the risks of imminent additional ecological and economic disaster.

However, I confess to being one of the people who believe (with some caveats…) that “technology will save us”.  Let’s look again at this figure of a 130-fold decrease needed, between now and 2050.

The figure of 130 comes from a calculation in chapter 5 of the book.  I have no quibble with the figure.  It comes from the Paul Ehrlich equation:

I = P * A * T


  • I is the impact on the environment resulting from consumption
  • P is the population
  • A is the consumption or income level per capita (affluence)
  • T is the technological intensity of economic output.

Jackson’s book considers various scenarios.  Scenario 4 assumes a global population of 9 billion by 2050, all enjoying a lifestyle equivalent to that of the average EU citizen, with that level of affluence growing at a modest 2% per annum over the intervening 40 years.  To bring today’s carbon intensity of economic output, I, down to the level seen by the IPCC as required to avoid catastrophic climate change will require a 130-fold reduction in T in the meantime.
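To sketch how that identity drives the calculation, the snippet below rearranges I = P * A * T to find the reduction needed in T once the population and affluence ratios are fixed.  The input numbers here are illustrative placeholders of my own, not Jackson's actual inputs.

```python
def required_intensity_reduction(pop_ratio, affluence_ratio, impact_ratio):
    """Rearrange the Ehrlich identity I = P * A * T.

    pop_ratio:       P(2050) / P(today)
    affluence_ratio: A(2050) / A(today)
    impact_ratio:    I(target) / I(today) -- how far total impact must fall
    Returns the factor by which T must shrink: T(today) / T(2050).
    """
    return pop_ratio * affluence_ratio / impact_ratio

# Illustrative placeholders only (not the book's exact inputs):
# population 6.8bn -> 9bn, affluence growing 2% p.a. for 40 years,
# total impact required to fall to roughly 1/45th of today's level.
factor = required_intensity_reduction(pop_ratio=9.0 / 6.8,
                                      affluence_ratio=1.02 ** 40,
                                      impact_ratio=1 / 45)
```

With placeholder inputs in that ballpark, the required reduction in T comes out in the region of 130, matching the shape of Jackson's argument: even modest growth in P and A multiplies up the burden placed on technology.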

How feasible is an improvement factor of 130 in technology, over the next 40 years?  How good is the track record of technology at solving this kind of problem?

Some of the other speakers at the RSA event were hesitant to make any predictions for a 40 year time period.  They noted that history has a habit of making this kind of prediction irrelevant.  Jackson’s answer is that, since we have little confidence in making a significant change in T, we should look for ways to reduce A.  Jackson is also worried that recent talk of a ‘Green New Deal’:

  • Is still couched in language of economic growth, rather than improvement in prosperity;
  • Has seen little translation into action, since first raised during 2008-9.

My own answer is that 130 represents just over 7 doublings (2 raised to the 7th power is 128) and that at least some parts of technology have no problems in improving by seven doubling generations over 40 years.  Indeed, taking two years as the usual Moore’s Law doubling period, for improvements in semiconductor density, would require only 14 years for this kind of improvement, rather than 40.
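The doubling arithmetic above is easy to check in a couple of lines:

```python
import math

improvement = 130
doublings = math.log2(improvement)   # just over 7, since 2**7 = 128
years_at_moore_rate = 2 * doublings  # one doubling every two years
print(f"{doublings:.2f} doublings, {years_at_moore_rate:.1f} years")
# -> 7.02 doublings, 14.0 years
```

So a technology improving at the classic Moore's Law rate would deliver a 130-fold gain in about 14 years, comfortably inside the 40-year window.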

To consider how Moore’s Law improvements could transform the energy business, radically reducing its carbon intensity, here are some remarks by futurist Ray Kurzweil, as reported by LiveScience Senior Editor Robin Lloyd:

Futurist and inventor Ray Kurzweil is part of a distinguished panel of engineers that says solar power will scale up to produce all the energy needs of Earth’s people in 20 years.

There is 10,000 times more sunlight than we need to meet 100 percent of our energy needs, he says, and the technology needed for collecting and storing it is about to emerge as the field of solar energy is going to advance exponentially in accordance with Kurzweil’s Law of Accelerating Returns. That law yields a doubling of price performance in information technologies every year.

Kurzweil, author of “The Singularity Is Near” and “The Age of Intelligent Machines,” worked on the solar energy solution with Google Co-Founder Larry Page as part of a panel of experts convened by the National Association of Engineers to address the 14 “grand challenges of the 21st century,” including making solar energy more economical. The panel’s findings were announced here last week at the annual meeting of the American Association for the Advancement of Science.

Solar and wind power currently supply about 1 percent of the world’s energy needs, Kurzweil said, but advances in technology are about to expand with the introduction of nano-engineered materials for solar panels, making them far more efficient, lighter and easier to install. Google has invested substantially in companies pioneering these approaches.

Regardless of any one technology, members of the panel are “confident that we are not that far away from a tipping point where energy from solar will be [economically] competitive with fossil fuels,” Kurzweil said, adding that it could happen within five years.

The reason why solar energy technologies will advance exponentially, Kurzweil said, is because it is an “information technology” (one for which we can measure the information content), and thereby subject to the Law of Accelerating Returns.

“We also see an exponential progression in the use of solar energy,” he said. “It is doubling now every two years. Doubling every two years means multiplying by 1,000 in 20 years. At that rate we’ll meet 100 percent of our energy needs in 20 years.”

Other technologies that will help are solar concentrators made of parabolic mirrors that focus very large areas of sunlight onto a small collector or a small efficient steam turbine. The energy can be stored using nano-engineered fuel cells, Kurzweil said.

“You could, for example, create hydrogen or hydrogen-based fuels from the energy produced by solar panels and then use that to create fuel for fuel cells”, he said. “There are already nano-engineered fuel cells, microscopic in size, that can be scaled up to store huge quantities of energy”, he said…
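Kurzweil's “doubling every two years” claim from the quote above can be projected forward in a few lines.  The 0.1% starting share is my assumption, implied by his “multiplying by 1,000 in 20 years” arithmetic, rather than a figure stated in the article:

```python
# Assumed starting point: solar meets 0.1% of world energy needs.
share = 0.001
years = 0
while share < 1.0:   # one doubling every two years until 100% of needs
    share *= 2
    years += 2
print(years)  # -> 20
```

Ten doublings take the share from 0.1% past 100%, which is exactly where the "20 years" headline comes from; the projection stands or falls entirely on whether the doubling rate persists.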

To be clear, I don’t see any of this as inevitable.  The economy as a whole could falter again, jeopardising “Kurzweil’s Law of Accelerating Returns”.  Less dramatically, Moore’s Law could run out of steam, or it might prove harder than expected to apply silicon improvements in systems for generating, storing, and transporting energy.  I therefore share Professor Jackson’s warning that capitalism, by itself, cannot be trusted to get the best out of technology.  That’s why this debate is particularly important.

22 November 2009

Timescales for Human Body Version 2.0

Filed under: aging, Kurzweil, nanotechnology — David Wood @ 7:21 pm

In the coming decades, a radical upgrading of our body’s physical and mental systems, already underway, will use nanobots to augment and ultimately replace our organs. We already know how to prevent most degenerative disease through nutrition and supplementation; this will be a bridge to the emerging biotechnology revolution, which in turn will be a bridge to the nanotechnology revolution. By 2030, reverse-engineering of the human brain will have been completed and nonbiological intelligence will merge with our biological brains.

The paragraph above is the abstract for the chapter by Ray Kurzweil in the book “The Scientific Conquest of Death“.  In that chapter, Ray sets out a vision for a route to indefinite human lifespans.

Here are a few highlights from the essay:

It’s All About Nanobots

In a famous scene from the movie, The Graduate, Benjamin’s mentor gives him career advice in a single word: “plastics.”  Today, that word might be “software,” or “biotechnology,” but in another couple of decades, the word is likely to be “nanobots.”  Nanobots—blood-cell-sized robots—will provide the means to radically redesign our digestive systems, and, incidentally, just about everything else.

In an intermediate phase, nanobots in the digestive tract and bloodstream will intelligently extract the precise nutrients we need, call for needed additional nutrients and supplements through our personal wireless local area network, and send the rest of the food we eat on its way to be passed through for elimination.

If this seems futuristic, keep in mind that intelligent machines are already making their way into our blood stream.  There are dozens of projects underway to create blood-stream-based “biological microelectromechanical systems” (bioMEMS) with a wide range of diagnostic and therapeutic applications.  BioMEMS devices are being designed to intelligently scout out pathogens and deliver medications in very precise ways…

A key question in designing this technology will be the means by which these nanobots make their way in and out of the body.  As I mentioned above, the technologies we have today, such as intravenous catheters, leave much to be desired.  A significant benefit of nanobot technology is that unlike mere drugs and nutritional supplements, nanobots have a measure of intelligence.  They can keep track of their own inventories, and intelligently slip in and out of our bodies in clever ways.  One scenario is that we would wear a special “nutrient garment” such as a belt or undershirt.  This garment would be loaded with nutrient bearing nanobots, which would make their way in and out of our bodies through the skin or other body cavities.

At this stage of technological development, we will be able to eat whatever we want, whatever gives us pleasure and gastronomic fulfillment, and thereby unreservedly explore the culinary arts for their tastes, textures, and aromas.  At the same time, we will provide an optimal flow of nutrients to our bloodstream, using a completely separate process.  One possibility would be that all the food we eat would pass through a digestive tract that is now disconnected from any possible absorption into the bloodstream.


This would place a burden on our colon and bowel functions, so a more refined approach will dispense with the function of elimination.  We will be able to accomplish this using special elimination nanobots that act like tiny garbage compactors.  As the nutrient nanobots make their way from the nutrient garment into our bodies, the elimination nanobots will go the other way.  Periodically, we would replace the nutrition garment for a fresh one.  One might comment that we do obtain some pleasure from the elimination function, but I suspect that most people would be happy to do without it.

Ultimately we won’t need to bother with special garments or explicit nutritional resources.  Just as computation will eventually be ubiquitous and available everywhere, so too will basic metabolic nanobot resources be embedded everywhere in our environment.  In addition, an important aspect of this system will be maintaining ample reserves of all needed resources inside the body.  Our version 1.0 bodies do this to only a very limited extent, for example, storing a few minutes of oxygen in our blood, and a few days of caloric energy in glycogen and other reserves.  Version 2.0 will provide substantially greater reserves, enabling us to be separated from metabolic resources for greatly extended periods of time.

Once perfected, we will no longer need version 1.0 of our digestive system at all.  I pointed out above that our adoption of these technologies will be cautious and incremental, so we will not dispense with the old-fashioned digestive process when these technologies are first introduced.  Most of us will wait for digestive system version 2.1 or even 2.2 before being willing to dispense with version 1.0.  After all, people didn’t throw away their typewriters when the first generation of word processors was introduced.  People held onto their vinyl record collections for many years after CDs came out (I still have mine).  People are still holding onto their film cameras, although the tide is rapidly turning in favor of digital cameras.

However, these new technologies do ultimately dominate, and few people today still own a typewriter.  The same phenomenon will happen with our reengineered bodies.  Once we’ve worked out the inevitable complications that will arise with a radically reengineered gastrointestinal system, we will begin to rely on it more and more.

Programmable Blood

As we reverse-engineer (learn the principles of operation of) our various bodily systems, we will be in a position to engineer new systems that provide dramatic improvements.  One pervasive system that has already been the subject of a comprehensive conceptual redesign is our blood…

I’ve personally watched (through a microscope) my own white blood cells surround and devour a pathogen, and I was struck with the remarkable sluggishness of this natural process.  Although replacing our blood with billions of nanorobotic devices will require a lengthy process of development, refinement, and regulatory approval, we already have the conceptual knowledge to engineer substantial improvements over the remarkable but very inefficient methods used in our biological bodies…

Have a Heart, or Not

The next organ on my hit list is the heart.  It’s a remarkable machine, but it has a number of severe problems.  It is subject to a myriad of failure modes, and represents a fundamental weakness in our potential longevity.  The heart usually breaks down long before the rest of the body, and often very prematurely.

Although artificial hearts are beginning to work, a more effective approach will be to get rid of the heart altogether.  Designs include nanorobotic blood cell replacements that provide their own mobility.  If the blood system moves with its own movement, the engineering issues of the extreme pressures required for centralized pumping can be eliminated.  As we perfect the means of transferring nanobots to and from the blood supply, we can also continuously replace the nanobots comprising our blood supply…

So What’s Left?

Let’s consider where we are.  We’ve eliminated the heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all the hormone-producing organs, kidneys, bladder, liver, lower esophagus, stomach, small intestines, large intestines, and bowel.  What we have left at this point is the skeleton, skin, sex organs, mouth and upper esophagus, and brain…

Redesigning the Human Brain

The process of reverse engineering and redesign will also encompass the most important system in our bodies: the brain.  The brain is at least as complex as all the other organs put together, with approximately half of our genetic code devoted to its design.  It is a misconception to regard the brain as a single organ.  It is actually an intricate collection of information-processing organs, interconnected in an elaborate hierarchy, as is the accident of our evolutionary history.

The process of understanding the principles of operation of the human brain is already well under way.  The underlying technologies of brain scanning and neuron modeling are scaling up exponentially, as is our overall knowledge of human brain function.  We already have detailed mathematical models of a couple dozen of the several hundred regions that comprise the human brain.

The age of neural implants is also well under way.  We have brain implants based on “neuromorphic” modeling (i.e., reverse-engineering of the human brain and nervous system) for a rapidly growing list of brain regions.  A friend of mine who became deaf while an adult can now engage in telephone conversations again because of his cochlear implant, a device that interfaces directly with the auditory nervous system.  He plans to replace it with a new model with a thousand levels of frequency discrimination, which will enable him to hear music once again.  He laments that he has had the same melodies playing in his head for the past 15 years and is looking forward to hearing some new tunes.  A future generation of cochlear implants now on the drawing board will provide levels of frequency discrimination that go significantly beyond that of “normal” hearing…

And the essay continues.  It’s well worth reading in its entirety.  A short websearch finds a slightly longer version of the same essay online, on Kurzweil’s own website, along with a conceptual illustration by media artist and philosopher Natasha Vita-More.

Evaluating the vision: the questions

Three main questions arise in response to this vision of “Human Body Version 2.0”:

  1. Is the vision technologically feasible?
  2. Is the vision morally attractive?
  3. Within what timescales might the vision become feasible?

Progress: encouraging, but not rocket-paced

A recent article in the New Scientist, Medibots: The world’s smallest surgeons, takes up the theme of nanobots with medical usage, and reports on some specific progress:

It was the 1970s that saw the arrival of minimally invasive surgery – or keyhole surgery as it is also known. Instead of cutting open the body with large incisions, surgical tools are inserted through holes as small as 1 centimetre in diameter and controlled with external handles. Operations from stomach bypass to gall bladder removal are now done this way, reducing blood loss, pain and recovery time.

Combining keyhole surgery with the da Vinci system means the surgeon no longer handles the instruments directly, but via a computer console. This allows greater precision, as large hand gestures can be scaled down to small instrument movements, and any hand tremor is eliminated…

There are several ways that such robotic surgery may be further enhanced. Various articulated, snake-like tools are being developed to access hard-to-reach areas. One such device, the “i-Snake”, is controlled by a vision-tracking device worn over the surgeon’s eyes…

With further advances in miniaturisation, the opportunities grow for getting medical devices inside the body in novel ways. One miniature device that is already tried and tested is a camera in a capsule small enough to be swallowed…

The 20-millimetre-long HeartLander has front and rear foot-pads with suckers on the bottom, which allow it to inch along like a caterpillar. The surgeon watches the device with X-ray video or a magnetic tracker and controls it with a joystick. Alternatively, the device can navigate its own path to a spot chosen by the surgeon…

While the robot could in theory be used in other parts of the body, in its current incarnation it has to be introduced through a keyhole incision thanks to its size and because it trails wires to the external control box. Not so for smaller robots under wireless control.

One such device in development is 5 millimetres long and just 1 millimetre in diameter, with 16 vibrating legs. Early versions of the “ViRob” had on-board power, but the developers decided that made it too bulky. Now it is powered externally, by a nearby electromagnet whose field fluctuates about 100 times a second, causing the legs to flick back and forth. The legs on the left and right sides respond best to different frequencies, so the robot can be steered by adjusting the frequency…

While the ViRob can crawl through tubes or over surfaces, it cannot swim. For that, the Israeli team are designing another device, called SwiMicRob, which is slightly larger than ViRob at 10 millimetres long and 3 millimetres in diameter. Powered by an on-board motor, the device has two tails that twirl like bacteria’s flagella. SwiMicRob may one day be used inside fluid-filled spaces such as those within the spine, although it is at an earlier stage of development than ViRob.

Another group has managed to shrink a medibot significantly further – down to 0.9 millimetres by 0.3 millimetres – by stripping out all propulsion and steering mechanisms. It is pulled around by electromagnets outside the body. The device itself is a metal shell shaped like a finned American football and it has a spike on the end…

The Swiss team is also among several groups who are trying to develop medibots at a vastly smaller scale, just nanometres in size, but these are at a much earlier development stage. Shrinking to this scale brings a host of new challenges, and it is likely to be some time before these kinds of devices reach the clinic.

Brad Nelson, a roboticist at the Swiss Federal Institute of Technology (ETH) in Zurich, hopes that if millimetre-sized devices such as his ophthalmic robot prove their worth, they will attract more funding to kick-start nanometre-scale research. “If we can show small devices that do something useful, hopefully that will convince people that it’s not just science fiction.”

In summary: nanoscale medibots appear plausible, but there’s still a large amount of research and development required.

Kurzweil’s prediction on timescales

The book “The Scientific Conquest of Death”, containing Kurzweil’s essay, was published in 2004.  The online version is dated 2003.  In 2003, 2010 – the end of the decade – presumably looked a long way off.  In the essay, Kurzweil makes some predictions about the speed of progress towards Human Body Version 2.0:

By the end of this decade, computing will disappear as a separate technology that we need to carry with us.  We’ll routinely have high-resolution images encompassing the entire visual field written directly to our retinas from our eyeglasses and contact lenses (the Department of Defense is already using technology along these lines from Microvision, a company based in Bothell, Washington).  We’ll have very-high-speed wireless connection to the Internet at all times.  The electronics for all of this will be embedded in our clothing.  Circa 2010, these very personal computers will enable us to meet with each other in full-immersion, visual-auditory, virtual-reality environments as well as augment our vision with location- and time-specific information at all times.

Progress with miniaturisation of computers – and the adoption of smartphones – has been impressive since 2003.  However, it’s now clear that some of Kurzweil’s predictions were over-optimistic.  If his predictions for 2010 were over-optimistic, what should we conclude about his predictions for 2030?

The conflicting pace of technological progress

My own view of predictions is that they are far from “black and white”.  I’ve made my own share of predictions over the years, about the rate of progress with smartphone technologies.  I’ve also reflected on the fact that it’s difficult to draw conclusions about the rate of change.

For example, from my “Insight” essay from November 2006, “The conflicting pace of mobile technology”:

What’s the rate of improvement of mobile phones?  Disconcertingly, the answer is both “surprisingly fast” and “surprisingly slow”…

A good starting point is the comment made by Monitor’s Bhaskar Chakravorti in his book “The slow pace of fast change”, when he playfully dubbed a certain phenomenon as “Demi Moore’s Law”.  The phenomenon is that technology’s impact in an inter-connected marketplace often proceeds at only half the pace predicted by Moore’s Law.  The reasons for this slower-than-expected impact are well worth pondering:

  • New applications and services in a networked marketplace depend on simultaneous changes being coordinated at several different points in the value chain
  • Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make much less sense when viewed individually

Sometimes this is called “the prisoner’s dilemma”.  It’s also known as “the chicken and egg problem”.
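As a rough arithmetic illustration of “Demi Moore’s Law” (my own sketch, not taken from Chakravorti’s book): if raw capability doubles every two years, but market impact doubles at only half that pace – every four years – the gap compounds dramatically over a decade.

```python
# Illustrative arithmetic: Moore's Law doubling (~every 2 years)
# versus "Demi Moore's Law" impact at half that pace (~every 4 years).

def doublings(years, doubling_period):
    """Growth factor after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

decade = 10
tech_growth = doublings(decade, 2)    # ~32x raw capability
impact_growth = doublings(decade, 4)  # ~5.7x realised market impact

print(f"After {decade} years: capability x{tech_growth:.0f}, impact x{impact_growth:.1f}")
```

In other words, after ten years the underlying technology may have improved thirty-fold while its realised impact in the marketplace has grown less than six-fold – which is exactly the kind of divergence that makes timescale predictions so hazardous.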

The most interesting (and the most valuable) smartphone services will require widespread joint action within the mobile industry, including maintaining openness to new ideas, new methods, and new companies.  It also requires a spirit of “cooperate before competing”.  If adjacent players in the still-formative smartphone value chain focus on fighting each other for dominance in our current small pie, it will prevent the stage-by-stage emergence of killer new services that will make the pie much larger for everyone’s benefit.

Thankfully, although the network effects of a complex marketplace can slow down the emergence of new innovations while that market is still being formed, they can have the opposite effect once all the pieces of the smartphone open virtuous cycle have learned to collaborate with maximum effectiveness.  When that happens, the pace of mobile change can even exceed that predicted by Moore’s Law…

And from another essay in the same series, “A celebration of incremental improvement”, from February 2006:

We all know that it’s a perilous task to predict the future of technology.  The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars.  These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary.  Indeed, history is littered with curious and amusing examples of flawed predictions of the future.  You may well wonder, what’s different about smartphones, and about all the predictions made about them at 3GSM?

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects.  Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on.  Technology is not enough.  Especially for changes that are complex and demanding, no fewer than six other criteria should be satisfied as well:

  • The technological development has to satisfy a strong human need
  • The development has to be possible at a sufficiently attractive price to individual end users
  • The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle
  • There must be a clear evolutionary path whereby the eventual version of the technology can be attained through a series of incremental steps that are, individually, easier to achieve
  • When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be both open (to accept new ideas) and commercially attractive (to encourage the generation of new ideas, and, even more important, to encourage companies to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation)…

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications.  For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son.  The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century.  The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world!  As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As I said, the pace of technological development is far from being black-and-white.  Sometimes it proceeds slower than you expect, and at other times, it can proceed much quicker.

The missing ingredient

With the advantage of even more hindsight, there’s one more element that should be elevated, since it frequently makes the difference between new products arriving sooner and them arriving later: the degree of practical focus and effective priority placed by the relevant ecosystem on creating these products.  For medibots and other lifespan-enhancing technologies to move from science fiction to science fact will probably require changes in both public opinion and public action.

It’s All About Nanobots

In a famous scene from the movie, The Graduate, Benjamin’s mentor gives him career advice in a single word: “plastics.”  Today, that word might be “software,” or “biotechnology,” but in another couple of decades, the word is likely to be “nanobots.”  Nanobots—blood-cell-sized robots—will provide the means to radically redesign our digestive systems, and, incidentally, just about everything else.

In an intermediate phase, nanobots in the digestive tract and bloodstream will intelligently extract the precise nutrients we need, call for needed additional nutrients and supplements through our personal wireless local area network, and send the rest of the food we eat on its way to be passed through for elimination.
