dw2

10 May 2015

When the future of smartphones was in doubt

It’s hard to believe it now. But ten years ago, the future of smartphones was in doubt.

At that time, I wrote these words:

Smartphones in 2005 are roughly where the Internet was in 1995. In 1995, there were, worldwide, around 20-40 million users of the Internet. That’s broadly the same number of users of smartphones there are in the world today. In 1995, people were debating the real value of Internet usage. Was it simply an indulgent plaything for highly technical users, or would it have lasting wider attraction? In 2005, there’s a similar debate about smartphones. Will smartphones remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

That was the opening paragraph in an essay which the Internet site Archive.org has preserved. The original location for the essay, the Symbian corporate website, has long since been retired, having been absorbed inside Nokia infrastructure in 2009 (and, perhaps, being absorbed in turn into Microsoft in 2014).

Symbian Way Back

The entire essay can be found here, warts and all. That essay was the first in a monthly series known as “David Wood Insight” which extended from September 2005 to September 2006. (The entire set still exists on Archive.org – and, for convenience, I’ve made a copy here.)

Ten years later, it seems to me that wearable computers in 2015 are roughly where smartphones were in 2005 (and where the Internet was in 1995). There’s considerable scepticism about their future. Will they remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

Some commentators look at today’s wearable devices, such as Google Glass and Apple Watch, and express disappointment. There are many ways these devices can be criticised. They lack style. They lack “must have” functionality. Their usability leaves a lot to be desired. Battery life is too short. And so on.

But, like smartphones before them – and like the world-wide web ten years earlier – they’re going to get much, much better as time passes. Positive feedback cycles will ensure that happens.

I share the view of Augmented Reality analyst Ori Inbar, who wrote the following a few months ago in an updated version of his “Smart Glasses Market Report”:

When contemplating the evolution of technology in the context of the evolution of humanity, augmented reality (AR) is inevitable.

Consider the innovation cycles of computing from mainframes, to personal computers, to mobile computing, to wearables: It was driven by our need for computers to get smaller, better, and cheaper. Wearables are exactly that – mini computers on track to shrink and disappear on our bodies. In addition, there is a fundamental human desire for larger and sharper displays – we want to see and feel the world at a deeper level. These two trends will be resolved with Augmented Reality; AR extends our natural senses and will become humans’ primary interface for interaction with the world.

If the adoption curve of mobile phones is to repeat itself with glasses – within 10 years, over 1 billion humans will be “wearing.”
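
As a rough sanity check on that last claim, here is a minimal toy model – my own illustration, not something taken from Ori’s report. It starts smart glasses at roughly the smartphone installed base of 2005 (a few tens of millions of users) and assumes smartphone-like logistic growth towards a ceiling of about four billion potential users; the starting base, the ceiling, and the growth rate are all made-up parameters, tuned only to mimic the smartphone decade.

    # Toy model of the "adoption curve repeats itself" claim. All three
    # parameters are illustrative assumptions, tuned to roughly reproduce the
    # smartphone decade (tens of millions of users growing to ~2 billion).
    import math

    def installed_base(t_years, start=30e6, ceiling=4e9, growth_rate=0.5):
        """Users after t_years of logistic growth from start towards ceiling."""
        offset = math.log(ceiling / start - 1)  # chosen so installed_base(0) == start
        return ceiling / (1 + math.exp(offset - growth_rate * t_years))

    for year in (0, 5, 10):
        print(f"year {year:2d}: ~{installed_base(year) / 1e9:.2f} billion users")

Under those assumptions, the projected base is roughly 0.03 billion at the start, 0.3 billion after five years, and 2 billion after ten years – comfortably past the one billion mark, which is all that the “adoption curve repeats itself” argument requires.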

The report is packed with insight – I fully recommend it. For example, here’s Ori’s depiction of four waves of adoption of smart glasses:

Smart Glasses Adoption

(For more info about Augmented Reality and smart glasses, readers may be interested in the forthcoming Augmented World Expo, held 8-10 June at the Santa Clara Convention Centre in Silicon Valley.)

What about ten more years into the future?

All being well, here’s what I might be writing some time around 2025, foreseeing the growing adoption of yet another wave of computers.

If 1995-2005 saw the growth of desktop and laptop computers and the world wide web, 2005-2015 saw the growing ubiquity of smartphones, and 2015-2025 will see the triumph of wearable computers and augmented reality, then 2025-2035 is likely to see the increasingly widespread usage of nanobots (nano-computers) that operate inside our bodies.

The focus of computer innovation and usage will move from portables to mobiles to wearables to insideables.

And the killer app of these embedded nanobots will be internal human enhancement:

  • Biological rejuvenation
  • Body and brain repair
  • Body and brain augmentation.

By 2025, these applications will likely be in an early, rudimentary state. They’ll be buggy, irritating, and probably expensive. With some justification, critics will be asking: Will nanobots remain the preserve of a minority of users, or will they demonstrate mass-market appeal?


20 February 2013

The world’s most eminent sociologist highlights the technological singularity

It’s not every day that the world’s most eminent sociologist reveals himself as having an intense interest in the Technological Singularity, and urges that “Everyone should read the books of Ray Kurzweil”. That’s what happened this evening.

The speaker in question was Lord Anthony Giddens, one of whose many claims to fame is his description as “Tony Blair’s guru”.

His biography states that, “According to Google Scholar, he is the most widely cited sociologist in the world today.”

In support of that claim, a 2009 article in the Times Higher Education supplement notes the following:

Giddens trumps Marx…

A list published today by Times Higher Education reveals the most-cited academic authors of books in the humanities…

As one of the world’s pre-eminent sociologists, Anthony Giddens, the Labour peer and former director of the London School of Economics, will be used to academic accolades.

But even he may be pleased to hear that his books are cited more often than those of iconic thinkers such as Sigmund Freud and Karl Marx.

Lord Giddens, now emeritus professor at LSE and a life fellow at King’s College, Cambridge, is the fifth most-referenced author of books in the humanities, according to the list produced by scientific data analysts Thomson Reuters.

The only living scholar ranked higher is Albert Bandura, the Canadian psychologist and pioneer of social learning theory at Stanford University…

Freud enters the list in 11th place. The American linguist and philosopher Noam Chomsky, who is based at the Massachusetts Institute of Technology and whose political books have a broader readership than some of his peers in the list, is 15th…

Lord Giddens is now 75 years old. Earlier this evening, I saw for myself evidence of his remarkable calibre. He gave an hour-long lecture in front of a packed audience at the London School of Economics, without any notes or slides, and without any hesitation, deviation, or verbal infelicity. Throughout, his remarks bristled with compelling ideas. He was equally competent – and equally fluent – when it came to the question-and-answer portion of the event.

LSE Events

The lecture was entitled “Off the edge of history: the world in the 21st century”. From its description on the LSE website, I had already identified it as relevant to many of the themes that I seek to have discussed in the series of London Futurists meetups that I chair:

The risks we face, and the opportunities we have, in the 21st century are in many respects quite different from those experienced in earlier periods of history. How should we analyse and respond to such a world? What is a rational balance of optimism and pessimism? How can we plan for a future that seems to elude our grasp and in some ways is imponderable?

As the lecture proceeded, I was very pleasantly impressed by the sequence of ideas. I append here a lightly edited copy of the verbatim notes I took on my Psion Series 5mx, supplemented by a few additions from the #LSEGiddens tweet stream. Added afterwards: the LSE has made a podcast of the talk available.

My rough notes from the talk follow… (text in italics are my parenthetical comments)

This large lecture room is completely full, twenty minutes before the lecture is due to start. I’m glad I arrived early!

Today’s topic is work in progress – he’s writing a book on the same topic, “Off the edge of history”.

  • Note this is a very different thesis from “the end of history”.

His starting point is the subject of geology – a long way from sociology. He’s been working on climate change for the last seven years. It’s the first time he has worked so closely with scientists.

Geologists tend to call the present age “the Holocene age” – the last 12,000 years. But the atmospheric chemist Paul Crutzen recommended that we should use a different term for the last 200 years or so – we’re now in the Anthropocene age:

  • In this period, human activity strongly influences nature and the environment
  • This re-orients and restructures the world of geology
  • A great deal of what used to be natural, is natural no longer
  • Human beings are invading nature, in a way that has no precedent
  • Even some apparently natural catastrophes, like tsunamis and volcanoes, might be linked to impacts from humans.

We have continuities from previous history (of course), but so many things are different nowadays. One example is the impacts of new forms of biological threat. Disease organisms have skipped from animals to human beings. New disease organisms are being synthesised.

There are threats facing us which are in no way extensions of previous threats.

For example, what is the Internet doing to the world? Is it a gigantic new mind? Are you using the mobile phone, or is the mobile phone using you? There’s no parallel from previous periods. Globally connected electronic communications are fundamentally different from what went before.

When you are dealing with risks you’ve never experienced before, you can’t measure them. You’ll only know for sure when it’s too late. We’re on the edge of history because we are dealing with risks we have never faced before.

Just as we are invading nature, we are invading human nature in a way that’s unprecedented.

Do you know about the Singularity? (A smattering of people in the audience raise their hands.) It’s mind-blowing. You should find out about it:

  • It’s based on a mathematical concept
  • It’s accelerating processes of growth, rapidly disappearing to a far off point very different from today.

Everyone should read the books of Ray Kurzweil – who has recently become an Engineering Director at Google.

Kurzweil’s book makes it clear that:

  • Within our lifetimes, human beings will no longer be human beings
  • There are multiple accelerating rates of change in several different disciplines
  • The three main disciplines contributing to the singularity are nanotech, AI, and biotech
  • All are transforming our understanding of the human body and, more importantly, the human mind
  • This is described by the “Law of accelerating returns”
  • Progress is not just linear but geometrical.

This book opens our minds to multiple possibilities of what it means to be human, as technology penetrates us.

Nanotech is like humans playing God:

  • It’s a level below DNA
  • We can use it to rebuild many parts of the human body, and other artefacts in the world.

Kurzweil states that human beings will develop intelligence which is 100x higher than at present:

  • Because of merging of human bodies with computers
  • Because of the impact of nanotech.

Kurzweil gives this advice: if you are relatively young: live long, in order to live forever:

  • Immortality is no longer a religious concept, it’s now a tangible prospect
  • It could happen in the next 20-40 years.

This is a fantastic expansion of what it means to be human. Importantly, it’s a spread of opportunities and risk.

These were religious notions before. Now we have the real possibility of apocalypse – we’ve had it since the 1950s, when the first thermonuclear weapons were invented. The possibility of immortality has become real too.

We don’t know how to chart these possibilities. None of us know how to fill in that gap.

What science fiction writers were writing 20 years ago is now in the newspapers every day. Reading from the Guardian from a couple of days ago:

Paralysed people could get movement back through thought control

Brain implant could allow people to ‘feel’ the presence of infrared light and one day be used to move artificial limbs

Scientists have moved closer to allowing paralysed people to control artificial limbs with their thoughts following a breakthrough in technology…

…part of a series of sessions on advances in brain-machine interfaces, at which other scientists presented a bionic hand that could connect directly to the nerves in a person’s arm and provide sensory feedback of what they were holding.

Until now, neurological prosthetics have largely been demonstrated as a way to restore a loss of function. Last year, a 58-year-old woman who had become paralysed after a stroke demonstrated that she could use a robotic arm to bring a cup of coffee to her mouth and take a sip, just by thinking about it…

In the future…  it might be possible to use prosthetic devices to restore vision – for example, if a person’s visual cortex had been damaged – by training a different part of the brain to process the information.

Or you could even augment normal brain function in non-invasive ways to deliver the information.

We could learn to detect other sorts of signals that we normally don’t see or experience; the perceptual range could increase.

These things are real; these things are happening. There is a kind of geometric advance.

The literature of social scientists has a big division here, between doomsday thinkers and optimists, with respected thinkers in both camps.

Sir Martin Rees is an example of the first category. He wrote a book called “Our Final Century”:

  • It examines forms of risk that could destroy our society
  • Climate change is a huge existential risk – most people aren’t aware of it
  • Nanotech is another existential risk – grey goo scenario
  • We also have lots of weaponry: drones circulating above the world even as we speak
  • Most previous civilisations have ended in disaster – they subverted themselves
  • For the first time, we have a civilisation on a global scale
  • It could well be our final century.

Optimists include Matt Ridley, a businessman turned scientist, and author of the book “The rational optimist”:

  • Over the course of human civilisation there is progress – including progress in culture, and medical advances.

This is a big division. How do we sort this out? His view: it’s not possible to decide. We need to recognise that we live in a “high opportunity, high risk society”:

  • The level of opportunity and level of risk are both much higher than before
  • But risk and opportunity always intertwine
  • “In every risk there’s an opportunity…” and vice versa
  • We must be aware of the twists and tangles of risk and opportunity – their interpenetration.

Studying this area has led him to change some of his views from before:

  • He now sees the goal of sustainability as a harder thing than before
  • Living within our limits makes sense, but we no longer know what our limits are
  • We have to respect limits, but also recognise that limits can be changed.

For example, could we regard a world population of 9 billion people as an opportunity, rather than just a risk?

  • It would lead us to put lots more focus on food innovation, blue sky tech for agriculture, social reform, etc – all good things.

A few points to help us sort things out:

  1. One must never avoid risk – we live in a world subject to extreme systemic risk; we mustn’t live in denial of risk in our personal lives (like denying the risks of smoking or riding motorcycles) or at a civilisational level
  2. We have to think about the future in a very different way, because the future has become opaque to us; the Enlightenment idea was that we would march in and make sense of history (Marx had similar thoughts), but it turns out that the future is actually opaque – for our personal lives as well as for society (he wonders whether the EU will still exist by the time he finishes his book on the future of the EU!)
  3. We’ll have to learn to backcast rather than forecast – to borrow an idea from the study of climate change. We have to think ahead, and then think back.

This project is the grand task of social sciences in the 21st century.

One more example: the possibility of re-shoring of jobs in the US and EU:

  • 3D printing is an unbelievable technological invention
  • 3D printers can already print shoes
  • A printer in an MIT lab can print whole systems – eg in due course a plane which will fly directly out of the computer
  • This will likely produce a revolution in manufacturing – many, many implications.

Final rhetorical question: As we confront this world, should we be pessimists or optimists? This is the same question he used to consider, at the end of the talks he used to give on climate change.

His answer: we should bracket out that opposition; it’s much more important to be rational than either pessimist or optimist:

  • Compare the case of someone with very serious cancer – they need more than wishful thinking; they need a rational underpinning for their optimism and/or pessimism.

Resounding applause from the audience. Then the questions and answers commence.

Q: Are today’s governance structures, at local and national levels, fit to deal with these issues?

A: No. For example, the European Union has proved not to be the vanguard of global governance that we hoped it would be. Climate change is another clear example: twenty years of UN meetings with no useful outcome whatsoever.

Q: Are our human cognitive powers capable of dealing with these problems? Is there a role for technology to assist our cognitive powers?

A: Our human powers are facing a pretty difficult challenge. It’s human nature to put off what we don’t have to do today – like 16-year-olds taking up smoking who can’t really see themselves being 40. Maybe a supermind might be more effective.

Q: Although he has given examples where current governance models are failing, are there any bright spots of hope for governance? (The questioner in this case was me.)

A: There are some hopeful signs for economic governance. Surely bankers will not get away with what they’ve done. Movement to address tax havens (“onslaught”) – bring the money back as well as bringing the jobs back. Will require global co-operation. Nuclear proliferation (Iran, Israel) is as dangerous as climate change. The international community has done quite well with non-proliferation, but it only takes one nuclear war for things to go terribly wrong.

Q: What practical advice would he give to the Prime Minister (or to Ed Miliband)?

A: He supports Ed Miliband trying to restructure capitalism; there are similar moves happening in the US too. However, with global issues like these, any individual prime minister is limited in his influence. For better or for worse, Ray Kurzweil has more influence than any politician!

(Which is a remarkable thing to say, for someone who used to work so closely with Prime Minister Tony Blair…)

20 December 2012

An absorbing, challenging vision of near-future struggles

Technology can cause carnage, and in the wake of the carnage, outrage.

Take the sickening example of the shooting dead of 20 young children and six adults at Sandy Hook Elementary School in Newtown, Connecticut. After that fearful carnage, it’s no surprise that there are insistent calls to restrict the availability of powerful automatic guns.

There are similar examples of carnage and outrage in the new science fiction novel “Nexus: mankind gets an upgrade”, by the noted futurist and writer Ramez Naam.

I met Ramez at the WorldFuture 2012 event in Toronto earlier this year, where he gave a presentation on “Can Innovation Save the Planet?” which I rated as one of the very best sessions in the midst of a very good conference. I’ve been familiar with the high calibre of his thinking for some time, so when I heard that his new book Nexus was available for download to my Kindle – conveniently just ahead of me taking a twelve-hour flight – I jumped at the chance to purchase a copy. It turned out to be a great impulse purchase decision. I finished the book just as the airplane wheels touched down.

The type of technology that is linked to carnage and outrage in Nexus can be guessed from the image on the front cover of the book – smart drugs. Of course, drugs, like guns, are already the source of huge public debate in terms of whether to restrict access. Events described in Nexus make it clear why certain drugs become even more controversial, a few short decades ahead, in this fictional but all-too-credible vision of the near future.

Back in the real world, public interest in smart drugs is already accelerating:

  • I hear more and more discussions in which people talk about taking nootropics of one sort or another – to help them “pull an all-nighter”, or to be especially sharp and mentally focused for an important interview. These comments are often followed by reflections on whether these drugs might convey an unfair advantage.
  • The 2011 film Limitless – which I reviewed in passing here – helped to raise greater public awareness of the potential of this technology.
  • Audience attendance (and the subsequent online debate) at the recent London Futurist event “Hacking our wetware, with Andrew Vladimirov”, convinced me that public appetite for information on smart drugs is about to greatly intensify.

And as discussion of the technology of smart drugs increases, so (quite rightly) does discussion of the potential downsides and drawbacks of that technology.

Nexus is likely to ratchet this interest even higher. The technology in the novel doesn’t just add a few points of IQ, on a transitory basis, to the people who happen to take it. It goes much further than that. It has the potential to radically upgrade humans – with as big a jump in evolution (in the course of a few decades) as the transition between apes and humans. And not everyone likes that potential, for reasons that the book gradually makes credible, through sympathetic portrayals of various kinds of carnage.

Nexus puts the ideas of transhumanism and posthumanism clearly on the map. And lots more too, which I shouldn’t say much about, to avoid giving away the plot and spoiling the enjoyment of new readers.

But I will say this:

  • My own background as a software engineer (a profession I share with Ramez Naam) made me especially attuned to the descriptions of the merging of computing science ideas with those of smart drugs; other software engineers are likely to enjoy these speculations too
  • My strong interest in the battle of ideas about progress made me especially interested in inner turmoil (and changes of mind) of various key characters, as they weighed up the upsides and downsides of making new technology more widely available
  • My sympathy for the necessity of an inner path to enlightenment, to happen in parallel with increasingly smart deployment of increasingly powerful technology, meant that I was intrigued by some of the scenes in the book involving meditative practices
  • My status as an aspiring author myself – I’m now about one third of the way through the book I’m writing – meant that I took inspiration from seeing how a good author can integrate important ideas about technology, philosophy, societal conflict, and mental enlightenment, in a cracking good read.

Ramez is to be congratulated on writing a book that should have wide appeal, and which will raise attention to some very important questions – ahead of the time when rapid improvements of technology might mean that we have missed our small window of opportunity to steer these developments in ways that augment, rather than diminish, our collective humanity.

Anyone who thinks of themselves as a futurist should do themselves a favour and read this book, in order to participate more fully in the discussions which it is bound to catalyse.

Footnote: There’s a lot of strong language in the book, and “scenes of an adult nature”. Be warned. Some of the action scenes struck me as implausible – but hey, that’s the same for James Bond and Jason Bourne, so that’s no showstopper. Which prompts the question – could Nexus be turned into a film? I hope so!

2 November 2012

The future of human enhancement

Is it ethical to put money and resources into trying to develop technological enhancements for human capabilities, when there are so many alternative well-tested mechanisms available to address pressing problems such as social injustice, poverty, poor sanitation, and endemic disease? Is that a failure of priority? Why make a strenuous effort in the hope of allowing an elite few individuals to become “better than well”, courtesy of new technology, when so many people are currently so “less than well”?

These were questions raised by Professor Anne Kerr at a public debate earlier this week at the London School of Economics: The Ethics of Human Enhancement.

The event was described as follows on the LSE website:

This dialogue will consider how issues related to human enhancement fit into the bigger picture of humanity’s future, including the risks and opportunities that will be created by future technological advances. It will question the individualistic logic of human enhancement and consider the social conditions and consequences of enhancement technologies, both real and imagined.

From the stage, Professor Kerr made a number of criticisms of “individualistic logic” (to use the same phrase as in the description of the event). Any human enhancements provided by technology, she suggested, would likely only benefit a minority of individuals, potentially making existing social inequalities even worse than at present.

She had a lot of worries about technology amplifying existing human flaws:

  • What might happen if various clever people could take some pill to make themselves even cleverer? It’s well known that clever people often make poor decisions. Their cleverness allows them to construct beguiling sophistry to justify the actions they already want to take. More cleverness could mean even more beguiling sophistry.
  • Or imagine if rapacious bankers could take drugs to boost their workplace stamina and self-serving brainpower – how much more effective they would become at siphoning off public money to their own pockets!
  • Might these risks be addressed by public policy makers, in a way that would allow benefits of new technology, without falling foul of the potential downsides? Again, Professor Kerr was doubtful. In the real world, she said, policy makers cannot operate at that level. They are constrained by shorter-term thinking.

For such reasons, Professor Kerr was opposed to these kinds of technology-driven human enhancements.

When the time for audience Q&A arrived, I felt bound to ask from the floor:

Professor Kerr, would you be in favour of the following examples of human enhancement, assuming they worked?

  1. An enhancement that made bankers more socially attuned, with more empathy, and more likely to use their personal wealth in support of philanthropic projects?
  2. An enhancement that made policy makers less parochial, less politically driven, and more able to consider longer-term implications in an objective manner?
  3. And an enhancement that made clever people less likely to be blind to their own personal cognitive biases, and more likely to genuinely consider counters to their views?

In short, would you support enhancements that would make people wiser as well as smarter, and kinder as well as stronger?

The answer came quickly:

No. They would not work. And there are other means of achieving the same effects, including progress of democratisation and education.

I countered: These other methods don’t seem to be working well enough. If I had thought more quickly, I would have raised examples such as society’s collective failure to address the risk of runaway climate change.

Groundwork for this discussion had already been well laid by the other main speaker at the event, Professor Nick Bostrom. You can hear what Professor Bostrom had to say – as well as the full content of the debate – in an audio recording of the event that is available here.

(Small print: I’ve not yet taken the time to review the contents of this recording. My description in this blogpost of some of the verbal exchanges inevitably paraphrases and extrapolates what was actually said. I apologise in advance for any mis-representation, but I believe my summary to be faithful to the spirit of the discussion, if not to the actual words used.)

Professor Bostrom started the debate by mentioning that the question of human enhancement is a big subject. It can be approached from a shorter-term policy perspective: what rules should governments set, to constrain the development and application of technological enhancements, such as genetic engineering, neuro-engineering, smart drugs, synthetic biology, nanotechnology, and artificial general intelligence? It can also be approached from the angle of envisioning larger human potential, that would enable the best possible future for human civilisation. Sadly, much of the discussion at the LSE got bogged down in the shorter-term question, and lost sight of the grander accomplishments that human enhancements could bring.

Professor Bostrom had an explanation for this lack of sustained interest in these larger possibilities: the technologies for human enhancement that are currently available do not work that well:

  • Some drugs give cyclists or sprinters an incremental advantage over their competitors, but the people who take these drugs still need to train exceptionally hard, to reach the pinnacle of their performance
  • Other drugs seem to allow students to concentrate better over periods of time, but their effects aren’t particularly outstanding, and it’s possible that methods such as good diet, adequate rest, and meditation, have results that are at least as significant
  • Genetic selection can reduce the risk of implanted embryos developing various diseases that have strong genetic links, but so far, there is no clear evidence that genetic selection can result in babies with abilities higher than the general human range.

This lack of evidence of strong tangible results is one reason why Professor Kerr was able to reply so quickly to my suggestion about the three kinds of technological enhancements, saying these enhancements would not work.

However, I would still like to press the question: what if they did work? Would we want to encourage them in that case?

A recent article in the Philosophy Now journal takes the argument one step further. The article was co-authored by Professors Julian Savulescu and Ingmar Persson, and draws material from their book “Unfit for the Future: The Need for Moral Enhancement”.

To quote from the Philosophy Now article:

For the vast majority of our 150,000 years or so on the planet, we lived in small, close-knit groups, working hard with primitive tools to scratch sufficient food and shelter from the land. Sometimes we competed with other small groups for limited resources. Thanks to evolution, we are supremely well adapted to that world, not only physically, but psychologically, socially and through our moral dispositions.

But this is no longer the world in which we live. The rapid advances of science and technology have radically altered our circumstances over just a few centuries. The population has increased a thousand times since the agricultural revolution eight thousand years ago. Human societies consist of millions of people. Where our ancestors’ tools shaped the few acres on which they lived, the technologies we use today have effects across the world, and across time, with the hangovers of climate change and nuclear disaster stretching far into the future. The pace of scientific change is exponential. But has our moral psychology kept up?…

Our moral shortcomings are preventing our political institutions from acting effectively. Enhancing our moral motivation would enable us to act better for distant people, future generations, and non-human animals. One method to achieve this enhancement is already practised in all societies: moral education. Al Gore, Friends of the Earth and Oxfam have already had success with campaigns vividly representing the problems our selfish actions are creating for others – others around the world and in the future. But there is another possibility emerging. Our knowledge of human biology – in particular of genetics and neurobiology – is beginning to enable us to directly affect the biological or physiological bases of human motivation, either through drugs, or through genetic selection or engineering, or by using external devices that affect the brain or the learning process. We could use these techniques to overcome the moral and psychological shortcomings that imperil the human species.

We are at the early stages of such research, but there are few cogent philosophical or moral objections to the use of specifically biomedical moral enhancement – or moral bioenhancement. In fact, the risks we face are so serious that it is imperative we explore every possibility of developing moral bioenhancement technologies – not to replace traditional moral education, but to complement it. We simply can’t afford to miss opportunities…

In short, the argument of Professors Savulescu and Persson is not just that we should allow the development of technology that can enhance human reasoning and moral awareness, but that we must strongly encourage it. Failure to do so would be to commit a grave error of omission.

These arguments about moral imperative – what technologies should we allow to be developed, or indeed encourage to be developed – are in turn strongly influenced by our beliefs about what technologies are possible. It’s clear to me that many people in positions of authority in society – including academics as well as politicians – are woefully unaware about realistic technology possibilities. People are familiar with various ideas as a result of science fiction novels and movies, but it’s a different matter to know the division between “this is an interesting work of fiction” and “this is a credible future that might arise within the next generation”.

What’s more, when it comes to people forecasting the likely progress of technological possibilities, I see a lot of evidence in favour of the observation made by Roy Amara, long-time president of the Institute for the Future:

We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.

What about the technologies mentioned by Professors Savulescu and Persson? What impact will be possible from smart drugs, genetic selection and engineering, and the use of external devices that affect the brain or the learning process? In the short term, probably less than many of us hope; in the longer term, probably more than most of us expect.
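
To see why both halves of Amara’s observation can be true at once, here is a purely illustrative comparison – the numbers are invented, and are not measurements of smart drugs, genetic selection, or anything else. It simply sets a bold straight-line forecast against steadily compounding progress (capability assumed to double every two years), and checks which curve is ahead at different points.

    # Invented numbers illustrating Amara's observation: a hyped straight-line
    # forecast versus steadily compounding progress. Neither curve is based on
    # real data; only the shape of the comparison matters.

    def hyped_forecast(years):
        """Linear extrapolation of early excitement (arbitrary units)."""
        return 1 + 3 * years

    def compounding_reality(years):
        """Exponential trend: capability doubling every two years (arbitrary units)."""
        return 2 ** (years / 2)

    for years in (2, 6, 12, 20):
        forecast = hyped_forecast(years)
        actual = compounding_reality(years)
        verdict = "forecast too high" if forecast > actual else "forecast too low"
        print(f"after {years:2d} years: forecast {forecast:6.1f}, reality {actual:7.1f}  ({verdict})")

The crossover point is entirely an artefact of the made-up parameters; the point of the sketch is only the shape – early on, the straight line runs well ahead of the compounding curve, and a couple of decades later it falls hopelessly behind.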

In this context, what is the “longer term”? That’s the harder question!

But the quest to address this kind of question, and then to share the answers widely, is the reason I have been keen to support the growth of the London Futurist meetup, by organising a series of discussion meetings with well-informed futurist speakers. Happily, membership has been on the up-and-up, reaching nearly 900 by the end of October.

The London Futurist event happening this weekend – on the afternoon of Saturday 3rd November – picks up the theme of enhancing our mental abilities. The title is “Hacking our wetware: smart drugs and beyond – with Andrew Vladimirov”:

What are the most promising methods to enhance human mental and intellectual abilities significantly beyond the so-called physiological norm? Which specific brain mechanisms should be targeted, and how?  Which aspects of wetware hacking are likely to grow in prominence in the not-too-distant future?

By reviewing a variety of fascinating experimental findings, this talk will explore:

  • various pharmacological methods, taking into account fundamental differences in Eastern and Western approaches to the development and use of nootropics
  • the potential of non-invasive neuro-stimulation using CES (Cranial Electrotherapy Stimulation) and TMS (Transcranial Magnetic Stimulation)
  • data suggesting the possibility to “awaken” savant-like skills in healthy humans without paying the price of autism
  • apparent means to stimulate seemingly paranormal abilities and transcendental experiences
  • potential genetic engineering perspectives, aiming towards human cognition enhancement.

The number of advance positive RSVPs for this talk, as recorded on the London Futurist meetup site, has reached 129 at the time of writing – which is already a record.

(From my observations, I have developed the rule of thumb that the number of people who actually turn up for a meeting is something like 60%-75% of the number of positive RSVPs.)

I’ll finish by returning to the question posed at the beginning of my posting:

  • Are these technological enhancements likely to increase human inequality (by benefiting only a small number of users),
  • Or are they instead likely to drop in price and grow in availability (the same as happened, for example, with smartphones, Internet access, and many other items of technology)?

My answer – which I believe is shared by Professor Bostrom – is that things could still go either way. That’s why we need to think hard about their development and application, ahead of time. That way, we’ll become better informed to help influence the outcome.

29 July 2011

Towards a mind-stretching weekend in New York

Filed under: AGI, futurist, leadership, nanotechnology, robots, Singularity — David Wood @ 9:19 pm

I’ve attended the annual Singularity Summit twice before – in 2008 and in 2009.  I’ve just registered to attend the 2011 event, which is taking place in New York on 15th-16th October.  Here’s why.

On both previous occasions, the summits featured presentations that gave me a great deal to think about, on arguably some of the most significant topics in human history.  These topics include the potential emergence, within the lifetimes of many people alive today, of:

  • Artificial intelligence which far exceeds the capabilities of even the smartest group of humans
  • Robots which far exceed the dexterity, balance, speed, strength, and sensory powers of even the best human athletes, sportspeople, or soldiers
  • Super-small nanobots which can enter the human body and effect far more thorough repairs and enhancements – to both body and mind – than even the best current medical techniques.

True, at the previous events, there were some poor presentations too – which is probably inevitable given the risky cutting-edge nature of the topics being covered.  But the better presentations far outweighed the worse ones.

And as well as the presentations, I greatly enjoyed the networking with the unusual mix of attendees – people who had taken the time to explore many of the fascinating hinterlands of modern technology trends.  If someone is open-minded enough to give serious thought to the ideas listed above, they’re often open-minded enough to entertain lots of other unconventional ideas too.  I frequently found myself in disagreement with these attendees, but the debate was deeply refreshing.

Take a look at the list of confirmed speakers so far: which of these people would you most like to bounce ideas off?

The summit registration page is now open.  As I type these words, that page states that the cost of tickets is going to increase after 31 July.  That’s an argument for registering sooner rather than later.

To provide more information, here’s a copy of the press release for the event:

Singularity Summit 2011 in New York City to Explore Watson Victory in Jeopardy

New York, NY – This October 15-16th in New York City, a TED-style conference gathering innovators from science, industry, and the public will discuss IBM’s ‘Watson’ computer and other exciting developments in emerging technologies. Keynote speakers at Singularity Summit 2011 include Jeopardy! champion Ken Jennings and famed futurist and inventor Ray Kurzweil. After losing to an IBM computer in Jeopardy!, Jennings wrote, “Just as factory jobs were eliminated in the 20th century by new assembly-line robots, Brad and I were the first knowledge-industry workers put out of work by the new generation of ‘thinking’ machines. ‘Quiz show contestant’ may be the first job made redundant by Watson, but I’m sure it won’t be the last.”

In February, Watson defeated two human champions in Jeopardy!, the game show famous for its mind-bending trivia questions. Surprising millions of TV viewers, Watson took down champions Ken Jennings and Brad Rutter for the $1 million first prize. Facing defeat on the final show, competitor Ken Jennings jokingly wrote in parentheses on his last answer: “I for one welcome our new computer overlords.” Besides Watson, the Singularity Summit 2011 will feature speakers on robotics, nanotechnology, biotechnology, futurism, and other cutting-edge technologies, and is the only conference to focus on the technological Singularity.

Responding to Watson’s victory, leading computer scientist Ray Kurzweil said, “Watson is a stunning example of the growing ability of computers to successfully invade this supposedly unique attribute of human intelligence.” In Kurzweil’s view, the combination of language understanding and pattern recognition that Watson displays would make its descendants “far superior to a human”. Kurzweil is known for predicting computers whose conversations will be indistinguishable from people by 2029.

Beyond artificial intelligence, the Singularity Summit will also focus on high-tech and where it is going. Economist Tyler Cowen will examine the economic impacts of emerging technologies. Cowen argued in his recent book The Great Stagnation that modern society is on a technological plateau where “a lot of our major innovations are springing up in sectors where a lot of work is done by machines, not by human beings.” Tech entrepreneur and investor Peter Thiel, who sits on the board of directors of Facebook, will share his thoughts on innovation and jumpstarting the economy.

Other speakers include MIT cosmologist Max Tegmark, Allen Brain Institute chief scientist Christof Koch, co-founder of Skype Jaan Tallinn, robotics professors James McLurkin and Robin Murphy, Bionic Builders host Casey Pieretti, the MIT Media Lab’s Riley Crane, MIT polymath Alexander Wissner-Gross, filmmaker and television personality Jason Silva, and Singularity Institute artificial intelligence researcher Eliezer Yudkowsky.

28 December 2009

Ten emerging technology trends to watch in the 2010’s

Filed under: AGI, nanotechnology, vision — David Wood @ 12:38 pm

On his “2020 science” blog, Andrew Maynard of the Woodrow Wilson International Center for Scholars has published an excellent article, “Ten emerging technology trends to watch over the next decade”, that’s well worth reading.

To whet appetites, here’s his list of the ten emerging technologies:

  1. Geoengineering
  2. Smart grids
  3. Radical materials
  4. Synthetic biology
  5. Personal genomics
  6. Bio-interfaces
  7. Data interfaces
  8. Solar power
  9. Nootropics
  10. Cosmeceuticals

For the details, head over to the original article.

I see Andrew’s article as a more thorough listing of what I tried to cover in my own recent article, Predictions for the decade ahead, where I wrote:

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

Neither the word “nanotechnology” nor the word “AI” appears in Andrew’s list.  Here’s what he has to say about nanotechnology:

Nanotech has been a dominant emerging technology over the past ten years.  But in many ways, it’s a fake.  Advances in the science of understanding and manipulating matter at the nanoscale are indisputable, as are the early technology outcomes of this science.  But nanotechnology is really just a convenient shorthand for a whole raft of emerging technologies that span semiconductors to sunscreens, and often share nothing more than an engineered structure that is somewhere between 1 – 100 nanometers in scale.  So rather than focus on nanotech, I decided to look at specific technologies which I think will make a significant impact over the next decade.  Perhaps not surprisingly though, many of them depend in some way on working with matter at nanometer scales.

I think we are both right 🙂

Regarding AI, Andrew’s comments under the heading “Data interfaces” cover some of what I had in mind:

The amount of information available through the internet has exploded over the past decade.  Advances in data storage, transmission and processing have transformed the internet from a geek’s paradise to a supporting pillar of 21st century society.  But while the last ten years have been about access to information, I suspect that the next ten will be dominated by how to make sense of it all.  Without the means to find what we want in this vast sea of information, we are quite literally drowning in data.  And useful as search engines like Google are, they still struggle to separate the meaningful from the meaningless.  As a result, my sense is that over the next decade we will see some significant changes in how we interact with the internet.  We’re already seeing the beginnings of this in websites like Wolfram Alpha that “computes” answers to queries rather than simply returning search hits,  or Microsoft’s Bing, which helps take some of the guesswork out of searches.  Then we have ideas like The Sixth Sense project at the MIT Media Lab, which uses an interactive interface to tap into context-relevant web information.  As devices like phones, cameras, projectors, TV’s, computers, cars, shopping trolleys, you name it, become increasingly integrated and connected, be prepared to see rapid and radical changes in how we interface with and make sense of the web.

It looks like there’s lots of other useful material on the same blog.  I particularly like its subtitle “Providing a clear perspective on developing science and technology responsibly”.

Hat tip to @vangeest for the pointer!

24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forwards with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than was previously widely thought to be the case;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembly, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on existing means of oil production will diminish, being replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms that have been crafted to address human, social, and environmental need;
  • Medicine will provide more and more new forms of treatment, that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or will be superseded and surpassed by newer, more informal, more robust and effective institutions, that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previous sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

22 November 2009

Timescales for Human Body Version 2.0

Filed under: aging, Kurzweil, nanotechnology — David Wood @ 7:21 pm

In the coming decades, a radical upgrading of our body’s physical and mental systems, already underway, will use nanobots to augment and ultimately replace our organs. We already know how to prevent most degenerative disease through nutrition and supplementation; this will be a bridge to the emerging biotechnology revolution, which in turn will be a bridge to the nanotechnology revolution. By 2030, reverse-engineering of the human brain will have been completed and nonbiological intelligence will merge with our biological brains.

The paragraph above is the abstract for the chapter by Ray Kurzweil in the book “The Scientific Conquest of Death”.  In that chapter, Ray sets out a vision for a route to indefinite human lifespans.

Here are a few highlights from the essay:

It’s All About Nanobots

In a famous scene from the movie, The Graduate, Benjamin’s mentor gives him career advice in a single word: “plastics.”  Today, that word might be “software,” or “biotechnology,” but in another couple of decades, the word is likely to be “nanobots.”  Nanobots—blood-cell-sized robots—will provide the means to radically redesign our digestive systems, and, incidentally, just about everything else.

In an intermediate phase, nanobots in the digestive tract and bloodstream will intelligently extract the precise nutrients we need, call for needed additional nutrients and supplements through our personal wireless local area network, and send the rest of the food we eat on its way to be passed through for elimination.

If this seems futuristic, keep in mind that intelligent machines are already making their way into our blood stream.  There are dozens of projects underway to create blood-stream-based “biological microelectromechanical systems” (bioMEMS) with a wide range of diagnostic and therapeutic applications.  BioMEMS devices are being designed to intelligently scout out pathogens and deliver medications in very precise ways…

A key question in designing this technology will be the means by which these nanobots make their way in and out of the body.  As I mentioned above, the technologies we have today, such as intravenous catheters, leave much to be desired.  A significant benefit of nanobot technology is that unlike mere drugs and nutritional supplements, nanobots have a measure of intelligence.  They can keep track of their own inventories, and intelligently slip in and out of our bodies in clever ways.  One scenario is that we would wear a special “nutrient garment” such as a belt or undershirt.  This garment would be loaded with nutrient bearing nanobots, which would make their way in and out of our bodies through the skin or other body cavities.

At this stage of technological development, we will be able to eat whatever we want, whatever gives us pleasure and gastronomic fulfillment, and thereby unreservedly explore the culinary arts for their tastes, textures, and aromas.  At the same time, we will provide an optimal flow of nutrients to our bloodstream, using a completely separate process.  One possibility would be that all the food we eat would pass through a digestive tract that is now disconnected from any possible absorption into the bloodstream.

Elimination

This would place a burden on our colon and bowel functions, so a more refined approach will dispense with the function of elimination.  We will be able to accomplish this using special elimination nanobots that act like tiny garbage compactors.  As the nutrient nanobots make their way from the nutrient garment into our bodies, the elimination nanobots will go the other way.  Periodically, we would replace the nutrition garment for a fresh one.  One might comment that we do obtain some pleasure from the elimination function, but I suspect that most people would be happy to do without it.

Ultimately we won’t need to bother with special garments or explicit nutritional resources.  Just as computation will eventually be ubiquitous and available everywhere, so too will basic metabolic nanobot resources be embedded everywhere in our environment.  In addition, an important aspect of this system will be maintaining ample reserves of all needed resources inside the body.  Our version 1.0 bodies do this to only a very limited extent, for example, storing a few minutes of oxygen in our blood, and a few days of caloric energy in glycogen and other reserves.  Version 2.0 will provide substantially greater reserves, enabling us to be separated from metabolic resources for greatly extended periods of time.

Once perfected, we will no longer need version 1.0 of our digestive system at all.  I pointed out above that our adoption of these technologies will be cautious and incremental, so we will not dispense with the old-fashioned digestive process when these technologies are first introduced.  Most of us will wait for digestive system version 2.1 or even 2.2 before being willing to dispense with version 1.0.  After all, people didn’t throw away their typewriters when the first generation of word processors was introduced.  People held onto their vinyl record collections for many years after CDs came out (I still have mine).  People are still holding onto their film cameras, although the tide is rapidly turning in favor of digital cameras.

However, these new technologies do ultimately dominate, and few people today still own a typewriter.  The same phenomenon will happen with our reengineered bodies.  Once we’ve worked out the inevitable complications that will arise with a radically reengineered gastrointestinal system, we will begin to rely on it more and more.

Programmable Blood

As we reverse-engineer (learn the principles of operation of) our various bodily systems, we will be in a position to engineer new systems that provide dramatic improvements.  One pervasive system that has already been the subject of a comprehensive conceptual redesign is our blood…

I’ve personally watched (through a microscope) my own white blood cells surround and devour a pathogen, and I was struck with the remarkable sluggishness of this natural process.  Although replacing our blood with billions of nanorobotic devices will require a lengthy process of development, refinement, and regulatory approval, we already have the conceptual knowledge to engineer substantial improvements over the remarkable but very inefficient methods used in our biological bodies…

Have a Heart, or Not

The next organ on my hit list is the heart.  It’s a remarkable machine, but it has a number of severe problems.  It is subject to a myriad of failure modes, and represents a fundamental weakness in our potential longevity.  The heart usually breaks down long before the rest of the body, and often very prematurely.

Although artificial hearts are beginning to work, a more effective approach will be to get rid of the heart altogether.  Designs include nanorobotic blood cell replacements that provide their own mobility.  If the blood system moves with its own movement, the engineering issues of the extreme pressures required for centralized pumping can be eliminated.  As we perfect the means of transferring nanobots to and from the blood supply, we can also continuously replace the nanobots comprising our blood supply…

So What’s Left?

Let’s consider where we are.  We’ve eliminated the heart, lungs, red and white blood cells, platelets, pancreas, thyroid and all the hormone-producing organs, kidneys, bladder, liver, lower esophagus, stomach, small intestines, large intestines, and bowel.  What we have left at this point is the skeleton, skin, sex organs, mouth and upper esophagus, and brain…

Redesigning the Human Brain

The process of reverse engineering and redesign will also encompass the most important system in our bodies: the brain.  The brain is at least as complex as all the other organs put together, with approximately half of our genetic code devoted to its design.  It is a misconception to regard the brain as a single organ.  It is actually an intricate collection of information-processing organs, interconnected in an elaborate hierarchy, as is the accident of our evolutionary history.

The process of understanding the principles of operation of the human brain is already well under way.  The underlying technologies of brain scanning and neuron modeling are scaling up exponentially, as is our overall knowledge of human brain function.  We already have detailed mathematical models of a couple dozen of the several hundred regions that comprise the human brain.

The age of neural implants is also well under way.  We have brain implants based on “neuromorphic” modeling (i.e., reverse-engineering of the human brain and nervous system) for a rapidly growing list of brain regions.  A friend of mine who became deaf while an adult can now engage in telephone conversations again because of his cochlear implant, a device that interfaces directly with the auditory nervous system.  He plans to replace it with a new model with a thousand levels of frequency discrimination, which will enable him to hear music once again.  He laments that he has had the same melodies playing in his head for the past 15 years and is looking forward to hearing some new tunes.  A future generation of cochlear implants now on the drawing board will provide levels of frequency discrimination that go significantly beyond that of “normal” hearing…

And the essay continues.  It’s well worth reading in its entirety.  A short web search finds a slightly longer version of the same essay online, on Kurzweil’s own website, along with a conceptual illustration by media artist and philosopher Natasha Vita-More.

Evaluating the vision: the questions

Three main questions arise in response to this vision of “Human Body Version 2.0”:

  1. Is the vision technologically feasible?
  2. Is the vision morally attractive?
  3. Within what timescales might the vision become feasible?

Progress: encouraging, but not rocket-paced

A recent article in the New Scientist, “Medibots: The world’s smallest surgeons”, takes up the theme of nanobots for medical use, and reports on some specific progress:

It was the 1970s that saw the arrival of minimally invasive surgery – or keyhole surgery as it is also known. Instead of cutting open the body with large incisions, surgical tools are inserted through holes as small as 1 centimetre in diameter and controlled with external handles. Operations from stomach bypass to gall bladder removal are now done this way, reducing blood loss, pain and recovery time.

Combining keyhole surgery with the da Vinci system means the surgeon no longer handles the instruments directly, but via a computer console. This allows greater precision, as large hand gestures can be scaled down to small instrument movements, and any hand tremor is eliminated…

There are several ways that such robotic surgery may be further enhanced. Various articulated, snake-like tools are being developed to access hard-to-reach areas. One such device, the “i-Snake”, is controlled by a vision-tracking device worn over the surgeon’s eyes…

With further advances in miniaturisation, the opportunities grow for getting medical devices inside the body in novel ways. One miniature device that is already tried and tested is a camera in a capsule small enough to be swallowed…

The 20-millimetre-long HeartLander has front and rear foot-pads with suckers on the bottom, which allow it to inch along like a caterpillar. The surgeon watches the device with X-ray video or a magnetic tracker and controls it with a joystick. Alternatively, the device can navigate its own path to a spot chosen by the surgeon…

While the robot could in theory be used in other parts of the body, in its current incarnation it has to be introduced through a keyhole incision thanks to its size and because it trails wires to the external control box. Not so for smaller robots under wireless control.

One such device in development is 5 millimetres long and just 1 millimetre in diameter, with 16 vibrating legs. Early versions of the “ViRob” had on-board power, but the developers decided that made it too bulky. Now it is powered externally, by a nearby electromagnet whose field fluctuates about 100 times a second, causing the legs to flick back and forth. The legs on the left and right sides respond best to different frequencies, so the robot can be steered by adjusting the frequency…

While the ViRob can crawl through tubes or over surfaces, it cannot swim. For that, the Israeli team are designing another device, called SwiMicRob, which is slightly larger than ViRob at 10 millimetres long and 3 millimetres in diameter. Powered by an on-board motor, the device has two tails that twirl like bacteria’s flagella. SwiMicRob may one day be used inside fluid-filled spaces such as those within the spine, although it is at an earlier stage of development than ViRob.

Another group has managed to shrink a medibot significantly further – down to 0.9 millimetres by 0.3 millimetres – by stripping out all propulsion and steering mechanisms. It is pulled around by electromagnets outside the body. The device itself is a metal shell shaped like a finned American football and it has a spike on the end…

The Swiss team is also among several groups who are trying to develop medibots at a vastly smaller scale, just nanometres in size, but these are at a much earlier development stage. Shrinking to this scale brings a host of new challenges, and it is likely to be some time before these kinds of devices reach the clinic.

Brad Nelson, a roboticist at the Swiss Federal Institute of Technology (ETH) in Zurich, hopes that if millimetre-sized devices such as his ophthalmic robot prove their worth, they will attract more funding to kick-start nanometre-scale research. “If we can show small devices that do something useful, hopefully that will convince people that it’s not just science fiction.”
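
As an aside, the two benefits that the quoted article attributes to console-based control, motion scaling and tremor elimination, are easy to illustrate.  Here is a minimal sketch (my own illustration, in Python; the scale factor and the smoothing constant are arbitrary assumptions, not parameters of the da Vinci system) in which the surgeon’s hand motion is scaled down and passed through a simple low-pass filter to suppress high-frequency tremor:

    import math

    SCALE = 0.2   # assumed motion scaling: 10 mm of hand travel becomes 2 mm at the instrument tip
    ALPHA = 0.1   # assumed low-pass smoothing constant (0 < ALPHA <= 1); smaller means stronger filtering

    def instrument_positions(hand_positions):
        """Map raw hand positions (mm) to instrument positions:
        scale the motion down and smooth away high-frequency tremor."""
        smoothed = hand_positions[0]
        out = []
        for p in hand_positions:
            smoothed = ALPHA * p + (1 - ALPHA) * smoothed   # exponential moving average
            out.append(SCALE * smoothed)
        return out

    # A deliberate 20 mm gesture over two seconds, with a 2 mm tremor at 10 Hz, sampled at 100 Hz:
    t = [i / 100.0 for i in range(200)]
    hand = [10.0 * ti + 2.0 * math.sin(2 * math.pi * 10 * ti) for ti in t]
    print(instrument_positions(hand)[-1])   # ends near 4 mm: the gesture is scaled down, the tremor largely removed

In a real tele-surgical system the filtering would of course be far more sophisticated, but the principle is the same: the console sits between hand and instrument, and can transform the motion on the way through.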

In summary: nanoscale medibots appear plausible, but there’s still a large amount of research and development required.
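
Before moving on, the steering principle described for the ViRob deserves a second look, because it shows how a device with no on-board electronics can still be directed from outside the body.  The toy model below is deliberately crude (the resonant frequencies, the shape of the resonance curve, and the gains are all my own assumptions, not the developers’ figures): the left and right leg sets are given different resonant frequencies, each responds to the external field in proportion to how close the drive frequency is to its own resonance, and the imbalance between the two responses turns the robot.

    import math

    F_LEFT, F_RIGHT = 90.0, 110.0   # assumed resonant frequencies (Hz) of the left and right leg sets
    WIDTH = 15.0                    # assumed width of each resonance peak (Hz)

    def response(drive_hz, resonant_hz):
        """Toy resonance curve: the legs respond most strongly when the drive
        frequency matches their resonant frequency, and less on either side."""
        return 1.0 / (1.0 + ((drive_hz - resonant_hz) / WIDTH) ** 2)

    def step(x, y, heading, drive_hz, dt=0.1):
        """Advance the robot: forward speed follows the combined leg response,
        turning rate follows the left/right imbalance."""
        left = response(drive_hz, F_LEFT)
        right = response(drive_hz, F_RIGHT)
        speed = 0.5 * (left + right)    # arbitrary units of distance per second
        turn = 2.0 * (right - left)     # radians per second, arbitrary gain
        heading += turn * dt
        return x + speed * math.cos(heading) * dt, y + speed * math.sin(heading) * dt, heading

    # Drive at 100 Hz (midway between the two resonances) to go straight,
    # then at 108 Hz so that the right legs respond more strongly and the path curves:
    x = y = heading = 0.0
    for _ in range(50):
        x, y, heading = step(x, y, heading, drive_hz=100.0)
    for _ in range(50):
        x, y, heading = step(x, y, heading, drive_hz=108.0)
    print(round(x, 2), round(y, 2), round(heading, 2))

Whether the real ViRob behaves anything like this in detail, I don’t know; the point of the sketch is simply that frequency selection gives the operator a steering channel without adding any mass or complexity to the robot itself.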

Kurzweil’s prediction on timescales

The book “The Scientific Conquest of Death”, containing Kurzweil’s essay, was published in 2004.  The online version is dated 2003.  In 2003, 2010 – the end of the decade – presumably looked a long way off.  In the essay, Kurzweil makes some predictions about the speed of progress towards Human Body Version 2.0:

By the end of this decade, computing will disappear as a separate technology that we need to carry with us.  We’ll routinely have high-resolution images encompassing the entire visual field written directly to our retinas from our eyeglasses and contact lenses (the Department of Defense is already using technology along these lines from Microvision, a company based in Bothell, Washington).  We’ll have very-high-speed wireless connection to the Internet at all times.  The electronics for all of this will be embedded in our clothing.  Circa 2010, these very personal computers will enable us to meet with each other in full-immersion, visual-auditory, virtual-reality environments as well as augment our vision with location- and time-specific information at all times.

Progress with miniaturisation of computers – and the adoption of smartphones – has been impressive since 2003.  However, it’s now clear that some of Kurzweil’s predictions were over-optimistic.  If his predictions for 2010 were over-optimistic, what should we conclude about his predictions for 2030?

The conflicting pace of technological progress

My own view of predictions is that they are far from “black and white”.  I’ve made my own share of predictions over the years, about the rate of progress with smartphone technologies.  I’ve also reflected on the fact that it’s difficult to draw conclusions about the rate of change.

For example, from my “Insight” essay of November 2006, “The conflicting pace of mobile technology”:

What’s the rate of improvement of mobile phones?  Disconcertingly, the answer is both “surprisingly fast” and “surprisingly slow”…

A good starting point is the comment made by Monitor’s Bhaskar Chakravorti in his book “The slow pace of fast change”, when he playfully dubbed a certain phenomenon “Demi Moore’s Law”.  The phenomenon is that technology’s impact in an inter-connected marketplace often proceeds at only half the pace predicted by Moore’s Law.  The reasons for this slower-than-expected impact are well worth pondering:

  • New applications and services in a networked marketplace depend on simultaneous changes being coordinated at several different points in the value chain
  • Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make much less sense when viewed individually.

Sometimes this is called “the prisoner’s dilemma”.  It’s also known as “the chicken and egg problem”.

The most interesting (and the most valuable) smartphone services will require widespread joint action within the mobile industry, including maintaining openness to new ideas, new methods, and new companies.  It also requires a spirit of “cooperate before competing”.  If adjacent players in the still-formative smartphone value chain focus on fighting each other for dominance in our current small pie, it will prevent the stage-by-stage emergence of killer new services that will make the pie much larger for everyone’s benefit.

Thankfully, although the network effects of a complex marketplace can act to slow down the emergence of new innovations while that market is still being formed, they can have the opposite effect once all the pieces of the smartphone open virtuous cycle have learned to collaborate with maximum effectiveness.  When that happens, the pace of mobile change can even exceed that predicted by Moore’s Law…
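
To put a rough number on that “half the pace” claim, here is a back-of-the-envelope comparison (my own illustrative arithmetic, not a calculation from Chakravorti’s book), taking the common rendering of Moore’s Law as a doubling every 18 months:

    # Growth over one decade: Moore's Law versus "Demi Moore's Law" (half the pace)
    months = 120
    moore = 2 ** (months / 18)        # doubling every 18 months: roughly 100-fold in ten years
    demi_moore = 2 ** (months / 36)   # doubling every 36 months: roughly 10-fold in ten years
    print(round(moore), round(demi_moore))

Over a single decade, the gap between the two curves is itself around a factor of ten, which is why the coordination problems described above matter so much.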

And from another essay in the same series, “A celebration of incremental improvement”, from February 2006:

We all know that it’s a perilous task to predict the future of technology.  The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars.  These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary.  Indeed, history is littered with curious and amusing examples of flawed predictions of the future.  You may well wonder, what’s different about smartphones, and about all the predictions made about them at 3GSM?

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects.  Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on.  Technology is not enough.  Especially for changes that are complex and demanding, no fewer than six other criteria should be satisfied as well:

  • The technological development has to satisfy a strong human need
  • The development has to be possible at a sufficiently attractive price to individual end users
  • The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle
  • There must be a clear evolutionary path whereby the eventual version of the technology can be attained through a series of incremental steps that are, individually, easier to achieve
  • When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be both open (to accept new ideas) and commercially attractive (to encourage the generation of new ideas, and, even more important, to encourage companies to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation)…

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications.  For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son.  The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century.  The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world!  As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As I said, the pace of technological development is far from being black-and-white.  Sometimes it proceeds slower than you expect, and at other times, it can proceed much quicker.

The missing ingredient

With the advantage of even more hindsight, there’s one more element that deserves to be singled out, since it frequently makes the difference between new products arriving sooner and arriving later: the degree of practical focus and effective priority placed by the relevant ecosystem on creating these products.  For medibots and other lifespan-enhancing technologies to move from science fiction to science fact will probably require changes in both public opinion and public action.

It’s All About Nanobots

In a famous scene from the movie, The Graduate, Benjamin’s mentor gives him career advice in a single word: “plastics.”  Today, that word might be “software,” or “biotechnology,” but in another couple of decades, the word is likely to be “nanobots.”  Nanobots—blood-cell-sized robots—will provide the means to radically redesign our digestive systems, and, incidentally, just about everything else.

In an intermediate phase, nanobots in the digestive tract and bloodstream will intelligently extract the precise nutrients we need, call for needed additional nutrients and supplements through our personal wireless local area network, and send the rest of the food we eat on its way to be passed through for elimination.
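
Taken purely as a thought experiment, the “intermediate phase” described in that last excerpt amounts to a simple control loop: compare measured nutrient levels against targets, absorb what is needed, request anything that is missing over the personal-area network, and divert the surplus towards elimination.  A minimal sketch follows; everything in it, from the nutrient names to the thresholds, is an invented illustration rather than a real protocol:

    # Toy decision policy for the "intermediate phase" digestive nanobots described above
    TARGETS = {"glucose": 100, "iron": 60, "vitamin_b12": 80}   # invented target levels, arbitrary units

    def plan(measured):
        """For each nutrient, decide whether to absorb it from the gut, request a
        top-up via the personal wireless network, or divert the surplus to elimination."""
        actions = {}
        for nutrient, target in TARGETS.items():
            level = measured.get(nutrient, 0)
            if level < target:
                actions[nutrient] = ("absorb", "request_top_up_via_wlan")
            elif level > 1.2 * target:
                actions[nutrient] = ("divert_surplus_to_elimination",)
            else:
                actions[nutrient] = ("maintain",)
        return actions

    print(plan({"glucose": 140, "iron": 30, "vitamin_b12": 85}))

The hard engineering, of course, would lie in the sensing, the actuation and the safety case, rather than in decision logic of this kind.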

3 July 2008

Nanoscience and the mobile device: hopes and fears

Filed under: Morph, nanotechnology, Nokia, risks — David Wood @ 10:56 am

Nokia’s concept video of a future morphing mobile phone, released back in February, has apparently already been viewed more than two million times on YouTube. It’s a clever piece of work, simultaneously showing an appealing vision of future mobile devices and giving hints about how the underlying technology could work. No wonder it’s been popular.

So what are the next steps? I see that the office of Nokia’s CTO has now released a 5-page white paper that gives more of the background to the technologies involved, which are collectively known as nanotechnology. It’s available on Bob Iannucci’s blog, and it’s a fine read. Here’s a short extract:

After a blustery decade of hopes and fears (the fountain of youth or a tool for terrorists?), nanotechnology has hit its stride. More than 600 companies claim to use nanotechnologies in products currently on the market. A few interesting examples:

  • Stain-repellant textiles. A finely structured surface of embedded “nanowhiskers” keeps liquids from soaking into clothing—in the same way that some plant leaves keep themselves clean.
  • UV-absorbing sunscreen. Using nanoparticulate zinc oxide or titanium dioxide, these products spread easily and are fully transparent—while absorbing ultraviolet rays to prevent sunburn.
  • Purifying water filters. Aluminum oxide nanofibers with unusual bioadhesive properties are formulated into filters that attract and retain electronegative particles such as bacteria and viruses.
  • Windshield defoggers. A transparent lacquer of carbon nanotubes connects to the vehicle’s electrical source to evenly warm up the entire surface of the glass.

Even more interesting to my mind than the explanation of what’s already been accomplished (and what’s likely to be just around the corner) is the set of questions listed in the white paper. (In my view, the quality of someone’s intelligence is often shown more in the quality of the questions they ask than in the quality of the answers they give to questions raised by other people.) Here’s what the white paper says on this score:

As Nokia looks toward the mobile device of 2015 and beyond, our research teams, our partner academic institutions, and other industry innovators are finding answers to the following questions:

  1. What will be the form factors, functionalities, and interaction paradigms preferred by users in the future?
  2. How can the device sense the user’s behavior, physiological state, physical context, and local environment?
  3. How can we integrate energy-efficient sensing, computing, actuation, and communication solutions?
  4. How can we create a library of reliable and durable surface materials that enable a multitude of functions?
  5. How can we develop efficient power solutions that are also lightweight and wearable?
  6. How can we manufacture functional electronics and optics that are transparent and compliant?
  7. How can we move the functionality and intelligence of the device closer to the physical user interface?
  8. As we pursue these questions, how can we assess—and mitigate—possible risks, so that we introduce new technologies in a globally responsible manner?

That’s lots to think about! In response to the final question, one site that has many promising answers is the Center for Responsible Nanotechnology, founded by Mike Treder and Chris Phoenix. As Mike explains in his recent article “Nano Catastrophes”, he’s coming to Oxford later this month to attend a Conference on Global Catastrophic Risks, where he’ll be addressing these issues. I’ll be popping down that weekend to join the conference, and I look forward to reporting back on what I find.

This is a topic that’s likely to run and run. Both the potential upsides and the potential downsides of nanotechnology are enormous. It’s well worth lots more serious research.
