dw2

11 September 2010

No escape from technology

Filed under: books, evolution, Kurzweil, UKH+ — David Wood @ 1:51 am

We can never escape the bio-technological nexus and get “back to nature” – because we have never lived in nature.

That sentence, from the final chapter of Timothy Taylor’s “The Artificial Ape: How technology changed the course of human evolution“, sums up one of my key takeaways from this fine book.

It’s a book that’s not afraid to criticise giants.  Aspects of Charles Darwin’s thinking are examined and found wanting.  Modern-day technology visionary Ray Kurzweil also comes under criticism:

The claims of Ray Kurzweil (that we are approaching a critical moment when biology will be overtaken by artificial constructs) … lack a critical historical – and prehistoric – perspective…

Kurzweil argues that the age of machines is upon us …  and that technology is reaching a point where it can innovate itself, producing ever more complex forms of artificial intelligence.  My argument in this book is that, scary or not, none of this is new.  Not only have we invented technology, from the stone tools to the wheeled wagon, from spectacles to genetic engineering, but that technology, within a framework of some 2 to 3 million years, has, physically and mentally, made us.

Taylor’s book portrays the emergence of humanity as a grand puzzle.  From a narrow evolutionary perspective, humans should not have come into existence.  Our heads are too large. In many cases, they’re too large to pass through the narrow gap in the mother’s pelvis.  Theory suggests, and fossils confirm, that the prehistoric change from walking on all fours to walking upright had the effect of narrowing this gap in the pelvis.  The resulting evolutionary pressures should have favoured smaller brains.  Yet, after several eons, the brain, instead, became larger and larger.

That’s just the start of the paradox.  The human baby is astonishingly vulnerable.  Worse, it makes its mother increasingly vulnerable too.  How could “survival of the fittest” select this ridiculously unfit outcome?

Of course, a larger brain has survival upsides as well as survival downsides.  It enables greater sociality, and the creation of sophisticated tools, including weapons.  But Taylor marshals evidence suggesting that the first use of tools by pre-humans long pre-dated the growth in head size.  This leads to the suggestion that two tools, in particular, played vital roles in enabling the emergence of the larger brain:

  • The invention of slings, made from fur, that enabled mothers to carry their infants hands-free
  • The invention of cooking, with fire, that made it easier for nourishment to be quickly obtained from food.

To briefly elaborate on the second point: walking upright means the digestive gut becomes compressed.  It becomes shorter.  There’s less time for nourishment to be extracted from food.  Moreover, a larger head increases the requirements for fast delivery of nourishment.  Again, from a narrow evolutionary point of view, the emergence of big-brained humans makes little sense.  But cooking comes to the rescue.  Cooking and the child-carrying sling are two examples of technology that enabled the emergence of humans.

The resulting creatures – us – are weaker in a pure biological sense than our evolutionary forebears.  Without our technological aids, we would fare poorly in any contest of survival with other apes.  It is only the combination of technology-plus-nature that makes us stronger.

We’re used to thinking that the development of tools took place in parallel with increasing pre-human intelligence.  Taylor’s argument is that, in a significant way, the former preceded the latter.  Without the technology, the pre-human brain could not expand.

The book uses this kind of thinking to address various other puzzles:

  • For example, the technology-impoverished natives from the tip of South America whom Darwin met on his voyage of discovery on the Beagle had eyesight that was far better than that of even the keenest-eyed sailor on the ship.  Technological progress went hand-in-hand with a weakening of biological power.
  • Taylor considers the case of the aborigines of Tasmania, who were technologically backward compared to those of mainland Australia: they lacked all clothing, and apparently could not make fire for themselves.  The archaeological record indicates that the Tasmanian aborigines actually lost the use of various technologies over the course of several millennia.  Taylor reaches a different conclusion from popular writer Jared Diamond, who seems to take it for granted that this loss of technology made the aborigines weaker.  Taylor suggests that, in many ways, these aborigines became stronger and fitter, in their given environment, as they abandoned their clothing and their fishing tools.

There are many other examples – but I’ll leave it to you to read the book to find out more.  The book also has some fascinating examples of ancient tools.

I think that Taylor’s modifications of Darwin’s ideas are probably right.  What of his modifications of Kurzweil’s ideas?  Is the technological spurt of the present day really “nothing new”?  Well, yes and no.  I believe Kurzweil is correct to point out that the kinds of changes that are likely to be enabled by technology in the relatively near future – perhaps in the lifetime of many people who are already alive – are qualitatively different from anything that has gone before:

  • Technology might extend our lifespans, not just by a percentage, but by orders of magnitude (perhaps indefinitely)
  • Technology might create artificial intelligences that are orders of magnitude more powerful than any intelligence that has existed on this planet so far.

As I’ve already mentioned in my previous blogpost – which I wrote before starting to read Taylor’s book – Timothy Taylor is the guest speaker at the September meeting of the UK chapter of Humanity+.  People who attend will have the chance to hear more details of these provocative theories, and to query them directly with the author.  There will also be an opportunity to purchase signed copies of his book.  I hope to see some of you there!

I’ll give the last words to Dr Taylor:

Technology, especially the baby-carrying sling, allowed us to push back our biological limits, trading in our physical strength for an increasingly retained infantile early helplessness that allowed our brains to expand, forming themselves under increasingly complex artificial conditions…  In terms of brain growth, the high-water mark was passed some 40,000 years ago.  The pressure on that organ has been off ever since we started outsourcing intelligence in the form of external symbolic storage.  That is now so sophisticated through the new world information networking systems that what will emerge in future may no longer be controlled by our own volition…

[Technology] could also destroy our planet.  But there is no back-to-nature solution.  There never has been for the artificial ape.

29 August 2010

Understanding humans better by understanding evolution better

Filed under: collaboration, deception, evolution, RSA, UKH+ — David Wood @ 5:54 am

Many aspects of human life that at first seem weird and hard to explain can make a lot more sense once you see them from the viewpoint of evolution.

It was Richard Dawkins’ book “The Selfish Gene” which first led me to that conclusion, whilst I was still at university.  After “The Selfish Gene”, I read “Sociobiology: the new synthesis“, by E.O. Wilson, which gave other examples.  I realised it was no longer necessary to refer to concepts such as “innate wickedness” or “original sin” to explain why people often did daft things.  Instead, people do things because (in part) of underlying behavioural patterns which tended to make their ancestors more likely to leave successful offspring.

In short, you can deepen your understanding of  humans if you understand evolution.  On the whole, attempts to get humans to change their behaviour will be more likely to succeed if they are grounded in an understanding of the real factors that led humans to tend to behave as they do.

What’s more, you can understand humans better if you understand evolution better.

In a moment, I’ll come to some interesting new ideas about the role played by technology in evolution.  But first, I’ll mention two other ways in which an improved understanding of evolution sheds richer light on the human condition.

1. Evolution often results in sub-optimal solutions

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

I’ve written on this subject before, in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind” – so I won’t say much more about that particular topic right now.  But I can’t resist including a link to a fascinating video in which Richard Dawkins demonstrates the absurdly non-optimal route taken by the laryngeal nerve of the giraffe.  As Dawkins says in the video, this nerve “is a beautiful example of historical legacy, as opposed to design”.  If you haven’t seen this clip before, it’s well worth watching, and thinking about the implications.

2. Evolution can operate at multiple levels

For a full understanding of evolution, you have to realise it can operate at multiple levels:

  • At the level of individual genes
  • At the level of individual organisms
  • At the level of groups of cooperating organisms.

At each level, there are behaviours which exist because they made it more likely for an entity (at that level) to leave descendants.  For example, groups of animals tend to survive as a group, if individuals within that group are willing, from time to time, to sacrifice themselves for the sake of the group.

The notion of group selection is, however, controversial among evolutionary theorists.  Part of the merit of books such as The Selfish Gene is that they showed how altruistic behaviour could be explained, in at least some circumstances, by looking at survival from the standpoint of individual genes.  If individual A sacrifices himself for the sake of individuals B and C within the same group, it may well be that B and C carry many of the same genes as individual A.  This analysis seems to deal with the major theoretical obstacle to the idea of group selection, which is as follows:

  • If individuals A1, A2, A3,… all have an instinct to sacrifice themselves for the sake of their wider group, it may well mean, other things being equal, that this group is initially more resilient than competing groups
  • However, an individual A4 who is individually selfish, within that group, will get the benefit of the success of the group, and the benefit of individual survival
  • So, over time, the group will tend to contain more individuals like the “free-rider” A4, and fewer like A1, A2, and A3
  • Therefore the group will degenerate into selfish behaviour … and this shows that the notion of “group selection” is flawed.

Nevertheless, I’ve been persuaded by writer David Sloan Wilson that the notion of group selection can still apply.  He gives an easy-to-read account of his ideas in his wide-ranging book “Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives“.  In summary:

  • Group selection can apply, provided the group also has mechanisms to reduce free-riding behaviour by individuals
  • For example, people in the group might have strong instincts to condemn and punish people who try to take excess advantage of the generosity of others
  • So long as these mechanisms keep the prevalence of free-riding below a certain threshold, a group can reach a stable situation in which the altruism of the majority continues to benefit the group as a whole.
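The free-rider collapse, and the way punishment can prevent it, can be made concrete with a toy replicator-dynamics sketch.  All the numbers here (the altruist’s cost, the punishment penalty, the generation count) are hypothetical, chosen purely for illustration:

```python
def altruist_fraction(p0=0.9, cost=0.1, penalty=0.2,
                      punish=False, generations=200):
    """Toy model of a single group, where p is the fraction of altruists.

    Altruists pay a personal cost to produce a benefit shared by the
    whole group; free-riders take the shared benefit without paying.
    With punish=True, the group penalises free-riders - mimicking the
    instinct to condemn and punish those who exploit others' generosity.
    """
    p = p0
    for _ in range(generations):
        shared_benefit = p  # the more altruists, the better off everyone is
        fitness_altruist = 1.0 + shared_benefit - cost
        fitness_freerider = 1.0 + shared_benefit - (penalty if punish else 0.0)
        # next generation: each type reproduces in proportion to its fitness
        p = p * fitness_altruist / (
            p * fitness_altruist + (1 - p) * fitness_freerider)
    return p

# Without punishment, free-riders take over; with it, altruism is stable.
print(round(altruist_fraction(punish=False), 3))  # ~0.0
print(round(altruist_fraction(punish=True), 3))   # ~1.0
```

The key design point is simply that punishment makes free-riding cost more than altruism does, reversing which strategy out-reproduces the other.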

(To be clear: this kind of altruism generally extends only to others within the same group.  People who are outside your group won’t benefit from it.  An injunction such as “love your neighbour as yourself” applied in practice only to people within your group – not to people outside it.)

To my mind, this makes sense of a great deal of the mental gymnastics that we can observe: people combine elements of surreptitiously trying to benefit themselves (and their own families) whilst seeking to appear to the group as a whole as “good citizens”.  In turn, we are adept at seeing duplicity and hypocrisy in others.  There’s been a long “arms race” in which brains have been selected that are better at playing both sides of this game.

Incidentally, for another book that takes an entertaining and audacious “big picture” view of evolution and group selection, see the barn-storming “The Lucifer Principle: A Scientific Expedition into the Forces of History” by Howard Bloom.

3. The role of technology in evolution

At first sight, technology has little to do with evolution.  Evolution occurred in bygone times, whilst technology is a modern development – right?

Not true. First, evolution is very much a present-day phenomenon (as well as something that has been at work throughout the whole history of life).  Disease organisms evolve rapidly under the pressure of different regimes of anti-bacterial cocktails.  And there is evidence that biological evolution still occurs for humans.  A 2009 article in Time magazine was entitled “Darwin Lives! Modern Humans Are Still Evolving”.  Here’s a brief extract:

One study, published in PNAS in 2007 and led by John Hawks, an anthropologist at the University of Wisconsin at Madison, found that some 1,800 human gene variations had become widespread in recent generations because of their modern-day evolutionary benefits. Among those genetic changes, discovered by examining more than 3 million DNA variants in 269 individuals: mutations that allow people to digest milk or resist malaria and others that govern brain development.

Second, technology is itself an ancient phenomenon – including creative use of sticks and stones.  Benefits of very early human use of sticks and stones included fire, weapons, and clothing.  What’s more, the advantages of tool use had a strange side-effect on human genetic evolution: as we became technologically stronger, we also became biologically weaker.  The Time magazine article mentioned above goes on to state the following:

According to anthropologist Peter McAllister, author of “Manthropology: the Science of Inadequate Modern Man“, the contemporary male has evolved, at least physically, into “the sorriest cohort of masculine Homo sapiens to ever walk the planet.” Thanks to genetic differences, an average Neanderthal woman, McAllister notes, could have whupped Arnold Schwarzenegger at his muscular peak in an arm-wrestling match. And prehistoric Australian Aborigines, who typically built up great strength in their joints and muscles through childhood and adolescence, could have easily beat Usain Bolt in a 100-m dash.

Timothy Taylor, Reader in Archaeology at the University of Bradford and editor-in-chief of the Journal of World Prehistory, tackles this same topic in his recent book “The Artificial Ape: How Technology Changed the Course of Human Evolution“.

Amazon.com describes this book as follows:

A breakthrough theory that tools and technology are the real drivers of human evolution.

Although humans are one of the great apes, along with chimpanzees, gorillas, and orangutans, we are remarkably different from them. Unlike our cousins who subsist on raw food, spend their days and nights outdoors, and wear a thick coat of hair, humans are entirely dependent on artificial things, such as clothing, shelter, and the use of tools, and would die in nature without them. Yet, despite our status as the weakest ape, we are the masters of this planet. Given these inherent deficits, how did humans come out on top?

In this fascinating new account of our origins, leading archaeologist Timothy Taylor proposes a new way of thinking about human evolution through our relationship with objects. Drawing on the latest fossil evidence, Taylor argues that at each step of our species’ development, humans made choices that caused us to assume greater control of our evolution. Our appropriation of objects allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.

Weaving together lively discussions of major discoveries of human skeletons and artifacts with a reexamination of Darwin’s theory of evolution, Taylor takes us on an exciting and challenging journey that begins to answer the fundamental question about our existence: what makes humans unique, and what does that mean for our future?

In an interview in the New Scientist, Timothy Taylor gives more details of his ideas:

Upright female hominins walking the savannah had a real problem: their babies couldn’t cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female – higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn’t survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don’t last: things made from sinew, wood, leather and grasses…

Once you have slings to carry babies, you have broken a glass ceiling – it doesn’t matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb – they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment…

I’ve ordered Taylor’s book from Amazon and I expect it to be waiting for me at my home in the UK once I return from my current trip in Asia.  I’m also looking forward to hosting a discussion meeting on Saturday 11th Sept under the auspices of Humanity+ UK in London, where Timothy Taylor himself will be the main speaker. People on Facebook can register their interest in this meeting by RSVPing here.  There’s no charge to attend.

Another option to see Timothy Taylor lecture in person – for those able to spare time in the middle of the day on a Thursday (9th Sept) – will be at the RSA.  I expect there will be good discussion at both events, but the session at H+UK is longer (two hours, as opposed to just one at the RSA), and I expect more questions there about matters such as the likely role of technology in radically re-shaping the future development of humans.

Footnote: of course, the fact that evolution guided our ancestors to behave in certain ways is no reason for us to want to continue to behave in these ways.  But understanding the former is, in my view, very useful background knowledge for being able to devise practical measures to change ourselves.

27 August 2010

Reconsidering recruitment

Filed under: Accenture, Psion, recruitment, Symbian — David Wood @ 5:12 am

The team at ITjoblog (‘the blog for IT professionals’) recently asked me to write a guest column for them.  It has just appeared: “Reconsidering recruitment“.

With a few slight edits, here’s what I had to say…

Earlier in my career, I was involved in lots of recruitment.  The software team inside Psion followed a steep headcount trajectory through the process of transforming into Symbian, and continued to grow sharply in subsequent years as many new technology areas were added to the scope of Symbian OS.  As one of the senior software managers in the company throughout this period, I found myself time and again in interviewing and recruitment situations.  I was happy to give significant amounts of my time to these tasks, since I knew what a big impact good (or bad) recruitment can make to organisational dynamics.

In recent weeks, I’ve once again found myself in a situation where considerable headcount growth is expected.  I’m working on a project at Accenture, assisting their Embedded Mobility Services group.  Mobile is increasingly a hot topic, and there’s strong demand for people providing expert consultancy in a variety of mobile development project settings. This experience has led me to review my beliefs about the best way to carry out recruitment in such situations.  Permit me to think aloud…

To start with, I remain a huge fan of graduate recruitment programs.  The best graduates bring fire in their bellies: a “we can transform the world” attitude that doesn’t know what’s meant to be impossible – and often carries it out!  Of course, graduates typically take some time before they can be deployed in the frontline of commercial software development.  But if you plan ahead, and have effective “bootcamp” courses, you’ll have new life in your teams soon enough.  There will be up-and-coming stars ready to step into the shoes left by any unexpected staff departures or transfers.  If you can hire a group of graduates at the same time, so much the better.  They can club together and help each other, sharing and magnifying what they each individually learn from their assigned managers and mentors.  That’s the beauty of the network effect.

That’s just one example of the importance of networks in hiring.  I place a big value on having prior knowledge of someone who is joining your team.  Rather than having to trust your judgement during a brief interviewing process, and whatever you can distill from references, you can rely on actual experience of what someone is like to work with.  This effect becomes more powerful when several of your current workforce can attest to the qualities of a would-be recruit, based on all having worked together at a previous company.  I saw Symbian benefit from this effect via networks of former Nortel employees who all knew each other and who could vouch for each others’ capabilities during the recruitment process.  Symbian also had internal networks of high-calibre former employees of SCO, and of Ericsson, among other companies.  The benefit here isn’t just that you know that someone is a great professional.  It’s that you already know what their particular special strengths are.  (“I recommend that you give this task to Mike.  At our last company, he did a fantastic job of a similar task.”)

Next, I recommend hiring for flexibility, rather than simply trying to fit a current task description.  I like to see evidence of people coping with ambiguity, and delivering good results in more than one kind of setting.  That’s because projects almost always change; likewise for organisational structures.  So while interviewing, I’m not trying to assess if the person I’m interviewing is the world expert in, say, C++ templates.  Instead, I’m looking for evidence that they could turn their hand to mastering whole new skill areas – including areas that we haven’t yet realised will be important to future projects.

Similarly, rather than just looking for rational intelligence skills, I want to see evidence that someone can fit well into teams.  “Soft skills”, such as inter-personal communication and grounded optimism, aren’t just an optional extra, even for roles with intense analytic content.  The best learning and the best performance comes from … networks (to use that word again) – but you can’t build high-functioning networks if your employees lack soft skills.

Finally, high-performing teams that address challenging problems benefit from internal variation.  So don’t just look for near-clones of people who already work for you.  When scanning CVs, keep an eye open for markers of uniqueness and individuality.  At interview, these markers provide good topics to explore – where you can find out something of the underlying character of the candidate.

Inevitably, you’ll sometimes make mistakes with recruitment, despite taking lots of care in the process.  To my mind, that’s OK.  In fact, it’s better to take a few risks, since you can find some excellent new employees in the process.  But you need to have in place a probation period, during which you pay close attention to how your hires are working out.  If a risky candidate turns out disappointing, even after some coaching and support, then you should act fast – for the sake of everyone concerned.

In summary, I see recruitment and induction as a task that deserves high focus from some of the most skilled and perceptive members of your existing workforce.  Skimp on these tasks and your organisation will suffer – sooner or later.  Invest well in these tasks, and you should see the calibre of your workforce steadily grow.

For further discussion, let me admit that rules tend to have limits and exceptions.  You might find it useful to identify limits and counter-examples to the rules of thumb I’ve outlined above!

15 August 2010

Co-existing with Android

Filed under: Android, Nokia — David Wood @ 9:51 pm

For some time, I’ve wanted to learn more about Android.

Not just theory, or second hand knowledge.  I wanted my own, direct, practical knowledge – obtained by using an Android device “in anger” (as the saying goes).

Playing with a device for a few minutes – for example, at a trade show – fails to convey many of the real-world strengths and weaknesses of that device.  But it’s the real-world strengths and weaknesses that I want to experience.

It’s important for me for work reasons: Accenture’s Embedded Mobility Services group is involved in a stream of different Android projects.  (Among other things, I want to be able to install and use various experimental Android apps that some of my colleagues have been writing.)

It’s also important for me for personal productivity reasons.  If an Android phone turns out to be a smarter phone than any I’ve been using so far, I want to know about it – so I can use it more often, and become a smarter person as a result.

But there are sooo many Android devices.  Carphone Warehouse had a large selection to choose between.  For a while, I struggled to decide which one to pick.

In the end, I chose a Nexus One.  That’s because it is the device most likely to be quickly updated to whatever the latest version of Android is.  (Other Android phones include customisation layers from device manufacturers, which seem to need to be re-done – and painstakingly re-tested – whenever there’s a new Android version.  Unsurprisingly, that introduces a delay.)

For help with a Nexus One, I owe a big debt of gratitude to Kenton Price of Little Fluffy Toys Ltd.  I first met Kenton at a recent meeting of the London GTUG (Google Technology Users Group), where we both listened to Google’s Wesley Chun give an upbeat, interesting talk about Google App Engine.  Later that evening, we got talking.  A few days afterwards, Little Fluffy Toys became famous, on account of widespread publicity for their very timely London Cycle Hire Widget.  Kenton & I exchanged a few more emails, and the outcome was that we met in a coffee shop next to the Accenture building in Old Bailey.  Kenton kindly lent me a Nexus One for a few weeks, for me to find out how I get on with it.  Just as important, Kenton quickly showed me a whole raft of fascinating things that the device could do.

But then I got cold feet.  Did I really want to stop using the Nokia E72, which has been my “third brain” for the best part of a year? My fingers have learned all kinds of quick, useful methods for me to get the best out of this device.  (Many of these methods are descendants of usage patterns from even earlier devices in the same general family, including the E71 and the E61i.)  I also heard from many people that the battery life on the Nexus One was poor.  What’s more, during the very first proper phone call I made with this phone, the person at the other end told me several times “you’re breaking up – I can’t hear you”.  (Of course, a sample size of one proves nothing.)

It was the transfer of all my “phonebook contacts” from the E72, to be merged (apparently) with my email contacts on the Nexus One, that gave me even more reason to hesitate.  I wasn’t sure I was ready for that kind of potential restructuring of my personal data.

So I’ve compromised.  I already have two SIMs.  One lives in my E72, and the other usually sits inside my laptop.  Well, I’ve taken the SIM from the laptop and put it into the Nexus One.  For the time being, I’ll keep using the E72 for phone calls and text messages.  And probably for lots more too.  But I’ll use the Nexus One for lots of interesting experiments.  (Like showing friends and family members Google Goggles…).

I expect this usage pattern will change over the weeks ahead.  Let’s see how things evolve!

Earlier this evening, I used my E72 to take the following picture of the Nexus One perched next to my “second brain” – my Psion Series 5mx.  Hmm, that Nexus One battery indicator does look worryingly low.  (Maybe I should turn down the screen brightness…)

Seeing probabilities

Filed under: aging, risks, Ultralase — David Wood @ 12:59 am

I thought of entitling this blogpost “Blinded by technology”.  Or, perhaps, “Almost blinded by technology”.  But that would have been unfair.

It’s now just over five weeks since I had my eyes lasered at the Ultralase clinic in Guildford, Surrey.  For more than 40 years, I had worn spectacles, to correct short sightedness.  My hope with the surgery was that I could dispense with spectacles and all the inconvenience that goes with them.

I had an idea what to expect.  Back in 2005, my wife had a similar operation, also from Ultralase, and has been very happy with the result.  I remember her being pleased with the outcome just a few moments after the operation, when, from the room next to the operating theatre, I could hear her excited voice on opening her eyes.  But my own experience turned out different.

One complicating factor is that I received a treatment called “monovision”, in which the two eyes are given treatments that optimise them for different viewing tasks.  My left eye was optimised for short-distance reading (such as computer screens, books, phone screens).  My right eye was optimised for medium-distance and long-distance.

The rationale for monovision is to address a decline in the power of eyes to change the distance where they’re focussing.  This is a condition called “Presbyopia” – sometimes known as “Aging eye”.  To quote from “The Eye Digest“:

A presbyopic eye loses its innate ability to clearly see all objects that are located at different distances. It can see some objects clearly but not all. In individuals who are less than 40 years of age, the eye can be thought of as an ‘auto-focus’ camera. In an auto-focus camera, all one has to do to get sharp pictures is to point the camera in that direction, the auto-focus mechanism kicks in and you get sharp pictures. After age 40, the presbyopic eye can be thought of as a ‘fixed-focus’ camera. Fixed-focus cameras, the most basic of all cameras, have a nonadjustable lens. In general, a fixed-focus camera can take satisfactory photographs but it may produce a blurred picture if the subject is moving or is less than 6 feet (1.8 meters) away.

The presbyopic eye is also in a more or less ‘fixed-focus’ state. This means that a presbyopic eye will see clearly only at a particular distance. If you correct the presbyopic eye for distance with glasses or contact lenses, then it will clearly see all the distant objects and may read 20/20 on the distance vision eye chart, but there is no way it would be able to clearly read up-close with the distance vision correction. On the other hand if you correct the eye for reading up-close, then you will be able to read clearly, but there is no way you will be able to see distance objects clearly with the same correction. So reading vision is at the cost of distance vision and vice versa.

And as Wikipedia puts it:

Presbyopia is a health condition where the eye exhibits a progressively diminished ability to focus on near objects with age. Presbyopia’s exact mechanisms are not known with certainty; the research evidence most strongly supports a loss of elasticity of the crystalline lens, although changes in the lens’s curvature from continual growth and loss of power of the ciliary muscles (the muscles that bend and straighten the lens) have also been postulated as its cause.

Similar to grey hair and wrinkles, presbyopia is a symptom caused by the natural course of aging. The first symptoms are usually noticed between the ages of 40 and 50. The ability to focus on near objects declines throughout life, from an accommodation of about 20 dioptres (ability to focus at 50 mm away) in a child, to 10 dioptres at age 25 (100 mm), levelling off at 0.5 to 1 dioptre at age 60 (ability to focus down to 1–2 meters only).

The word presbyopia comes from the Greek word presbys (πρέσβυς), meaning “old man” or “elder”, and the Neolatin suffix -opia, meaning “sightedness”.
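The accommodation figures in the excerpt above follow a simple reciprocal rule: for an eye whose distance vision is fully corrected, the nearest focusable distance in metres is one divided by the accommodation in dioptres.  A minimal Python sketch (the function name is mine, purely for illustration):

```python
def near_point_m(accommodation_dioptres: float) -> float:
    """Nearest focusable distance (metres) for an eye with fully corrected
    distance vision: the reciprocal of its accommodation in dioptres."""
    return 1.0 / accommodation_dioptres

# The figures quoted above:
print(near_point_m(20))  # child: 0.05 m (50 mm)
print(near_point_m(10))  # age 25: 0.1 m (100 mm)
print(near_point_m(1))   # age 60: 1.0 m
```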

I can’t deny it: by these measures, I’m aging!  I turned 51 in February.  And I have presbyopia to show for my age.   (Not to mention wrinkles…)

Monovision is one of the options offered to patients with presbyopia.  Not everyone copes well with monovision treatment.  Apparently, some people get headaches, from the two eyes having different preferred focal lengths.  For this reason, Ultralase gave me special spectacles to wear, as an experiment, for six weeks before the intended date of the operation.  These spectacles mimicked the intended outcome of the operation: left eye great for short-distance, right eye great for everything else.  Happily, I had no headache, and was pleased with how these spectacles worked for me.

So I approached the operation itself with high hopes.  And I can report that my left eye has turned out exactly as hoped.  Without glasses, my short-range sight is excellent.

But my right eye has ended up in a less satisfactory state.  Subsequent tests by Ultralase, repeated on several occasions, confirm that my right eye is about -0.75 away from what was intended.  When I look into the middle distance or long distance, without wearing glasses, I see things as much fuzzier than before (when I wore glasses).  To see things more clearly, I have to squint, or stand up and walk closer.  In practical terms, this causes inconvenience when I’m in meetings at work.  I can’t see what’s displayed on screens in conference rooms.  I sometimes struggle to see the prices on the menus behind the counter at coffee shops.  And so on.

But to say that I have literally been “blinded by technology” (by the short blast of a laser) would be putting things much too strongly.  I can get by fine, most of the time.

Nor was I figuratively “blinded by technology” – in the sense of being naively over-optimistic about the outcome of a technical fix to address the symptoms of aging.  The Ultralase surgeon had carefully explained matters to me before the operation.  He even got me to fill in some blank paragraphs in a form, using my own words to confirm that I understood the risks associated with the surgery.  One blank paragraph was headed, “Four risks with the operation”.  Another was headed, “How will I cope, if the treatment doesn’t work as well as I hope”.  It was sobering.

I knew, before the operation, that there was a one-in-six chance that I would need a “top up” operation six months (or so) further down the line.  And that looks like what will happen to me.  The risks were significantly higher in my case than for most patients, because of the monovision treatment, and because my eyesight was starting from such a poor baseline (around -8.0).

Medical treatments frequently involve probabilities.  As with many other difficult decisions in life, it’s important to be able to understand probabilities, and to plan ahead for possible unwanted outcomes.
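As a small illustration, the one-in-six figure quoted above can be turned into concrete numbers (the variable and function names here are mine, not anything Ultralase provided):

```python
p_top_up = 1 / 6  # quoted chance of needing a follow-up operation

# Expected total operations per patient, assuming at most one top-up:
expected_ops = 1 + p_top_up  # ~1.17

def p_any_top_up(n: int, p: float = p_top_up) -> float:
    """Chance that at least one of n independent patients needs a top-up."""
    return 1 - (1 - p) ** n

print(f"{p_any_top_up(4):.0%}")  # in a group of 4 patients: 52%
```

Even a "small" one-in-six risk becomes more likely than not once a handful of similar cases are considered – which is exactly why planning ahead for the unwanted outcome matters.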

It’s still possible that my right eye will continue to improve by itself.  I read of cases where it took several months, after laser eye surgery, for an eye to completely settle down.  That’s why Ultralase require several months of stability in eyesight before doing any follow-up surgery.  My current guess is that I’ll be visiting the surgery again some time around January.  In the meantime, I’m putting up with some haziness in my middle-distance and long-distance vision.

Has this experience changed my attitude towards the wonder-powers of technology (for example, to address the problems of aging)?

Not really.  I already know, viscerally, from my many years in the hi-tech smartphone industry, that technical solutions frequently fail.  A team can have many thoughtful, experienced, super-smart people, developing new technology in a careful way, but still the results can go wrong.  You can take measures to reduce risks, but you can’t make all the risks go away.  And, in many cases, you shouldn’t try to: you’d miss out on the benefits that arise when risky projects turn out well.  But you should be aware of the risks beforehand, and try to quantify them.

For me, a one in six chance of needing the inconvenience of a second operation was a risk well worth taking.  And I still see things that way.

31 May 2010

Walking the AppCircus jury high-wire

Filed under: applications — David Wood @ 10:00 am

Congratulations to Rudy De Waele for pulling together an impressive international jury to review entries for the AppCircus taking place at Mobile2.0 EU in Barcelona in the middle of June.

The AppCircus is described as follows:

  • A unique global traveling showcase of innovative apps
  • Your chance to get a 3 minute slot and present your app in front of peer developers and industry experts during the Mobile 2.0 Europe conference. The conference audience will nominate one app for the prestigious Mobile Premier Awards 2011 during the Mobile World Congress
  • Participation is FREE and open to start ups and individual developers, so apply now and walk the high-wire!

If you’re interested in submitting a mobile app, note that the deadline for submission is June 7th, midnight CET.

I’m honoured to be one of the jury members whose task will be to select the finalists out of all the submissions.  You can see the complete list of jury members here.

From past experience, I know that selecting finalists for mobile app contests is a tricky task, so I’m not entirely looking forward to it.  It’s a high-wire balancing act for the jury members, as well as for the finalists.

30 May 2010

I’m joining Accenture Embedded Mobility Services

Filed under: Accenture — David Wood @ 10:42 pm

I’m very happy to report that, from 1st June, I’ll be joining Accenture Embedded Mobility Services (AEMS).

I don’t intend to say much about my work for Accenture in this blog.  However, I’ll make a brief exception on this occasion.

Over the last few months, I’ve had the opportunity to look at what Accenture is doing in the Embedded and Mobility space.  I’ve been greatly impressed by what I’ve seen of the vision, the accomplishment, and the potential.  I’m looking forward to helping to shape the future evolution of this unit.  As smart connected devices become ever more pervasive, with numerous uses across all sorts of different sectors of life, the need for rich expertise is going to grow and grow – to improve the chances of success for projects that create, utilise, and enhance these devices.  This is an expertise that AEMS already possesses in large measure.

I’m particularly looking forward to renewing working relations with many former colleagues from Symbian Ltd and even Psion PLC days – colleagues who formed part of the Symbian Technical Consulting / Professional Services group which was acquired by Accenture from Nokia last year, constituting the nucleus of AEMS.  These are people I hold in high regard – people with deep skills and integrity.  I’m also looking forward to getting to know lots of new colleagues.

I’ll be based in London but expect I’ll often be travelling.  I’m going to be busy on Accenture projects, but I plan to retain an involvement in some external projects too – projects such as the book I’m writing and various speaking engagements.

19 May 2010

Chapter finished: A journey with technology

Five more days have passed, and I’ve completed another chapter draft (see snapshot below) of my proposed new book.

This takes me up to 30% of what I hope to write:

  • I’ve drafted three out of ten planned chapters.
  • The wordcount has reached 15,000, out of a planned total of 50,000.

After this, I plan to dig more deeply into specific technology areas.  I’ll be moving further out of my comfort zone.  First will be “Health”.  Fortuitously, I spent today at an openMIC meeting in Bath, entitled “i-Med: Serious apps for mobile healthcare”.  That provided me with some useful revision!

========

3. A journey with technology

<Snapshot of material whose master copy is kept here>

<< Previous chapter <<

Here’s the key question I want to start answering in this chapter: how quickly can technology progress in the next few decades?

This is far from being an academic question. At heart, I want to know whether it’s feasible for that progress to be quick enough to provide technological solutions to the calamitous issues and huge opportunities described in the first chapter of this book. The progress must be quick enough, not only for core technological research, but also for productisation of that technology into the hands of billions of consumers worldwide.

For most of this book, I’ll be writing about technologies from an external perspective. I have limited direct experience with, for example, the healthcare industry and the energy industry. What I have to say about these topics will be offered, I hope, as the view of an intelligent outside observer. But in this chapter, I’m able to adopt an internal perspective, since the primary subject matter is the industry where I worked for more than twenty years: the smartphone industry.

In June 1988, I started work in London at Psion PLC, the UK-based manufacturer of electronic organisers. I joined a small team working on the software for a new generation of mobile computers. In the years that followed, I spent countless long days, long nights and (often) long weekends architecting, planning, writing, integrating, debugging and testing Psion’s software platforms. In due course, Psion’s software would power more than a million PDAs in the “Series 3” family of devices. However, the term “PDA” was unknown in 1988; likewise for phrases like “smartphone”, “palmtop computer”, and “mobile communicator”. The acronym “PDA”, meaning “personal digital assistant”, was coined by Apple in 1992 in connection with their ambitious but flawed “Newton” project – long before anyone conceived of the name “iPhone”.

I first became familiar with the term “smartphone” in 1996, during early discussions with companies interested in using Psion’s “EPOC32” software system in non-PDA devices. After a faltering start, these discussions gathered pace. In June 1998, ten years after I had joined Psion, a group of Psion senior managers took part in the announcement of the formation of a new entity, Symbian Ltd, which had financial backing from the three main mobile phone manufacturers of the era – Ericsson, Motorola, and Nokia. Symbian would focus on the software needs of smartphones. The initial software, along with 150 employees led by a 5 man executive team, was contributed by Psion. In the years that followed, I held Symbian executive responsibility, at different times, for Technical Consulting, Partnering, and Research. In due course, sales of devices based on Symbian OS exceeded 250 million devices.

In June 2008 – ten more years later, to the day – another sweeping announcement was made. The source code of Symbian OS, along with that of the S60 UI framework and applications from Nokia, would become open source, and would be overseen by a new independent entity, the Symbian Foundation.

My views on the possibilities for radical improvements in technology as a whole are inevitably coloured by my helter-skelter experiences with Psion and Symbian. During these 20+ years of intense projects following close on each other’s heels, I saw at first hand, not only many issues with developing and productising technology, but also many issues in forecasting the development and productisation of technology.

For example, the initial June 1998 business plans for Symbian are noteworthy both for what we got right, and for what we got wrong.

3.1 Successes and shortcomings in predicting the future of smartphones

In June 1998, along with my colleagues on the founding team at Symbian, I strove to foresee how the market for smartphones would unfold in the years ahead. This forecast was important, as it would:

  • Guide our own investment decisions
  • Influence the investment decisions of our partner companies
  • Set the context for decisions by potential employees whether or not to join Symbian (and whether or not to remain with Symbian, once they had joined).

Many parts of our vision turned out correct:

  • Interest grew strongly, both in computers with increased mobility, and in mobile phones with increased computing capability.
  • Sales of Symbian-powered mobile devices would, by the end of the first decade of the next century, be measured in 100s of millions.
  • Our phrase, “Smartphones for all”, which initially struck many observers as ridiculous, became commonplace: interest in smartphones stopped being the preserve of a technologically sophisticated minority, and became a mainstream phenomenon.
  • Companies in numerous industries realised that they needed strong mobile offerings, to retain their relevance.
  • Rather than every company developing its own smartphone platform, there were big advantages for companies to collaborate in creating shared standard platforms.
  • The attraction of smartphones grew with the availability of add-on applications that delivered functionality tailored to the needs of individual users.

Over the next decade, a range of new features became increasingly widespread on mobile phones, despite early scepticism:

  • Colour screens
  • Cameras – and video recorders
  • Messaging: SMS, simple email, rich email…
  • Web browsing: Google, Wikipedia, News…
  • Social networking: Facebook, Twitter, blogs…
  • Games – including multiplayer games
  • Maps and location-based services
  • Buying and selling (tickets, vouchers, cash).

By 2010, extraordinarily powerful mobile devices are in widespread use in almost every corner of the planet. An average bystander transported from 1998 to 2010 might well be astonished at the apparently near-magical capabilities of these ubiquitous devices.

On the other hand, many parts of our 1998 vision proved wrong.

First, we failed to foresee many of the companies that would be the most prominent in the smartphone industry by the end of the next decade. In 1998:

  • Apple seemed to be on a declining trajectory.
  • Google consisted of just a few people working in a garage. (Like Symbian, Google was founded in 1998.)
  • Samsung and LG were known to the Symbian team, but we decided not to include them on our initial list of priority sales targets, in view of their lowly sales figures.

Second, although our predictions of eventual sales figures for Symbian devices were broadly correct – namely 100s of millions – this was the result of two separate mistakes cancelling each other out:

  • We expected to have a higher share of the overall mobile phone market (over 50% – perhaps even approaching 100%).
  • We expected that overall phone market to remain at the level of 100s of millions per annum – we did not imagine it would become as large as a billion per year.

(A smaller-than-expected proportion of a larger-than-expected market worked out at around the same volume of sales.)
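That cancellation is easy to check with round numbers.  The figures below are hypothetical, chosen only to make the arithmetic obvious – they are not Symbian’s actual forecasts or sales data:

```python
# Hypothetical round numbers only -- not actual forecasts or sales data.
forecast = {"share": 0.50, "market_millions": 480}  # large share, modest market
actual   = {"share": 0.25, "market_millions": 960}  # smaller share, huge market

def volume_millions(v: dict) -> float:
    """Unit sales (millions) = platform share x total market size."""
    return v["share"] * v["market_millions"]

print(volume_millions(forecast), volume_millions(actual))  # 240.0 240.0
```

Halve the share, double the market, and the predicted volume comes out unchanged: two large forecasting errors, in opposite directions, are invisible in the headline number.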

Third – and probably most significant for drawing wider lessons – we got the timescales significantly wrong. It took considerably longer than we expected for:

  • The first successful smartphones to become available
  • Next generation networks (supporting high-speed mobile data) to be widely deployed
  • Mobile applications to become widespread.

Associated with this, many pre-existing systems remained in place much longer than anticipated, despite our predictions that they would fail to be able to adapt to changing market demands:

  • RIM sold more and more BlackBerries, despite repeated concerns that their in-house software system would become antiquated.
  • The in-house software systems of major phone manufacturers, such as Nokia’s Series 40, likewise survived long past predicted “expiry” dates.

To examine what’s going on, it’s useful to look in more detail at three groups of factors:

  1. Factors accelerating growth in the smartphone market
  2. Factors restricting growth in the smartphone market
  3. Factors that can overcome the restrictions and enable faster growth.

Having reviewed these factors in the case of smartphone technology, I’ll then revisit the three groups of factors, with an eye to general technology.

3.2 Factors accelerating growth in the smartphone market

The first smartphone sales accelerator is decreasing price. Smartphones increase in popularity because of price reductions. As the devices become less expensive, more and more people can afford them. Other things being equal, a desirable piece of consumer electronics that has a lower cost will sell more.

The underlying cost of smartphones has been coming down for several reasons. Improvements in underlying silicon technology mean that manufacturers can pack more transistors into the same area of silicon for the same cost, providing more memory and more processing power. There are also various industry scale effects. Companies who work with a mobile platform over a period of time gain the benefit of “practice makes perfect”, learning how to manage the supply chain, select lower price components, and assemble and manufacture their devices at ever lower cost.

A second sales accelerator is increasing reliability. With some exceptions (that have tended to fall by the wayside), smartphones have become more and more reliable. They start faster, have longer battery life, and need fewer resets. As such, they appeal to ordinary people in terms of speed, performance, and robustness.

A third sales accelerator is increasing stylishness. In the early days of smartphones, people would often say, “These smartphones look quite interesting, but they are a bit too big and bulky for my liking: frankly, they look and feel like a brick.” Over time, smartphones became smaller, lighter, and more stylish. In both their hardware and their software, they became more attractive and more desirable.

A fourth sales accelerator is increasing word of mouth recommendations. The following sets of people have all learned, from their own experience, good reasons why consumers should buy smartphones:

  • Industry analysts – who write reports that end up influencing a much wider network of people
  • Marketing professionals – who create compelling advertisements that appear on film, print, and web
  • Retail assistants – who are able to highlight attractive functionality in devices, at point of sale
  • Friends and acquaintances – who can be seen using various mobile services and applications, and who frequently sing the praises of specific devices.

This extra word of mouth exists, of course, because of a fifth sales accelerator – the increasing number of useful and/or entertaining mobile services that are available. This includes built-in services as well as downloadable add-on services. More and more individuals learn that mobile services exist which address specific problems they experience. This includes convenient mobile access to banking services, navigation, social networking, TV broadcasts, niche areas of news, corporate databases, Internet knowledgebases, tailored educational material, health diagnostics, and much, much more.

A sixth sales accelerator is increasing ecosystem maturity. The ecosystem is the interconnected network of companies, organisations, and individuals who create and improve the various mobile services and enabling technology. It takes time for this ecosystem to form and to learn how to operate effectively. However, in due course, it forms a pool of resources that is much larger than exists just within the first few companies who developed and used the underlying mobile platform. These additional resources provide, not just a greater numerical quantity of mobile software, but a greater variety of different innovative ideas. Some ecosystem members focus on providing lower cost components, others on providing components with higher quality and improved reliability, and yet others on revolutionary new functionality. Others again provide training, documentation, tools, testing, and so on.

In summary, smartphones are at the heart of a powerful virtuous cycle. Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle. Applications and services which prove their value as add-ons for one generation of smartphones become bundled into the next generation. With this extra built-in functionality, the next generation is intrinsically more attractive, and typically is cheaper too. Developers see an even larger market and increase their efforts to supply software for this market.

3.3 Factors restricting growth in the smartphone market

Decreasing price. Increasing reliability. Increasing stylishness. Increasing word of mouth recommendations. Increasingly useful mobile services. Increasing ecosystem maturity. What could stand in the way of these powerful accelerators?

Plenty.

First, there are technical problems with unexpected difficulty. Some problems turn out to be much harder than initially imagined. For example, consider speech recognition, in which a computer can understand spoken input. When Psion planned the Series 5 family of PDAs in the mid 1990s (as successors to the Series 3 family), we had a strong desire to include speech recognition capabilities in the device. Three “dictaphone style” buttons were positioned in a small unit on the outside of the case, so that the device could be used even when the case (a clamshell) was shut. Over-optimistically, we saw speech recognition as a potential great counter to the pen input mechanisms that were receiving lots of press attention at the time, on competing devices like the Apple Newton and the Palm Pilot. We spoke to a number of potential suppliers of voice recognition software, who assured us that suitably high-performing recognition was “just around the corner”. The next versions of their software, expected imminently, would impress us with its accuracy, they said. Alas, we eventually reached the conclusion that the performance was far too unreliable and would remain so for the foreseeable future – even if we went the extra mile on cost, and included the kind of expensive internal microphone that the suppliers recommended. We feared that “normal users” – the target audience for Psion PDAs – would be perplexed by the all-too-frequent inaccuracies in voice recognition. So we took the decision to remove that functionality. In retrospect, it was a good decision. Even ten years later, voice recognition functionality on smartphones generally fell short of user expectations.

Speech recognition is just one example of a deeply hard technical problem, that turned out to take much longer than expected to make real progress. Others include:

  • Avoiding smartphone batteries being drained too quickly, from all the processing that takes place on the smartphone
  • Enabling rapid search of all the content on a device, regardless of the application used to create that content
  • Devising a set of application programming interfaces which have the right balance between power-of-use and ease-of-use, and between openness and security.

Second, there are “chicken-and-egg” coordination problems – sometimes also known as “the prisoner’s dilemma”. New applications and services in a networked marketplace often depend on related changes being coordinated at several different points in the value chain. Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make less sense when viewed individually. For example, successful mobile phones required both networks and handsets. Successful smartphones required new data-enabled networks, new handsets, and new applications. And so on.

Above, I wrote about the potential for “a powerful virtuous cycle”:

Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle.

However, this only works once the various factors are all in place. A new ecosystem needs to be formed. This involves a considerable coordination problem: several different entities need to un-learn old customs, and adopt new ways of operating, appropriate to the new value chain. That can take a lot of time.

Worse – and this brings me to a third problem – many of the key players in a potential new ecosystem have conflicting business models. Perhaps the new ecosystem, once established, will operate with greater overall efficiency, delivering services to customers more reliably than before. However, wherever there are prospects of cost savings, there are companies who potentially lose out – companies who are benefiting from the present high prices. For example, network operators making healthy profits from standard voice services were (understandably) apprehensive about distractions or interference from low-profit data services running over their networks. They were also apprehensive about risks that applications running on their networks would:

  • Enable revenue bypass, with new services such as VoIP and email displacing, respectively, standard voice calls and text messaging
  • Saturate the network with spam
  • Cause unexpected usability problems on handsets, which the user would attribute to the network operator, entailing extra support costs for the operator.

The outcome of these risks of loss of revenue is that ecosystems might fail to form – or, having formed with a certain level of cooperation, might fail to attain deeper levels of cooperation. Vested interests get in the way of overall progress.

A fourth problem is platform fragmentation. The efforts of would-be innovators are spread across numerous different mobile platforms. Instead of a larger ecosystem all pulling in the same direction, the efforts are diffused, with the risk of confusing and misleading participants. Participants think they can re-apply skills and solutions from one mobile product in the context of another, but subtle and unexpected differences cause incompatibilities which can take a lot of time to identify and debug. Instead of collaboration effectively turning 1+1 into 3, confusion turns 1+1 into 0.5.

A fifth problem is poor usability design. Even though a product is powerful, ordinary end users can’t work out how to operate it, or get the best experience from it. They feel alienated by it, and struggle to find their favourite functionality in amongst bewildering masses of layered menu options. A small minority of potential users, known as “technology enthusiasts”, are happy to use the product, despite these usability issues; but they are rare exceptions. As such, the product fails to “cross the chasm” (to use the language of Geoffrey Moore) to the mainstream majority of users.

The sixth problem underlies many of the previous ones: it’s the problem of accelerating complexity. Each individual chunk of new software adds value, but when they coalesce in large quantities, chaos can ensue:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window.
  • Smartphone application development may grow in difficulty, as developers need to juggle different programming interfaces and optimisation methods.
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.

3.4 Factors that can overcome the restrictions and enable faster growth

Technical problems with unexpected difficulty. Chicken-and-egg coordination problems. Conflicting business models. Platform fragmentation. Poor usability design. Accelerating complexity. These are all factors that restrict smartphone progress. Without solving these problems, the latent potential of smartphone technology goes unfulfilled. What can be done about them?

At one level, the answer is: look at the companies who are achieving success with smartphones, despite these problems, and copy what they’re doing right. That’s a good starting point, although it risks being led astray by instances where companies have had a good portion of luck on their side, in addition to progress that they merited through their own deliberate actions. (You can’t jump from the observation that company C1 took action A and subsequently achieved market success, to the conclusion that company C2 should also take action A.) It also risks being led astray by instances where companies are temporarily experiencing significant media adulation, but only as a prelude to an unravelling of their market position. (You can’t jump from the observation that company C3 is currently a media darling, to the conclusion that a continuation of what it is currently doing will achieve ongoing product success.) With these caveats in mind, here is the advice that I offer.

The most important factor to overcome these growth restrictions is expertise – expertise in both design and implementation:

  • Expertise in envisioning and designing products that capture end-user attention and which are enjoyable to use again and again
  • Expertise in implementing an entire end-to-end product solution.

The necessary expertise (both design and implementation) spans eight broad areas:

  1. technology – such as blazing fast performance, network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security
  2. ecosystem design – to solve the “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised
  3. business models – identifying new ways in which groups of companies can profit from adopting new technology
  4. community management – encouraging diverse practitioners to see themselves as part of a larger whole, so that they are keen to contribute
  5. user experience – to ensure that the resulting products will be willingly accepted and embraced by “normal people” (as opposed to just early-adopter technology enthusiasts)
  6. agile project management – to avoid excess wasted investment in cases where project goals change part way through (as they inevitably do, due to the uncertain territory being navigated)
  7. lean thinking – including a bias towards practical simplicity, a profound distrust of unnecessary complexity, and a constant desire to identify and deal with bottleneck constraints
  8. system integration – the ability to pull everything together, in a way that honours the core product proposition, and which enables subsequent further evolution.

To be clear, I see these eight areas of expertise as important for all sectors of complex technology development – not just in the smartphone industry.

Expertise isn’t something that just exists in books. It manifests itself:

  • In individual people, whose knowledge spans different domains
  • In teams – where people can help and support each other, playing to everyone’s strengths
  • In tools and processes – which are the smart embodiment of previous generations of expertise, providing a good environment to work out the next generation of expertise.

In all three cases, the expertise needs to be actively nurtured and enhanced. Companies who under-estimate the extent of the expertise they need, or who try to get that expertise on the cheap – or who stifle that expertise under the constraints of mediocre management – are likely to miss out on the key opportunities provided by smartphone technology. (Just because it might appear that a company finds it easy to do various tasks, it does not follow that these tasks are intrinsically easy to carry out. True experts often make hard tasks look simple.)

But even with substantial expertise available and active, it remains essentially impossible to be sure about the timescales for major new product releases:

  • Novel technology problems can take an indeterminate amount of time to solve
  • Even if the underlying technology progresses quickly, the other factors required to create an end-to-end solution can fall foul of numerous unforeseen delays.

In case that sounds like a depressing conclusion, I’ll end this section with three brighter thoughts:

First, if predictability is particularly important for a project, you can increase the chances of your project hitting its schedule by sticking to incremental evolutions of pre-existing solutions. That can take you a long way, even though you’ll reduce the chance of more dramatic breakthroughs.

Second, if you can afford it, you should consider running two projects in parallel – one that sticks to incremental evolution, and another that experiments with more disruptive technology. Then see how they both turn out.

Third, the relationship between “speed of technology progress” and “speed of product progress” is more complex than I’ve suggested. I’ve pointed out that the latter can lag the former, especially where there’s a shortage of expertise in fields such as ecosystem management and the creation of business models. However, sometimes the latter can move faster than the former. That occurs once the virtuous cycle is working well. In that case, the underlying technological progress might be exponential, whilst the productisation progress could become super-exponential.
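To make that last point concrete, here is a toy numerical sketch (purely illustrative – the growth rule is hypothetical, not drawn from any real market data): suppose the technology level T doubles each period, while product progress P compounds each period by a factor that grows with T. Then the growth rate of log(P) itself keeps increasing – that is, P grows super-exponentially:

```python
import math

# Hypothetical toy model: technology T doubles each period (exponential),
# while product progress P compounds by a factor that rides on T's level.
T, P = 1.0, 1.0
log_p = []
for _ in range(10):
    T *= 2            # exponential technology progress
    P *= (1 + T)      # productisation compounds on the current technology level
    log_p.append(math.log(P))

# The per-period gain in log(P) keeps increasing: super-exponential growth
gains = [later - earlier for earlier, later in zip(log_p, log_p[1:])]
assert all(b > a for a, b in zip(gains, gains[1:]))
```

Any model in which productisation feeds back into technology (or vice versa) in this multiplicative way shows the same behaviour; the specific factors here are arbitrary.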

3.5 Successes and shortcomings in predicting the future of technology

We all know that it’s a perilous task to predict the future of technology. The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars. These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary. Indeed, history is littered with curious and amusing examples of flawed predictions of the future. Popular science fiction fares no better:

  • The TV series “Lost in Space”, which debuted in 1965, featured a manned spacecraft leaving Earth on 16 October 1997, en route for the star Alpha Centauri.
  • Arthur C Clarke’s “2001: A Space Odyssey”, made in 1968, featured a manned spacecraft flight to Jupiter.
  • Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”, coincidentally also first published in 1968, described a world set in 1992 in which androids (robots) are extremely hard to distinguish from humans. (Later editions of the novel moved the date forward to 2021; the film Blade Runner, which was based on the novel, is set in 2019.)

Forecasts often go wrong when they spot a trend, and then extrapolate it. Projecting trends into the future is a dangerous game:

  • Skyscrapers rapidly increased in height in the early decades of the 20th century. But after the Empire State Building was completed in 1931, the rapid increases stopped.
  • Passenger aircraft rapidly increased in speed in the middle decades of the 20th century. But after Concorde, which flew its maiden flight in 1969, there have been no more increases.
  • Manned space exploration went at what might be called “rocket pace” from the jolt of Sputnik in 1957 up to the sets of footprints on the Moon in 1969-1972, but then came to an abrupt halt. At the time of writing, there are still no confirmed plans for a manned trip to Mars.

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects. Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on. Just because a technology improved in the past, it does not mean there will be sufficient societal motivation to keep on improving it in the future. Technology is not enough. Especially for changes that are complex and demanding, up to six additional criteria need to be satisfied as well:

  1. The technological development has to satisfy a strong human need.
  2. The development has to be possible at a sufficiently attractive price to individual end users.
  3. The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle.
  4. There must be a clear implementation path whereby the eventual version of the technology can be attained through a series of steps that are, individually, easier to achieve.
  5. When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be open (to accept new ideas).
  6. Likewise, the development process must be commercially attractive, or provide some other strong incentive, to encourage the generation of new ideas, and, even more important, to encourage people to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation.

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications. For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son. The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century. The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world! As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As mentioned earlier, an important factor that can allow accelerating technological progress is the establishment of an operational virtuous cycle that provides positive feedback. Here are four more examples:

  1. The first computers were designed on paper and built by hand. Later computers benefited from computer-aided design and computer-aided manufacture. Even later computers benefit from even better computer-aided design and manufacture…
  2. Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allows more complex software to be created more quickly – including more powerful tools…
  3. More powerful hardware enables new software which enables new use cases which demand more innovation in improving the hardware further…
  4. Technology reduces prices which allows better technology to be used more widely, resulting in more people improving the technology…

A well-functioning virtuous cycle makes it more likely that technological progress can continue. But the biggest factor determining whether a difficult piece of progress occurs is often the degree of society’s motivation towards that progress. Investment in ever-faster passenger airlines ceased, because people stopped perceiving that ever-faster airlines were that important. Manned flight to Mars was likewise deemed to be insufficiently important: that’s why it didn’t take place. The kinds of radical technological progress that I discuss in this book are, I believe, all feasible, provided sufficient public motivation is generated and displayed in support of that progress. This includes major enhancements in health, education, clean energy, artificial general intelligence, human autonomy, and human fulfilment. The powerful public motivation will cause society to prioritise developing and supporting the types of rich expertise that are needed to make this technological progress a reality.

3.6 Moore’s Law: A recap

When I started work at Psion, I was given a “green-screen” console terminal, connected to a VAX minicomputer running VMS. That’s how I wrote my first pieces of software for Psion. A short while afterwards, we started using PCs. I remember that the first PC I used had a 20MB hard disk. I also remember being astonished to find that a colleague had a hard disk that was twice as large. What on earth does he do with all that disk space, I wondered. But before long, I had a new PC with a larger hard disk. And then, later, another new one. And so on, throughout my 20+ year career in Psion and Symbian. Each time a new PC arrived, I felt somewhat embarrassed at the apparent excess of computing power it provided – larger disk space, more RAM, faster CPU clock speed, etc. On leaving Symbian in October 2009, I bought a new laptop for myself, along with an external USB disk drive. That disk drive was two terabytes in size. For roughly the same amount of money (in real terms) that had purchased 20MB of disk memory in 1989, I could now buy a disk that was 100,000 times larger. That’s broadly equivalent to hard disks doubling in size every 15 months over that 20-year period.
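The doubling-period arithmetic above can be sketched in a few lines of Python (the 20MB and 2TB figures are from the text; the computation is just logarithms):

```python
import math

def doubling_period_months(growth_factor: float, years: float) -> float:
    """Months per doubling, given a total growth factor over a span of years."""
    return years * 12 / math.log2(growth_factor)

# 20 MB in 1989 -> 2 TB (2,000,000 MB) in 2009: a 100,000-fold increase
growth = 2_000_000 / 20
print(growth)                                         # 100000.0
print(round(doubling_period_months(growth, 20), 1))   # 14.4 -- i.e. roughly 15 months
```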

This repeated doubling of performance, on a fairly regular schedule, is a hallmark of what is often called “Moore’s Law”, following a paper published in 1965 by Gordon Moore (subsequently one of the founders of Intel). It’s easy to find other examples of this exponential trend within the computing industry. University of London researcher Shane Legg has published a chart of the increasing power of the world’s fastest supercomputers, from 1960 to the present day, along with a plausible extension to 2020. This chart measures the “FLOPS” capability of each supercomputer – the number of floating point (maths) operations it can execute in a second. The values move all the way from kiloFLOPS through megaFLOPS, gigaFLOPS, teraFLOPS, and petaFLOPS, and point towards exaFLOPS by 2020. Over sixty years, the performance improves through twelve and a half orders of magnitude, which is more than 40 doublings. This time, the doubling period works out at around 17 months.

Radical futurist Ray Kurzweil often uses the following example:

When I was an MIT undergraduate in 1965, we all shared a computer that took up half a building and cost tens of millions of dollars. The computer in my pocket today [a smartphone] is a million times cheaper and a thousand times more powerful. That’s a billion-fold increase in the amount of computation per dollar since I was a student.

A billion-fold increase consists of 30 doublings – which, spread out over 44 years from 1965 to 2009, gives a doubling period of around 18 months. And to get the full picture of the progress, we should include one more observation alongside the million-fold price improvement and thousand-fold processing power improvement: the 2009 smartphone is about one hundred thousand times smaller than the 1965 mainframe.

These steady improvements in computer hardware, spread out over six decades so far, are remarkable, but they’re not the only example of this kind of long-term prodigious increase. Martin Cooper, who has a good claim to be considered the inventor of the mobile phone, has pointed out that the amount of information that can be transmitted over useful radio spectrum has roughly doubled every 30 months since 1897, when Guglielmo Marconi first patented the wireless telegraph:

The rate of improvement in use of the radio spectrum for personal communications has been essentially uniform for 104 years. Further, the cumulative improvement in the effectiveness of personal communications total spectrum utilization has been over a trillion times in the last 90 years, and a million times in the last 45 years

Smartphones have benefited mightily from both Moore’s Law and Cooper’s Law. Other industries can benefit in a similar way too, to the extent that their progress can be driven by semiconductor-powered information technology, rather than by older branches of technology. As I’ll review in later chapters, there are good reasons to believe that both medicine and energy are on the point of dramatic improvements along these lines. For example, the so-called Carlson curves (named after biologist Rob Carlson) track exponential decreases in the costs of both sequencing (reading) and synthesising (writing) base pairs of DNA. It cost about $10 to sequence a single base pair in 1990, but this had reduced to just 2 cents by 2003 (the date of the completion of the human genome project). That’s 9 doublings in just 13 years – making a doubling period of around 17 months.
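The same back-of-envelope arithmetic recovers the doubling periods quoted above for Kurzweil’s billion-fold improvement and for the Carlson curves (all input figures are from the text):

```python
import math

def doubling_period_months(growth_factor: float, years: float) -> float:
    """Months per doubling, given a total growth factor over a span of years."""
    return years * 12 / math.log2(growth_factor)

# Kurzweil: a billion-fold improvement over the 44 years from 1965 to 2009
print(round(math.log2(1e9), 1))                     # 29.9 -- about 30 doublings
print(round(doubling_period_months(1e9, 44), 1))    # 17.7 -- around 18 months

# Carlson curves: $10 per base pair (1990) down to $0.02 (2003)
cost_fall = 10 / 0.02                               # a 500-fold reduction
print(round(math.log2(cost_fall), 1))               # 9.0 -- the "9 doublings"
print(round(doubling_period_months(cost_fall, 13), 1))   # 17.4 -- around 17 months
```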

Moore’s Law and Cooper’s Law are far from being mathematically exact. They should not be mistaken for laws of physics, akin to Newton’s laws of motion or Maxwell’s equations. Instead, they are empirical observations, with lots of local deviations when progress temporarily goes either faster or slower than the overall average. Furthermore, scientists and researchers need to keep on investing lots of skill, across changing disciplines, to keep the progress occurring. The explanation given on the website of Martin Cooper’s company, ArrayComm, provides useful insight:

How was this improvement in the effectiveness of personal communication achieved? The technological approaches can be loosely categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increase in magnitude of the usable radio frequency spectrum.

How much of the improvement can be attributed to each of these categories? Of the million times improvement in the last 45 years, roughly 25 times were the result of being able to use more spectrum, 5 times can be attributed to the ability to divide the radio spectrum into narrower slices — frequency division. Modulation techniques like FM, SSB, time division multiplexing, and various approaches to spread spectrum can take credit for another 5 times or so. The remaining sixteen hundred times improvement was the result of confining the area used for individual conversations to smaller and smaller areas — what we call spectrum re-use…
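As a quick sanity check, the four factors quoted in that breakdown do indeed multiply out to the overall million-fold figure:

```python
# Factors quoted by ArrayComm for the million-fold spectrum improvement
spectrum   = 25     # more usable radio spectrum
freq_div   = 5      # narrower frequency slices (frequency division)
modulation = 5      # FM, SSB, time division multiplexing, spread spectrum
reuse      = 1600   # spatial confinement of conversations (spectrum re-use)

print(spectrum * freq_div * modulation * reuse)   # 1000000
```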

Cooper suggests that his law can continue to hold until around 2050. Experts at Intel say they can foresee techniques to maintain Moore’s Law for at least another ten years – potentially longer. In assessing the wider implications of these laws, we need to consider three questions:

  1. How much technical runway is left in these laws?
  2. Can the benefits of these laws in principle be applied to transform other industries?
  3. Will wider system effects – as discussed earlier in this chapter – frustrate overall progress in these industries (despite the technical possibilities), or will they in due course even accelerate the underlying technical progress?

My answers to these questions:

  1. Plenty
  2. Definitely
  3. It depends on whether we can educate, motivate, and organise a sufficient critical mass of concerned citizens. The race is on!

>> Next chapter >>

14 May 2010

Chapter finished: Forces for positive change

Filed under: H+ Agenda — David Wood @ 1:26 am

You have to give up to go up

These words from John C Maxwell often come to my mind.  Recently, I’ve been trying to reduce various activities, so that I can dedicate more time to an activity I truly want to progress:

  • I’m trying to spend less time reading (books, for example)
  • I’m also trying to write fewer blogposts here
  • Instead, I’m prioritising writing material for my book “The Humanity+ Agenda”.

So, I have to apologise for lack of “normal service” in this blog.

However, I’ve now managed to complete a draft of chapter two of my book.  That’s good news.  You’ll find a snapshot of the current contents below.  At the same time, I’ve taken the decision that I ought to add one more chapter into the contents (it will be chapter 3, “My personal journey”).  So in a way I’m in the same situation as before: I still need to write 8 chapters.  (But I can count the draft as 2/10 finished, whereas before I was only 1/9 finished.)

The chapter I’ve just finished drafting, “Forces for positive change”, is meant to be self-explanatory, so I won’t say anything more about it here now.

I’ve also been making some changes to the first chapter (based, in part, on suggestions I’ve received from reviewers).  As I mentioned before, I’m keeping the latest drafts of all the chapters in the “Pages” section of this blog – accessible from the box on the right hand side.

I’ll be grateful for feedback.  I may not act on that feedback immediately, but I’ll get round to it in due course!

========

2. Forces for positive change

<Snapshot of material whose master copy is kept here>

<< Previous chapter <<

How do people respond to mentions of possible global crises?

In my experience, people often find that kind of discussion awkward and embarrassing.  They make a joke, or cough nervously, and try to change the topic.

One rationale for avoiding talking about an issue is that there’s nothing that can be done about that issue.  After all, what’s the point of discussing a problem if you can’t change the outcome?  There’s no merit in becoming unnecessarily agitated.  Better to focus on matters where you can change the outcome.  It’s as stated in the “acceptance clause” of Reinhold Niebuhr’s “Serenity Prayer”:

God grant me
The serenity to accept the things I cannot change…

That doesn’t apply in the case of the crises listed in the previous chapter.  For example, there are plenty of steps that people can take to reduce the risk of environmental catastrophe.  But these steps seem hard.  And this triggers a second, more complex, rationalisation:

  • If the only responses to a perceived crisis are hard, it’s preferable to hope that the crisis isn’t real, or will go away of its own accord.
  • Alternatively, if the perceived crisis turns out to be real after all, it’s preferable to hope that it won’t have any impact in the foreseeable future.
  • In any case, it’s preferable to leave it to other people to worry about the crisis.

Here’s the sense in which this rationalisation is “preferable”: it’s psychologically easier.  It allows people to go on living their lives as normal, concentrating on matters of work and play, family and friends, sport and culture.  They deny that there’s anything significant they personally should be doing about the looming crisis.  Therefore, they’re able to focus without distraction on other matters that are important to them.

This “denialist” approach can gather rational-sounding arguments in its defence.  Part of the psychological comfort blanket is the observation that “we’ve been getting along fine, without things going badly wrong in the past, thank you – despite the warnings of previous doom-mongers”.  One antidote to this is to highlight past occasions when things did go badly wrong, despite protestations of optimism from people who had become overly accustomed to a lengthy period of apparent calm and progress.  The outbreak of World War One is a stark example.  As noted by journalist Hamish McRae:

The 19th century globalisation ended with the catastrophe of the First World War. It is really scary to realise how unaware people were of the fragility of those times. In 1910, the British journalist Norman Angell published The Great Illusion, in which he argued that war between the great powers had become an economic impossibility because of “the delicate interdependence of international finance”.

In spring 1914 an international commission reported on the Balkan Wars of 1912-13. The British member of the commission, Henry Noel Brailsford, wrote: “In Europe the epoch of conquest is over and save in the Balkans perhaps on the fringes of the Austrian and Russian empires, it is as certain as anything in politics that the frontiers of our national states are finally drawn. My own belief is that there will be no more war among the six powers.”

A different kind of response to a potential pending crisis is for people to latch on, firmly, to one apparent solution to that crisis.  You know the kind of thing:

  • To prevent the risk of runaway global warming, we must, above all, reduce carbon emissions.
  • To prevent the risk of economic destabilization, we must, above all, constrain the greediness of bankers.
  • To prevent the risk of terrorists detonating weapons of mass destruction, we must, above all, increase surveillance of potential trouble-makers.

This is in line with the “action clause” of Niebuhr’s saying:

God grant me
The serenity to accept the things I cannot change
The courage to change the things that I can

Yes, if a diagnosis is correct, it can be a good thing to focus single-mindedly on what needs to be done to fix matters.  But if the diagnosis is incomplete, or flawed in other ways, this kind of single-track solution-thinking can obstruct a fuller discussion and even make matters worse.  The passion of misguided courage or premature activism can pose just as many problems as a denialist desire to damp down the conversation altogether.  I’m a big fan of passion, but passion without wisdom can often be part of the problem, rather than part of the solution.  That’s why the final clause of Niebuhr’s saying is the most important one:

God grant me
The serenity to accept the things I cannot change
The courage to change the things that I can
And the wisdom to know the difference.

In a later chapter, I’ll review what I see as “Six dangerous temptations” which are, each in their own way, mistaken single-track approaches to the impending crises of the 2010s.  But before that, it’s time for me to describe the approach that I view as much more promising.

In some ways, the approach I’ll outline is single-minded too.  But please bear with me while I spell out the whole story.

2.1 Above all, technology

The single biggest force for positive change is technology.

Some examples:

  • Mechanisation frees labourers from the drudgery of tedious physical exertion.
  • New forms of transport allow people to travel and explore, further and faster.
  • Modern buildings provide unparalleled safety, shelter and amenities.
  • Medicine intervenes to cure miserable diseases and prevent early death.
  • Improved agriculture brings forth food to eliminate famine and nurture health.
  • Information technology keeps people informed of key developments.

In prehistory, it was first fire, then agriculture, then the wheel, that set humankind on the road to civilisation.  In more recent times, the printing press enabled both the Reformation and the Enlightenment.  Reliable clocks allowed accurate tracking of the positions of ships at sea, diminishing the risk of shipwrecks.  The industrial revolution triggered a series of changes that enabled unprecedented degrees and variety of leisure time.  The Internet, coupled with the spread of both computing and mobile communications, transforms virtually every area of life in almost every corner of the globe, regardless of the type of government.

How could technology address the crises listed in the previous chapter?  In principle, the answers are as follows.

  1. There’s more than enough energy reaching the earth from the sun to answer every human energy need, allowing the discontinuation of CO2-emitting fossil fuels.  All that’s needed are improvements in technology to harvest that solar energy, store it, and transport it to where it’s needed.  Alongside solar energy, alternative energy technologies could play a role too – such as next generation nuclear fission reactors (to name just one of many possibilities).  Moreover, improved technology can be used to generate and transport ample fresh water from sea water, and to cleanly manufacture quantities of whatever resources are needed.
  2. Improved software systems could monitor and regulate the flow of financial resources around the world, to prevent economic destabilization or financial collapse.
  3. Rapidly improving living standards, enabled by smart use of technology, visibly accessible to everyone on a fair basis, will take away much of the incentive people feel towards revolution or terrorism.  Improved detection systems – akin to improved anti-virus systems in the software world, that limit the potential of damage from virus writers – will also play a part.
  4. The same sorts of improvement will allow more and more people to experience higher peaks of human fulfilment.

But these answers are unpopular.  Here are some key objections to what I’ve just proposed:

  • Technological progress is often slow and uncertain.  For example, progress with solar energy or safe nuclear fission has long been predicted, but often has been delayed.  We can’t afford the risk of waiting for technological improvements; we need to adopt other kinds of solution instead.
  • In any case, technology doesn’t address the underlying causes of human problems, such as faulty human nature, or dysfunctional social structures.
  • Worse, technology brings problems as well as solutions.  (Refer back to section 1.5, “The existential crisis of accelerating change and deepening complexity”.)

In turn, here are my responses to these objections:

  • We need to invest substantially more in technological development – but do it cleverly.  Happily, the rate of technological progress has itself been accelerating.
  • I disagree that technology only addresses “external” aspects of human life.  Technology, wisely applied, can play a big role in improving both human nature and human social structure.
  • To ensure that we obtain positive results from technological progress, rather than negative ones, we need improved monitoring and management of the development and deployment of technology.

2.2 Above all, education

My deeper answer is that the set of solutions I’m advocating is pro-human even more than it is pro-technology.  Technology provides the means, but it’s not an end in itself.

I’m not interested in technology for the “better gadgets” it can provide us.  I’m interested in technology for the “better humans” it can help us to become.

That brings me to the topic of education: life-long learning, in which we continually improve all aspects of our knowledge and intelligence – including our social intelligence and emotional intelligence as well as our analytic intelligence.

There’s a critically important two-way relationship between technology and education.

In one direction: clearer, smarter thinking, freed by good education from prejudice and misinformation, allows us to make better decisions about improving technology.  It gives us the “wisdom to know the difference”.

In the other direction: improved technology provides vital tools to assist our thinking and allow us to learn more quickly.  These tools include:

  • Widely accessible, easily searchable electronic libraries of the best thinking of the entire planet
  • Calculation engines that can swiftly compute the range of possible results of complex interactions
  • Healthy food, dietary supplements, medication, and stimulants, that allow people to concentrate more fully, while acquiring or using knowledge
  • Targeted “electronic learning” devices, increasingly hosted within rich virtual emulation environments, that allow individuals to quickly and enjoyably acquire specific knowledge and skills
  • Smart “personal digital assistants” that help debug our thinking and guide us through complex tasks
  • An online “cloud” of software services, containing both human and artificial intelligence, that can augment our own mental processes.

The “Internet of Computers” is in the process of transforming into an “Internet of Things”, connecting trillions of data sensors around the planet.  The result, in principle, is up-to-the-second information about every parameter of possible interest: crops, soil, water purity, atmospheric composition, temperature, underground vibrations…

The output of education is an improvement in individual human minds and improvement in the global human understanding.  Yet education faces some steep challenges in the 2010s:

  • The mere ability to think faster does not mean that we think better.
  • If we fall victim to fundamental biases, we’ll use our greater intelligence to find clever justifications for continuing our biases, rather than to see more clearly.
  • If a society falls victim to an outmoded but persistent misunderstanding, the same “bias preservation” dynamics operate at a larger level.
  • Various vested interests are better organised than ever before – and have a battery of mind-turning tools at their disposal.

Psychologists have identified and catalogued large numbers of widespread but flawed modes of thinking – with names such as “confirmation bias”, “sunk cost fallacy”, “illusory correlation”, and “conjunction fallacy”.  Alas, merely knowing about these flaws does not mean we are personally immune to them.

However, a credible positive message about the future can give people the courage to combat the cognitive shackles of intellectually repressive worldviews.  Educating and exciting people about the radical transformative capabilities of emerging new technologies – including clean energy, robotics, nanotechnology, synthetic biology, artificial general intelligence, and human rejuvenation engineering – can disrupt powerful entrenched cultural biases, such as those derived from fundamentalist religions or ideologies.  Humanity+, as I see it, isn’t just an appeal for better technology; it’s an appeal for a better way of thinking about technology.  This new thinking highlights the potential for improved technology, wisely deployed, to provide the kinds of solution that were at one time the sole province of religion:

  • Mental tranquility and exhilaration
  • Harmony between peoples and between nations
  • An abundance of resources, without scarcity
  • Lifespans that can in principle be extended indefinitely, containing ever greater variety and fulfilment.

To be clear, this education must clarify the perils as well as the promise of new technology, to allow us to choose wisely the courses of action that will fulfil the promise and avoid the perils.  That choice will be far from easy.  We’ll need all the help we can get.  We’ll need great strength, as well as great wisdom. Some of that strength will come from improved social structures.  And some will come from healthier, more vibrant individuals.

2.3 Above all, health

Education and health are like two sides of the same coin:

  • Education improves the mind, making us wiser (in all the many dimensions of the word “wise”)
  • Health improves the body, making us stronger (in all the many dimensions of the word “strong”).

Just as there’s a two-way relationship between technology and education, there’s a two-way relationship between technology and health.

In one direction: individuals who are healthier are able to work harder, are less prone to mistakes from tiredness or other psychological vulnerabilities, and contribute more to the growth of high quality technology.

In the other direction: improved technology provides numerous tools to improve our health:

  • To repair broken limbs or joints
  • To improve our immune systems
  • To address all kinds of diseases
  • To keep us fitter and more resilient, longer into our lifespans
  • To screen, proactively, for early warning signs of impending bodily failures.

Miniaturisation of medical tools has profound impacts on the effectiveness of many surgical processes.  The evolution from “smart phones” to “smart things” is increasingly extending to “smart cells”.  Techniques from industrial manufacturing and software programming are being re-applied at the level of items that can easily pass through our bloodstreams and other internal fluid systems.  Rather than “programming silicon” we can look forward to increasing applications of “programming carbon”.

Technology not only has the capability of restoring us to health.  It can take us beyond normal levels of health, to a state where we are “better than well”.  It can do this, first, by making changes outside our bodies – providing us with machinery to magnify our strength, telescopes and microscopes to magnify our vision, loudspeakers to magnify our voice, and so on.  But it can do this, second, by making changes inside our bodies.  These changes are more fundamental and, therefore, tend to raise more apprehension.  Many people feel these changes are “unnatural” and, therefore, should be opposed.  But arguments about things being unnatural hold little weight for me.

There’s a saying attributed to the father of Orville and Wilbur Wright:

If God wanted man to fly, he would have made us with wings.

But who among us has refused to enter an airplane on the grounds that it would be “unnatural”?  Indeed, who among us has refused to augment the natural protective and decorative aspects of our skin by covering much of that skin with clothing (something else that is “unnatural”)?  Injecting young babies with vaccines is, again, far from natural.  And what about cosmetic surgery?  Not so many years ago, we might have said “yuk” at the prospect of someone having cosmetic surgery.  Nowadays, it’s no big deal.

Imagine life in 20 or 30 years’ time, when people might routinely undergo hi-tech medical treatment that repairs, not just aspects of their skin and external appearance, but also lots of internal bodily damage – damage that you and I would describe as “aging”.  Imagine that, as a result of such treatment, someone’s life expectancy is increased by 10 years, and imagine that the treatment can be repeated on a regular basis.  Imagine that this opens the possibility for people routinely living far beyond the current maximum age of 120.  Should we object to such treatment on the grounds that it is “unnatural”?  My own expectation is that we will, very quickly, become accustomed to such treatment, and we’ll no longer bat an eyelid at it.

It will be like test tube babies.  The birth of the very first test tube baby, in 1978, was accompanied by a fervour of hand-wringing and amazement.  Nowadays, most people take the whole process for granted (whilst recognising that it’s made possible by very clever technology).  Likewise, as treatments become available that make our bodies stronger and fitter, and which repair the cellular and inter-cellular damage known as aging, there will – to start with – be a huge press hullabaloo.  But the hullabaloo will subside.

There are important questions over the desirability of people being “better than well” and having indefinitely long lifespans, and I’ll return to these in later chapters.  However, as you can see, I have little respect for attempts to reject these treatments just because they are somehow “unnatural”.

As more people come to understand the potential for greatly enhanced health within the lifetimes of many people living today, there will be a profound shift in attitude.  The shift is well captured in the amusing but insightful fable written by Nick Bostrom, “The Fable of the Dragon-Tyrant”.  I expect that, in the next ten years, large numbers of people will switch from what might be called a “pro-death” accommodationist viewpoint, which tolerates the fact of human aging and human death (albeit half-heartedly complaining about it), to an aggressive “pro-life” stance that urgently seeks to accelerate research into the technologies and treatments that can slow, and then reverse, the effects of human aging.  This pro-life stance will vigorously campaign for a much larger portion of national budgets to be allocated to research and development of pro-life technologies.  Instead of spending lots of time learning about, for example, sports team statistics, or the habits of the latest pop stars and movie actors, people everywhere will be exchanging information and ideas about pro-life technologies and treatments.

Is there a risk that a rush of interest in pro-life technologies will be at the cost of interest in other important technologies, such as those for clean energy?  Perhaps.  But probably not:

  • People who think they’re going to live longer will, other things being equal, become more interested in longer-term planetary well-being, rather than leaving that topic as something for subsequent generations to handle.
  • Technologies are frequently inter-dependent.  There’s been a very useful “convergence” of cutting edge ideas from software, nanotechnology, synthetic biology, and artificial intelligence, among others.  Many of the techniques that accelerate pro-life technologies will, at the same time, accelerate clean energy technologies.
  • Someone who becomes knowledgeable about science and technology due to an interest in life extension will tend also to discover lots of fascinating wider applications of science and technology.

This is where a renewed education programme, as briefly mentioned earlier, should have a big effect.  The result should be a critical mass of informed citizens who have a shared broad understanding of the power and potential of science and technology to provide deep solutions to the major problems facing human society.  Leaders of society will be unable to ignore this critical mass.

2.4 Above all, society

Recall one of the objections I mentioned earlier:

  • Technological progress is often slow and uncertain.  For example, progress with solar energy or safe nuclear fission has long been predicted, but often has been delayed.  We can’t afford the risk of waiting for technological improvements; we need to adopt other kinds of solution instead.

This objection can be re-stated, forcefully, against the specific pro-technology picture I’m painting.  The objection runs as follows:

  • Technological progress is often slow and uncertain.  The idea of technology making people significantly smarter (via improved education) and significantly healthier (via improved medicine) is fair enough in the long term, but don’t expect any big changes in the next 10-20 years.  We should be focusing, instead, on other kinds of solution to the problems of the early 21st century.  For example, we should be seeking changes in politics, and/or in the values that motivate human lifestyles.

As it happens, I agree with much of this objection:

  • I agree that technological progress is often slow and uncertain.  However, we can and should take steps to make it faster, and less uncertain.
  • I agree that, in parallel with a focus on improved technology, we must also address the organisation of society, via changes that only politicians can authorise.
  • Again, I agree that we must also address the question of the values that motivate human lifestyles.

In the section after this one, I’ll revisit the topic of motivational values, under the heading “Above all, humanity”.  For the moment, let’s briefly look at the question of the relationship between technology and society.

By now, you won’t be surprised if I say that there’s a two-way relationship between society and technology.  In one direction, smart changes in legislation and social structure can have a big impact on the speed and effectiveness of research, development, and deployment of new technologies.  And in the other direction, wise use of new technologies – such as the Internet – can boost the effectiveness of social structures.

Technology is developed by people and companies who are driven by various motivations, and constrained by various fears.  The motivations cover both economic desires and non-economic desires – including the “reputation economy” and the “gift economy”.  Changing the incentive structures can have a significant impact on the work performed.  However, the impact is sometimes different from what prevailing wisdom would suggest.  For example, over-emphasis of financial rewards can, in some cases, diminish the potential for people to uncover creative solutions, or to work together collaboratively.

When it comes to governing the effectiveness of technology development, the negative constraints can be even more important than the positive incentives.  If a company fears bad outcomes from some research, it will be less likely to undertake that research.  These bad outcomes can include:

  • Being sued for huge amounts of money for patent infringement by someone who, it seems, has thought of a broadly similar idea
  • Undermining an existing profitable product line by the same company.

The system of patents grew up for good reasons, but operates at the present time in a way that frequently hinders innovation and collaboration, rather than rewarding them.  This system is overdue for significant reform.  Equally pressing is the need for governments to ensure the continuing vitality of their economies, striking a good balance between the requirement for competition and the requirement for collaboration.  There also need to be checks on the ability of sophisticated, well-funded lobbyists to gain undue influence over the decisions of law-makers: democracy is far from healthy when vested interests have so much power.  Finally, societies need to be on constant guard against the twin risks of under-regulation and over-regulation, in numerous areas of potential innovation, including new drugs, new mobile computing devices, new “cloud” services, new financial services, and so on.

None of this “social engineering” is easy, but it all makes a big difference to the likelihood of rapid progress with the development of key technology.  With bad social structures, we get “the madness of the crowd”.  With good social structures, we get “the wisdom of the crowd”.

2.5 Above all, humanity

So far, I’ve described four key themes, as priorities for the coming decade: technology, education, health, and society.  There’s one more to add, which sits at the top of the whole structure: humanity.  The diagram provides a reminder of the numerous two-way interconnections between these five key themes.

The whole point of all the effort on technology, all the long hours spent on education and learning, all the labour to improve our health, and all our social engineering, is to enhance human experience, in ways that are fully sustainable, open-ended, and equitable.

You may ask: what kind of human experience? My answer is: we can presently only begin to glimpse the possibilities.

Some hints are provided by our present-day peak experiences – from music, dance, sport, games, puzzles, theatre, reading, mathematics, discovery, exploration, meditation, friendship, family, community, gardening, pets, safari, food, drink…

You may ask: won’t this become boring? My answer is: why should it? There’s a whole universe in physical space for us to explore.  And there are countless exotic universes in virtual space for us to create and explore.  There will be numerous interesting people to get to know – not to mention huge numbers of fascinating artificial intelligences.

Humanity sits at the top of this structure, not only as the end goal, but as a way to decide, in principle, the value of activity throughout society.  At present, countries tend to measure their worth via their GNP – Gross National Product – or some related economic statistic.  Leaders are happy when their GNP increases, and perturbed when it falls.  But it is widely recognised that GNP is a sorely inadequate measure.  Stirring words from a fine March 1968 speech by US senator Robert Kennedy are worth quoting at some length:

Even if we act to erase material poverty, there is another greater task, it is to confront the poverty of satisfaction – purpose and dignity – that afflicts us all.  Too much and for too long, we seemed to have surrendered personal excellence and community values in the mere accumulation of material things.  Our Gross National Product, now, is over $800 billion dollars a year, but that Gross National Product – if we judge the United States of America by that – that Gross National Product counts air pollution and cigarette advertising, and ambulances to clear our highways of carnage.  It counts special locks for our doors and the jails for the people who break them.  It counts the destruction of the redwood and the loss of our natural wonder in chaotic sprawl.  It counts napalm and counts nuclear warheads and armored cars for the police to fight the riots in our cities.  It counts Whitman’s rifle and Speck’s knife, and the television programs which glorify violence in order to sell toys to our children.  Yet the gross national product does not allow for the health of our children, the quality of their education or the joy of their play.  It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials.  It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country, it measures everything in short, except that which makes life worthwhile.

However, so long as (to quote a saying attributed to Milton Friedman) “the business of business is business”, it’s hard to expect companies to set aside the quest for economic profits in favour of these broader human values.  Shareholders will act in concert to fire management teams who fail to exploit profit opportunities.  The remedy is as already stated: a critical mass of informed citizens who have a shared broad understanding of the power and potential of science and technology to provide deep solutions to the major problems facing human society.  This mass of informed citizens can ensure that companies act in service of goals other than mere monetary reward.

Alas, everything I’ve spoken about in this chapter appears to come with a large price tag.  Education and health already consume major portions of national budgets.  Science research budgets are under great pressure, too.  In a time of austerity, it’s likely that expenditure on all these areas will fall, rather than rise.  Yet I have been arguing for an increase in each of these areas.  In later chapters of this book, I’ll expand these five priority areas into 20 specific research projects.  I’ll make the case that these 20 projects should become priorities for all of us – as individuals, organisations, institutions, universities, industry, governments, and media.  They all deserve a larger amount of attention, analysis, resourcing, and funding.  Given our current severe economic constraints, that may seem a fanciful hope.

But there is good reason for this hope.  We have in our favour the fact that improvements in any one of these areas feed into improvements elsewhere.  As our technology improves, our education improves, and so on.  Rather than simply focusing on spending more money on each of these areas, we can focus on spending money more smartly on each area.  Raising our game – in ways that I’ll explain – means that instead of “Education”, we can talk of “Education+”.  Instead of “Health”, we can speak of “Health+”.  Instead of “Technology”, we can speak of “Technology+”.  Instead of “Society”, we can speak of “Society+”.  And – you guessed it – instead of “Humanity”, we can speak of “Humanity+”.

At this stage, you will probably still have a large question in your mind: can we really expect the kinds of technological progress that I’ve been indicating as possible for the next 10-20 years? I’ll now seek to address that question in two different ways:

  • In the next chapter, I’ll describe my own personal history in the technology industry, highlighting lessons about both rapid and slow rates of progress.
  • After that, I’ll devote a chapter to each of the five key themes, highlighting each time actual progress that’s happening.


9 May 2010

Chapter completed: Crises and opportunities

Filed under: alienation, change, climate change, Economics, H+ Agenda, recession, risks, terrorism — David Wood @ 12:16 am

I’ve taken the plunge.  I’ve started writing another book, and I’ve finished the first complete draft of the first chapter.

The title I have in mind for the book is:

The Humanity+ Agenda: the vital priorities for the coming decade

The book is an extended version of the 10 minute opening presentation I gave a couple of weeks ago, at the Humanity+ UK 2010 event.  My reasons for writing this book are spelt out here.  The book will re-use and refine a lot of the material I’ve tried out from time to time in earlier posts on this blog, so you may find parts of it familiar.

I’ve had a few false starts, but I’m now happy with both the framework for the book (9 chapters in all) and a planned editing/review process.

Chapter 1 is called “Crises and opportunities”.  There’s a copy of the current draft below.

I’ll keep the latest drafts of all the chapters in the “Pages” section of this blog – accessible from the box on the right hand side.  From time to time – as in this posting – I’ll copy snapshots of the latest material into regular blogposts.

It’s my hope that the book will benefit from feedback and suggestions from readers.  Comments can be made, either to regular blogposts, or to the “pages”.  I’m also open to receiving emailed comments or contributions.  Unless someone tells me otherwise, I’ll assume that anything posted in response is intended as a potential contribution to the book.

(I’ll acknowledge, in the acknowledgements section of the book, all contributions that I use.)

========

1. Crises and opportunities

<Snapshot of material whose master copy is kept here>

The decade 2010-2019 will be a decade of crises for humanity:

  • As hundreds of millions of people worldwide significantly change their lifestyles, consuming ever more energy and generating ever more waste, the planet Earth faces increasingly great strains. “More of the same” is not an acceptable response.
  • Alongside the risk of environmental disaster, another risk looms: that of economic meltdown. The massive shocks to the global finance system at the end of the previous decade bear witness to powerful underlying tensions and problems with the operation of market economies.
  • The rapid rate of change causes widespread personal frustration and societal angst, driving a significant minority of people into the arms of beguiling ideologies such as fundamentalist Islam and the militant pursuit of terrorism. Relatively easy access to potential weapons of mass destruction – whether nuclear, biological, or chemical – transforms the threat of terrorism from an issue of national security into an issue of global survival.

Taken together, these threats are truly fearsome.

To improve humanity’s chances of surviving, in good shape, to 2020 and beyond, we need new solutions.

I believe that these new solutions are emerging in part from improved technology, and in part from an important change in attitude towards technology. This book explains the basis for these beliefs.  This chapter summarises the crises, and the remaining chapters summarise the proposed solutions.

In the phrase “Humanity+”, the plus sign after the word “Humanity” emphasises that solutions to our present situation cannot be achieved by people continuing to do the same as before. Instead, a credible vision of wise application of new technologies can bring humans – both individually and collectively – to operate in dramatically enhanced ways:

  • Humans will be able, in stages, to break further free from the crippling constraints and debilitations of our evolutionary background and our historical experiences;
  • We will, individually and collectively, become smarter, wiser, stronger, kinder, healthier, calmer, brighter, more peaceful, and more fulfilled;
  • Instead of fruitless divisions and conflicts, we’ll find much better ways to cooperate, and build social systems for mutual benefit.

This is the vision of humanity fulfilling its true potential.

But there are many obstacles on the path to this fulfilment.  These obstacles could easily drive Humanity to “Humanity-” (humanity minus), or even worse (human annihilation), rather than Humanity+.  There’s nothing inevitable about the outcome.  As a reminder of the scale of the obstacles, let’s briefly review five interrelated pending crises.

1.1 The environmental crisis

Potential shortages of clean drinking water.  Rapid reductions in the available stocks of buried energy sources, such as coal, gas, and oil.  Crippling impacts on our environment from the waste products of our lifestyles.  These – and more – represent the oncoming environmental crisis.

With good reason, the aspect of the environmental crisis that is most widely discussed is the potential threat of runaway climate change.  Our accelerating usage of fossil fuels means that carbon dioxide (CO2) in the atmosphere has reached levels unprecedented in human history.  This magnifies the greenhouse effect of the atmosphere, tending to push the average global temperature higher.  This relationship is complex.  Forget simple ideas about increases in factor A invariably being the cause of increases in factor B.  Think instead about a dance of different factors that each influence the other, in different ways at different times.  (That’s a theme that you’ll notice throughout this book.)

In the case of climate change, the players in the dance include:

  • Variation in the amount of sunlight striking earth landmasses, due to changes over geological timescales in the axis of the earth, the eccentricity of the earth’s orbit, and the distribution of landmass over different latitudes;
  • Variation in the slow-paced transfer of heat between different parts of the ocean;
  • Variation in the speed of build-up or collapse of huge polar ice sheets;
  • Variation in numerous items in the atmosphere, including aerosols (which tend to lower average temperature) and greenhouse gases (which tend to raise it);
  • Variation in the amounts of greenhouse gases, such as methane, being suddenly released into the atmosphere from buried frozen stores (for example, from tundra);
  • Variation in the sensitivity of the planet to the various “climate forcing agents” – sometimes a small change in one will lead to just small changes in the climate, but at other times the consequences are more severe.

What makes this dance potentially deadly is the twin risk of latent momentum and strong positive feedback:

  • More CO2 in the atmosphere raises the average temperature, which means there’s more H2O (water vapour) in the atmosphere too, raising the average temperature yet further;
  • Icesheets over the Antarctic and Greenland take a long time to start to disintegrate, but once the process gets under way, it can become essentially irreversible;
  • Less ice on the planet means less incoming sunlight is reflected back into space; instead, larger areas of water absorb more of the sunlight, increasing ocean temperature further;
  • Rises in sea temperatures can trigger the sudden release of huge amounts of greenhouse gases from methane clathrate compounds buried in seabeds and permafrost – another example of rapid positive feedback.
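The runaway risk in these loops comes down to a simple mathematical point, which a deliberately over-simplified sketch can illustrate.  The toy model below is my own illustration, not a climate model: the `simulate` function and every number in it are invented.  It shows only the generic behaviour of positive feedback: with a feedback gain below 1, warming settles at a finite level; with a gain of 1 or more, it runs away.

```python
# Toy illustration of positive feedback with a runaway threshold.
# All numbers are invented for illustration; this is NOT a climate model.

def simulate(forcing, feedback_gain, steps=50):
    """Iterate a simple linear feedback loop: each step, the warming
    responds to the external forcing plus a feedback term proportional
    to the warming accumulated so far."""
    temp = 0.0
    for _ in range(steps):
        temp = forcing + feedback_gain * temp
    return temp

# Weak feedback: warming settles near a finite value, forcing / (1 - gain).
print(round(simulate(forcing=1.0, feedback_gain=0.5), 2))  # prints 2.0

# Strong feedback (gain >= 1): warming grows without limit - "runaway".
print(simulate(forcing=1.0, feedback_gain=1.1) > 100)      # prints True
```

The real climate system mixes many such loops, with gains that themselves shift over time, which is precisely why the overall outcome is so hard to pin down.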

Indeed, there is significant evidence that runaway methane clathrate breakdown may have caused drastic alteration of the ocean environment and the atmosphere of earth a number of times in the past, most notably in connection with the Permian extinction event, when 96% of all marine species became extinct about 250 million years ago.

Of course, predicting the future of the environment is hard.  There are three sorts of fog surrounding climate change uncertainty:

  1. Many of the technical interactions are still unknown, or are far from being fully understood.  We are continuing to learn more;
  2. Even where we believe we do understand the technical interactions, many of the detailed outcomes are unpredictable.  Just as it’s hard to predict the weather itself, one month (say) into the future, it’s hard to predict the exact effect of ongoing climate forcing agents.  The effect that “a butterfly flapping its wings unpredictably causes a hurricane on the other side of the planet” applies to the chaos of climate as much as to the chaos of weather;
  3. There are huge numbers of vested interests, who (consciously or subconsciously) twist and distort aspects of the argument over climate change.

The vested interests include:

  • Both anti-nuclear and pro-nuclear campaigners;
  • Both anti-oil and pro-oil campaigners, and anti-coal and pro-coal campaigners;
  • Both “small is beautiful” and “big is beautiful” campaigners;
  • Both “back to nature” and “pro-technology” campaigners;
  • Scientists and authors who have long supported particular theories, and who are loath to change their viewpoints;
  • Hardened political campaigners who look to extract maximum concessions, for the region or country they represent, before agreeing a point of negotiation.

Not only is it psychologically hard for individuals to objectively review data or theories that conflict with their favoured opinions.  It is economically hard for companies (such as energy companies) to accept viewpoints that, if true, would cause major hurdles for their current lines of business, and significant loss of jobs.  On the other hand, just because researcher R has strong psychological reason P and/or strong economic incentive E in favour of advocating viewpoint V, it does not mean that viewpoint V is wrong.  The viewpoint could be correct, even though some of the support advanced in its favour is non-logical.  As I said, there’s lots of fog to navigate!

Despite all this uncertainty, I offer the following conclusions:

  • There is a wide range of possible outcomes, for the climate in the next few decades;
  • The probability of runaway global warming – with disastrous effects on sea levels, drought, agriculture, storms, species and ecosystem displacement, travel, business, and so on – is at least 20%, and likely higher;
  • Global warming won’t just make the temperature higher; it will make the weather more extreme – due to increased global temperature gradients, increased atmospheric water vapour, and higher sea temperatures that stir up more vicious storms.

A risk of at least 20% of a global environmental disaster deserves urgent attention and further analysis.  Who among us would enter an airplane with family and friends, if we believed there was a 20% probability of that airplane plummeting headlong out of the sky to the ground?

1.2 The economic crisis

The controversies and uncertainties over the potential threat of runaway climate change find parallels in discussions over a possible catastrophic implosion of the world economic system.  These discussions likewise abound with technical disagreements and vested interests.

Are governments, legislators, banks, and markets generally wise and capable enough to oversee the pressures of financial trading, and keep the systems afloat?  Was the recent series of domino-like collapses of famous banks around the world a “once in a lifetime” abnormality, that is most unlikely to repeat?  Or should we expect a recurrence of fundamental financial instability?  What is the risk of a larger financial crisis striking?  Indeed, what is the risk of adverse follow-on effects from the “tail end” of the 2008-2009 crisis, generating a so-called “double dip” in which the second dip is more drastic than the first?  On all these questions, opinions vary widely.

Despite the wide variation in opinions, some elements seem common.  All commentators are fearful of some potential causes of major disruption to global economics.  Depending on the commentator, these perceived potential causes include:

  • Clumsy regulation of financial markets;
  • Bankers who are able to take catastrophic risks in the pursuit of ever greater financial rewards;
  • The emergence of enormous monopoly powers that eliminate the benefits of marketplace competition;
  • Institutions that become “too big to fail” and therefore derail the appropriate workings of the market system;
  • Sky-high accumulation of debts, with individuals and countries living far beyond their means, for too long;
  • Austerity programmes that attempt to reduce debts quickly, but which could provoke spiralling industrial disputes and crippling strikes;
  • Bubbles that grow because “it’s temporarily rational for everyone to be irrational in their expectations” and then burst with tremendous damage.

We must not grow overconfident just because previous financial crises were, in the end, survived without the world of banking coming to an end.  First, these previous financial crises caused numerous local calamities – and the causes of major wars can be traced (in part) to these crises.  Second, there are reasons why future financial problems could have more drastic effects than previous ones:

  • There are numerous hidden interconnections between different parts of the global economy, which propagate and amplify failures when individual parts fail;
  • The complexity of new financial products far outstrips the ability of senior managers and regulators to understand and appreciate the risks involved;
  • In an age of instant electronic connections, the speed of cascading events can catch us all flat-footed.

For these reasons, I tentatively suggest we assign a ballpark risk factor of about 20% to the probability of a major global financial meltdown during the 2010s.  (Yes, this is the same numeric figure as I picked for the environmental crisis too.)
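As a back-of-envelope illustration of how such ballpark figures combine – my own arithmetic, not a figure from any formal risk model – suppose we (unrealistically) treated the two 20% risks as independent.  Elementary probability then gives the chance of at least one of the two disasters striking during the decade:

```python
# Back-of-envelope combination of the two ballpark 20% risks,
# under the (unrealistic) assumption that they are independent.
p_environment = 0.2
p_economic = 0.2

p_neither = (1 - p_environment) * (1 - p_economic)  # 0.8 * 0.8
p_at_least_one = 1 - p_neither

print(round(p_at_least_one, 2))  # prints 0.36, roughly a one-in-three chance
```

Since the two crises are in fact interconnected, independence does not really hold; the only point of the calculation is that two “moderate” one-in-five risks already compound into something substantially larger.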

Note some parallels between the two crises I’ve already discussed:

  • In each case, the devil is in the mix of weakly understood, powerful feedback systems;
  • Again in each case, our ability to discern what’s really happening is clouded by powerful non-rational factors and vested interests;
  • Again in each case, the probabilities of major disaster cannot be calculated in any precise way, but the risk appears large enough to warrant very serious investigation of solutions;
  • Again in each case, there is deep disagreement about the best solutions to deploy.

Worse, these two looming crises are themselves interconnected.  Shortage of resources such as clean energy could trigger large price hikes which throw national economies into tailspins.  Countries or regions which formerly cooperated could end up at devastating loggerheads, if an “abundance spirit” is replaced by a “scarcity spirit”.

1.3 The extreme terrorist crisis

What drives people to use bombs to inflict serious damage?  Depending on the circumstances, it’s a combination of:

  • Positive belief, in support of some country, region, ideology, or religion;
  • Negative belief, in which a group of people (“the enemy”) are seen as despicable, inferior, or somehow deserving of destruction or punishment;
  • Peer pressure, where people feel constrained by those around them to follow through on a commitment (to become, for example, a suicide bomber);
  • Personal rage, such as a desire for revenge and humiliation;
  • Aspiration for personal glory and reward, in either the present life, or a presumed afterlife;
  • Failure of countervailing “pro-cooperation” and “pro-peace” instincts or systems.

Nothing here is new for the 2010s.  What is new is the increased ease of access, by would-be inflictors of damage, to so-called weapons of mass destruction.  There is a fair probability that the terrorists who piloted passenger jet airliners into the Twin Towers and the Pentagon would have willingly caused even larger amounts of turmoil and damage, if they could have put their hands on suitable weapons.

Technology itself is neutral.  A hammer which can be used to drive a nail into a piece of wood can equally be used to knock a fellow human unconscious.  Electricity can light up houses or fry someone in an electric chair.  Explosives can clear obstacles during construction projects or can obliterate critical infrastructure assets of so-called enemies.  Biochemical manipulation can yield wonderfully nutritious new food compounds or deadly new diseases.  Nuclear engineering can provide sufficient energy to free humanity from dependency on carbon-laden fossil fuels, or suitcase-sized portable weapons capable of tearing the heart out of major cities.

As technology becomes more widely accessible – via improved education worldwide, via cheaper raw materials, and via easy access to online information – the potential grows, both for good uses and for bad uses.  A saying attributed to Eliezer Yudkowsky gives us pause for thought:

The minimum IQ required to destroy the world drops by one point every 18 months.

(This saying is sometimes called “Moore’s Law of mad scientists”.)  The statement was probably not intended to be taken with mathematical exactness, but we can agree that, over the course of a decade, the number of people capable of putting together a dreadful weapon of mass destruction will grow significantly.  The required brainpower will move from the rarefied tails of the bell curve of intelligence distribution towards the more fully populated central region.

We can imagine similar “laws” of increasing likelihood of destructive capability:

The minimum IQ required to devise and deploy a weapon that wipes out the heart of a major city drops by one point every 18 months;

The minimum IQ required to poison the water table for a region drops by one point every 18 months;

The minimum IQ required to unleash a devastating plague drops by one point every 18 months…
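The bell-curve point can be made concrete with a rough calculation.  As a sketch, assume the conventional IQ scale (mean 100, standard deviation 15) and a purely hypothetical starting threshold of 160 – the specific numbers are illustrative, not part of the original saying:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # conventional IQ distribution

# Hypothetical assumption: suppose the feat initially "requires" IQ 160.
# One point per 18 months is 120/18 ≈ 6.7 points per decade.
start = 160
drop_per_decade = 120 / 18

before = 1 - iq.cdf(start)                   # fraction of people above 160
after = 1 - iq.cdf(start - drop_per_decade)  # fraction above ~153, a decade on

print(f"capable fraction grows roughly {after / before:.0f}-fold per decade")
```

Even this modest shift towards the populated centre of the distribution multiplies the pool of “capable” individuals several-fold each decade, which is the force of the saying.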

Of course, the threat of nuclear annihilation has been with the world for half a century.  During my student days at Cambridge University, I participated in countless discussions about how best to avoid the risk of unintentional nuclear war.  Despite the forebodings of some of my contemporaries at the time, we reached the end of the 20th century unscathed.  Governments of nuclear-capable countries, regardless of their political hues and ideological positions, found good reason to avoid steps that could trigger any nuclear escalation.  What’s different with at least some fundamentalist terrorists is that they operate in a mental universe that is considerably more extreme:

  • They live for a life beyond the grave, rather than before it;
  • They believe that divine providence will take care of the outcome – any “innocents” caught up in the destruction will receive their own rewards in the afterlife, courtesy of an all-seeing, all-knowing deity;
  • They are nourished and inspired by apocalyptic writing that glorifies a vision of almighty destruction;
  • They operate with moral certainty: they seem to harbour no doubts or questions about the rightness of their course of action.

Mix this extreme mindset with sufficient raw brainpower and with weapons-grade materials that can be begged, bought, or stolen, and the stage is set for a terrorist outrage that will put 9/11 far into the shade.  In turn, the world’s reaction to that incident is likely to put the reaction to 9/11 far into its own shade.

It’s true that would-be terrorists are often incompetent: their explosives sometimes fail to detonate.  But this should give us no grounds for complacency.  The same “incompetence” can sometimes produce unforeseen consequences even more destructive than those intended.

1.4 The sense of profound personal alienation

Environmental crisis.  Economic crisis.  Extreme terrorist crisis.  Added together, we might be facing a risk of around 50% that, sometime during the 2010s, we’ll collectively look back with enormous regret and say to ourselves:

That’s the worst thing that’s happened in our lifetime.  Why oh why didn’t we act to stop it happening?  But it’s too late to make amends now.  If only we could re-run history, and take wiser choices…
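The ~50% figure is consistent with simple probability arithmetic.  As a sketch, if the three crises are treated as independent (which, as noted earlier, they are not – they interconnect), three 20% risks combine as follows:

```python
def combined_risk(risks):
    """Probability that at least one of several independent risks occurs."""
    p_none = 1.0  # probability that none of the risks materialises
    for p in risks:
        p_none *= 1.0 - p
    return 1.0 - p_none

# Environmental, economic, and extreme-terrorist crises, each at ~20%:
print(round(combined_risk([0.2, 0.2, 0.2]), 3))  # 0.488 -- roughly 50%
```

Positive correlation between the crises would change the combined figure; the independence assumption here is purely for illustration.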

But there’s more.  Here’s a probability that I’ll estimate at 100%, rather than 50%.  It’s the probability that huge numbers of individuals will look at their lives with bitter regret, and say to themselves:

This outcome was very far from the best it could have been.  This human life has missed, by miles, the richness and quality of experience that was potentially available.  Why oh why did it turn out like this?  If only I could re-run my life, and take wiser choices, or benefit from improved circumstances…

The first three crises are global crises.  This fourth one is a personal crisis.  The first three are highly visible.  The fourth might just be an internal heartache.  It’s the realisation that:

  • Life provides, at least for some people, on at least some occasions, intense feelings of vitality, creativity, flow, rapport, ecstasy, and accomplishment;
  • These “peak experiences” are generally rare, or just glimpsed;
  • The majority of human experience is at a much lower level of quality than is conceivable.

The pervasive video broadcast communications of the modern age make it all the more obvious, to increasing numbers of people, that the quality of their lives falls short of what could be imagined and desired.  These same communications also strongly hint that technology is advancing to the point where it could soon free people from the limitations of their current existence, and enable levels of experience previously imagined only for deities.  Just around the corner lies the potential of lives that are much extended, expanded, and enhanced.  How frustrating to miss out on this potential!  It brings to mind the lamentation of a venerable French noblewoman from 1783, as noted in Lewis Lapham’s 2003 Commencement speech at St. John’s College Annapolis:

[A] French noblewoman, a duchess in her eighties, …, on seeing the first ascent of Montgolfier’s balloon from the palace of the Tuileries in 1783, fell back upon the cushions of her carriage and wept. “Oh yes,” she said, “Now it’s certain. One day they’ll learn how to keep people alive forever, but I shall already be dead.”

Acts of gross destruction are often motivated by deep feelings of dissatisfaction or frustration: the world is perceived as containing significant wrongs that need righting.  So there’s a connection between the crisis of profound personal alienation and the crisis of extreme terrorism.  Thankfully, people who experience dissatisfaction or frustration don’t all react in the same way.  But even if the reaction is only (as I suggested earlier) an internal heartache, the gap between potential and reality is nonetheless profound.  Life could, and should, be so much better.

We can re-state the four crises as four huge opportunities:

  1. The opportunity to nurture an amazingly pleasant, refreshing, and intriguing environment;
  2. The opportunity to guide global economic development to sustainably create sufficient resources for everyone’s needs;
  3. The opportunity to utilise personal passions for constructive projects;
  4. The opportunity to enable individuals to persistently experience qualities of human life far, far higher than at present.

I see Humanity+ as addressing all four of these opportunities.  And it does so with an eye on one more crisis, which is the most uncertain one of the lot.

1.5 The existential crisis of accelerating change and deepening complexity

Time and again, changes have consequences that are unforeseen and unintended.  The more complex the system, the greater the likelihood of changes leading to unintended consequences.

However, human society is becoming more complex all the time:

  • Multiple different cultures and sub-cultures overlap, co-exist, and influence each other;
  • Worldwide travel is nowadays commonplace;
  • Increasing numbers of channels exist for communication and influence;
  • Society is underpinned by a rich infrastructure of multi-layered technology.

Moreover, the rate of change is increasing:

  • New products sweep around the world in ever shorter amounts of time;
  • Larger numbers of people are being educated to levels never seen before, and are entering the worlds of research, development, manufacturing, and business;
  • Online collaboration mechanisms, including social networks, wikis, and open source software, mean it is easier for innovation in one part of the world to quickly influence and benefit subsequent innovation elsewhere;
  • The transformation of more industries from “matter-dominated” to “information-dominated” means that the rapid improvement cycle of semiconductors increasingly sets the speed of progress.

These changes bring many benefits.  They also bring drawbacks, and – due to the law of unintended consequences – they bring lots of unknowns and surprises.  The risk is that we’ll waken up one morning and realise that we deeply regret one of the unforeseen side-effects.  For example, there are risks:

  • That some newly created microscopic-scale material will turn out to have deleterious effects on human life, akin to (but faster acting than) the problems arising from exposure to asbestos;
  • That some newly engineered biochemical organism will escape into the wild and turn out to have an effect like that of a plague;
  • That well-intentioned attempts at climate “geo-engineering”, to counter the risk of global warming, will trigger unexpected fast-moving geological phenomena;
  • That state-of-the-art high-energy physics experiments will somehow create unanticipated exotic new particles that destroy all nearby space and time;
  • That software defects will spread throughout part of the computing infrastructure of modern life, rendering it useless.

Here’s another example, from history.  On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be 4 to 6 megatons.  But when the device was exploded, the yield was 15 megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  Imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as one thousand breeding pairs.

It’s not just gargantuan explosions that we need fear.  As indicated above, the list of so-called “existential risks” includes highly contagious diseases, poisonous nano-particles, and catastrophic failures of the electronics infrastructure that underpins modern human society.  Add to these “known unknowns” the risk of “unknown unknowns” – the factors which we currently don’t even know that we should be considering.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  There’s a great deal that deserves our attention.  How should we respond?

>> Next chapter >>
