dw2

3 January 2011

Some memorable alarm bugs I have known

Filed under: Apple, Psion, usability — David Wood @ 12:24 pm

Here’s how the BBC website broke the news:

iPhone alarms hit by New Year glitch

A glitch on Apple’s iPhone has stopped its built-in alarm clock going off, leaving many people oversleeping on the first two days of the New Year.

Angry bloggers and tweeters complained that they had been late for work, and were risking missing planes and trains.

My first reaction was incredulity.  How could a first-class software engineering company like Apple get such basic functionality wrong?

I remember being carefully instructed, during my early days as a young software engineer with PDA pioneer Psion, that alarms were paramount.  Whatever else your mobile device might be doing at the time – however busy or full or distracted it might be – alarms had to go off when they became due.  Users were depending on them!

For example, if, when the time came, the battery was too low to power the audio clip that a user had selected for an alarm, Psion’s EPOC operating system would default to a rasping sound that could be played at a lower voltage, but which was still loud enough that the user would notice.

Further, the startup sequence of a Psion device would take care to pre-allocate sufficient resources for an alarm notifier – both in the alarm server, and in the window server that would display the alarm.  There must be no risk of running out of memory and, therefore, not being able to operate the alarm.

However, as I thought more, I remembered various alarm bugs in Psion devices.

Note: I’ve probably remembered some of the following details wrong – but I think the main gist of the stories is correct.

Insisting on sounding ALL the alarms

The first was from before I started at Psion, but was a legend that was often discussed. It applied to the alarm functionality in the Psion Organiser II.

On that device, all alarms were held in a queue, and for each alarm, there was a record of whether it had been sounded.  When the device powered up, one of the first things it would do was to check that queue for the first alarm that had not been sounded.  If it was overdue, it would be sounded immediately.  Once that alarm was acknowledged by the user, the same process would be repeated – find the next alarm that had not been sounded…

But the snag in this system became clear when the user manually advanced the time on the device (for example, on changing timezone, or, more dramatically, restoring the correct time after a system restart).  If a user had set a number of alarms, the device would insist on playing them all, one by one.  The user had no escape!
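The replay logic can be sketched in a few lines of Python (a hypothetical reconstruction – the Organiser II was not, of course, programmed in Python):

```python
from dataclasses import dataclass

@dataclass
class Alarm:
    due: int            # minutes since some epoch
    sounded: bool = False

def overdue_unsounded(queue, now):
    """On power-up, collect every alarm that is due but not yet sounded.
    Each is played and acknowledged in turn - so a manual jump forward
    in time forces the user to sit through all of them."""
    return [a for a in sorted(queue, key=lambda a: a.due)
            if not a.sounded and a.due <= now]

# A user with a daily 8am alarm advances the clock by a week:
queue = [Alarm(due=day * 24 * 60 + 8 * 60) for day in range(7)]
assert len(overdue_unsounded(queue, now=7 * 24 * 60)) == 7
```

Nothing in the loop distinguishes “alarm genuinely missed” from “clock moved forward”, which is exactly the trap the users fell into.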

Buffer overflow (part one)

The next story on my list came to a head on a date something like the 13th of September 1989.  The date is significant – it was the first Wednesday (the day with the longest name) with a two-digit day-in-month in September (the month with the longest name).  You can probably guess how this story ends.

At that time, Psion engineers were creating the MC400 laptop – a device that was in many ways ahead of its time.  (You can see some screenshots here – though none of these shots feature the MC Alarms application.  My contribution to that software, by the way, included the Text Processor application, as well as significant parts of the UI framework.)

On the day in question, several of the prototype MC400 devices stopped working.  They’d all been working fine over the previous month or so.  Eventually we spotted the pattern – they all had alarms due, but the text for the date overflowed the buffer that had been pre-allocated for composing that text as it was displayed on the screen.  Whoops.
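The arithmetic of the failure is easy to reproduce.  In this Python sketch the buffer size of 26 bytes is a guess – whatever the real figure was, “Wednesday 13 September 1989” was the first date string long enough to exceed it:

```python
import datetime

BUFFER_SIZE = 26  # hypothetical pre-allocated size for the date text

def date_text(d):
    # e.g. "Wednesday 13 September 1989" (strftime in the default C locale)
    return f"{d.strftime('%A')} {d.day} {d.strftime('%B')} {d.year}"

fits = date_text(datetime.date(1989, 9, 6))        # "Wednesday 6 September 1989"
overflows = date_text(datetime.date(1989, 9, 13))  # one character longer

assert len(fits) <= BUFFER_SIZE
assert len(overflows) > BUFFER_SIZE  # in C, this scribbles past the buffer
```

Python simply builds a longer string; the original C-style code wrote past the end of fixed storage, with the consequences described above.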

“The kind of bug that other operating systems can only dream about”

Some time around 1991 I made a rash statement, which entered into Psion’s in-house listing of ill-guarded comments: “This is the kind of bug that other operating systems can only dream about”.  It was another alarms bug – this time in the Psion Series 3 software system.

It arose when the user had an Agenda file on a memory card (which were known, at the time, as SSDs – Solid State Disks), but had temporarily removed the card.  When the time came to sound an alarm from the Agenda, the alarm server requested the Agenda application to tell it when the next Agenda alarm would be due.  This required the Agenda application to read data from the memory card.  Because the file was already marked as “open”, the File Server in the operating system tried to display a low-level message on the screen – similar to the “Retry, Abort, or Cancel” message that users of MS-DOS might remember.  This required action from the Window Server, but the Window Server was temporarily locked, waiting for a reply from the Alarm Server.  The Alarm Server was in turn locked, waiting for the File Server – which, alas, was waiting (as previously mentioned) for the Window Server.  Deadlock.

Well, that’s as much as I can recall at the moment, but I do remember it being said at the time that the deadlock chain actually involved five interconnecting servers, so I may have forgotten some of the subtleties.  Either way, the result was that the entire device would freeze.  The only sign of life was that the operating system would still emit keyclicks when the user pressed keys – but the Window Server was unable to process these keys.

In practice, this bug would tend to strike unsuspecting users who had opened an SSD door at the time the alarm happened to be due – even the SSD door on the other side of the device (an SSD could be inserted on each side).  The hardware was unable to read from one SSD, even if it was still in place, if the other door happened to be open.  As you can imagine, this defect took some considerable time to track down.
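The circular wait can be expressed as a wait-for graph.  This Python sketch (simplified to the three servers named above – the real chain apparently involved five) detects the cycle that left the device frozen:

```python
# Who is blocked waiting for a reply from whom, per the story above
waits_on = {
    "WindowServer": "AlarmServer",   # locked, awaiting the Alarm Server
    "AlarmServer": "FileServer",     # awaiting the File Server
    "FileServer": "WindowServer",    # needs the Window Server for its message
}

def find_cycle(graph, start):
    """Follow the wait-for edges; revisiting a node means deadlock."""
    seen, node = [], start
    while node in graph:
        if node in seen:
            return seen[seen.index(node):]
        seen.append(node)
        node = graph[node]
    return None

cycle = find_cycle(waits_on, "WindowServer")
assert cycle == ["WindowServer", "AlarmServer", "FileServer"]
```

A cycle in this graph is the textbook definition of deadlock: every participant holds something another one needs, and none can proceed.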

“Death city Arizona”

At roughly the same time, an even worse alarms-related bug was uncovered.  In this case, the only way out was a cold reset, which lost all data in internal memory.  The recipe to obtain the bug went roughly as follows:

  • Supplement the built-in data of cities and countries, by defining a new city, which would be your home town
  • Observe that the operating system created a file “World.wld” somewhere on internal memory, containing the details of all the cities whose details you had added or edited
  • Find a way to delete that file
  • Restart the device.

In those days of limited memory, every extra server was viewed as an overhead to be avoided if possible.  For this reason, the Alarm Server and the World Server coexisted inside a single process, sharing as many resources as possible.  The Alarm Server managed the queue of alarms, from all different applications, and the World Server looked after access to the set of information about cities and countries.  For fast access during system startup, the World Server stored some information about the current home city.  But if the full information about the home city couldn’t be retrieved (because, for example, the user had deleted the World.wld file), the server went into a tailspin, and crashed.  The lower level operating system, noticing that a critical resource had terminated, helpfully restarted it – with identical results.  The lower priority applications and servers therefore never had a chance to start up.  The user was left staring at a blank screen.
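The boot-time failure loop can be caricatured in Python (beyond the World.wld file name, everything here is a hypothetical reconstruction):

```python
def start_world_server(files):
    """The server needs its home-city data; without it, it crashes."""
    if "World.wld" not in files:
        raise RuntimeError("home city details missing")
    return "running"

def boot(files, max_restarts=5):
    """The kernel 'helpfully' restarts a crashed critical server,
    with identical results each time, so nothing else ever starts."""
    for _ in range(max_restarts):
        try:
            start_world_server(files)
            return "booted"
        except RuntimeError:
            continue  # restart the server... and crash again
    return "blank screen"

assert boot({"World.wld", "Agenda.agn"}) == "booted"
assert boot({"Agenda.agn"}) == "blank screen"  # the user deleted World.wld
```

The supervisor’s retry policy is perfectly sensible for transient failures; it is the deterministic failure (a missing file) that turns it into an infinite crash loop.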

Buffer overflow (part two)

The software that composed the text to appear on the screen, when an alarm sounded, used the EPOC equivalent of “print with formatting”.  For example, a “%d” in the text would be replaced by a numerical value, depending on other parameters passed to the function.  Here, the ‘%’ character has a special meaning.

But what if the text supplied by the user itself contains a ‘%’ character?  For example, the alarm text might be “Revision should be 50% complete by today”.  Well, in at least some circumstances, the software went looking for another parameter passed to it, where none existed.  As you can imagine, all sorts of unintended consequences could result – including memory overflows.
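Python’s %-formatting inherits the same convention, so the failure mode is easy to demonstrate.  In C the formatter would read a nonexistent parameter off the stack; Python at least raises an error:

```python
def compose_alarm_text(fmt, *args):
    """EPOC-style 'print with formatting': '%' is a directive marker."""
    return fmt % args

assert compose_alarm_text("Wake at %d:%02d", 7, 30) == "Wake at 7:30"

# But pass user-entered alarm text as the format string itself...
user_text = "Revision should be 50% complete by today"
try:
    compose_alarm_text(user_text)  # '% c' parsed as a conversion directive
    blew_up = False
except (TypeError, ValueError):
    blew_up = True
assert blew_up  # the formatter went looking for a parameter that isn't there
```

The standard defence, then as now, is never to treat user-supplied text as a format string – pass it as a parameter (“%s”) instead.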

Alarms not sounding!

Thankfully, the bugs above were all caught by in-house testing, before the device in question was released to customers.  We had a strong culture of fierce internal testing.  The last one, however, did make it into the outside world.  It impacted users who had the temerity to do the following:

  • Enter a new alarm in their Agenda
  • Switch the device off before it had time to finish working out which alarm would be the next to sound.

This problem hit users who accumulated a lot of data in their Agenda files.  In such cases, the operating system could take a non-negligible amount of time to reliably figure out what the next alarm would be.  So the user had a chance to power down the device before it had completed this calculation.  Given the EPOC focus on keeping the device in a low-power state as much as possible, the “Off” instruction was heeded quickly – too quickly in this case.  If the device had nothing else to do before that alarm was due, and if the user didn’t switch on the device for some other reason in the meantime, it wouldn’t get the chance to work out that it should be sounding that alarm.
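The race can be modelled as follows – the hypothetical detail being that the wake-up timer is only armed once the (potentially slow) scan of the Agenda completes:

```python
def armed_wakeup(agenda_alarm_times, scan_complete):
    """Return the time at which the device will wake itself to sound an
    alarm, or None if no wake-up timer has been armed yet."""
    if not scan_complete or not agenda_alarm_times:
        return None
    return min(agenda_alarm_times)

# Scan finished before power-off: the device wakes for the 8am alarm.
assert armed_wakeup([480, 720], scan_complete=True) == 480

# User powered off first: no timer armed, so the alarm is silently missed.
assert armed_wakeup([480, 720], scan_complete=False) is None
```

The fix for this class of bug is to make the power-off path wait for (or re-run) the next-alarm calculation before the device actually goes to sleep.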

Final thoughts re iPhone alarms

Psion put a great deal of thought into alarms:

  • How to implement them efficiently
  • How to ensure that users never missed alarms
  • How to provide the user with a great alarm experience.

For example, when an alarm becomes due on a Psion device, the sound starts quietly, and gradually gets louder.  If the user fails to acknowledge the alarm, the entire sequence repeats, after about one minute, then after about three minutes, and so on.  When the user does acknowledge the alarm, they have the option to stop it, silence it, or snooze it.  Pressing the snooze button adds another five minutes to the time before the alarm will sound again.  Pressing it three times, therefore, adds 15 minutes, and so on.  (And as a touch of grace: if you press the snooze button enough times, it emits a short click, and resets the time delay to five minutes – useful for sleepyheads who are too tired to take a proper look at the device, but who have enough of a desire to monitor the length of the snooze!)
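The snooze arithmetic reads like this in Python; the wrap-around threshold of twelve presses (one hour) is my guess, not a remembered figure:

```python
STEP = 5          # minutes added per press of the snooze button
MAX_PRESSES = 12  # hypothetical: past this, the delay wraps round

def snooze_delay(presses):
    """Each press adds five minutes; press often enough and the delay
    resets (with a short click) to a single five-minute step."""
    effective = ((presses - 1) % MAX_PRESSES) + 1
    return effective * STEP

assert snooze_delay(1) == 5
assert snooze_delay(3) == 15  # three presses: 15 minutes, as in the text
assert snooze_delay(13) == 5  # wrapped back round to five minutes
```

The wrap-around is the “touch of grace”: a half-asleep user can hammer the button without ever accidentally setting an hours-long snooze.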

So it’s surprising to me that Apple, with its famous focus on user experience, seems to have given comparatively little thought to the alarms on that device.  When my wife started using an iPhone in the middle of last year, she found much in it to enchant her – but the alarms were far from delightful.  It seems that the default alarms sound only once, with a rather pathetic little noise which is easy to miss.  And when we looked, we couldn’t find options to change this behaviour.  I guess the iPhone team has other things on its mind!

31 December 2010

Welcome 2011 – what will the future hold?

Filed under: aging, futurist, Humanity Plus, intelligence, rejuveneering — David Wood @ 6:42 pm

As 2010 turns into 2011, let me offer some predictions about topics that will increasingly be on people’s minds, as 2011 advances.

(Spoiler: these are all topics that will feature as speaker presentations at the Humanity+ UK 2011 conference that I’m organising in London’s Conway Hall on 29th January.  At time of writing, I’m still waiting to confirm possibly one or two more speakers for this event, but registration is already open.)

Apologies for omitting many other key emerging tech-related trends from this list.  If there’s something you care strongly about – and if you live within striking distance of London – you’ll be more than welcome to join the discussion on 29th January!

28 December 2010

Some suggested books for year-end reading

Looking for suggestions on books to read, perhaps over the year-end period of reflection and resolution for renewal?

Here are my comments on five books I’ve finished over the last few months, each of which has given me a lot to think about.

Switch: How to change things when change is hard – by Chip & Dan Heath

I had two reasons for expecting I would like this book:

I was not disappointed.  The book is full of advice that seems highly practical – advice that can be used to overcome all kinds of obstacles that people encounter when trying to change something for the better.  The book helpfully lists some of these obstacles in a summary chapter near its end.  They include:

  • “People here don’t see the need for change”
  • “People resist my idea because they say, ‘We’ve never done it like that before'”
  • “We should be doing something, but we’re getting bogged down in analysis”
  • “The environment has shifted, and we need to overcome our old patterns of behaviour”
  • “People here simply aren’t motivated to change”
  • “People here keep saying ‘It will never work'”
  • “I know what I should be doing, but I’m not doing it”
  • “I’ll change tomorrow”…

Each chapter has profound insights.  I particularly liked the insight that, from the right perspective, the steps to create a solution are often easier than the problem itself.  This is a pleasant antidote to the oft-repeated assertion that solutions need to be more profound, more complex, or more sophisticated than the problems they address.  On the contrary, change efforts frequently fail because the change effort is focussing on the wrong part of the big picture.  You can try to influence either the “rider”, the “elephant”, or the “path” down which the elephant moves.  Spend your time trying to influence the wrong part of this combo, and you can waste a great deal of energy.  But get the analysis right, and even people who appear to hate change can embrace a significant transformation.  It all depends on the circumstance.

The book offers nine practical steps – three each for the three different parts of this model:

  • Direct the rider: Find the bright spots; Script the critical moves; Point to the destination
  • Motivate the elephant: Find the feeling; Shrink the change; Grow your people
  • Shape the path: Tweak the environment; Build habits; Rally the herd.

These steps may sound trite, but these simple words summarise, in each case, a series of inspirational examples of real-world change.

The happiness advantage: The seven principles of positive psychology that fuel success and performance at work – by Shawn Achor

“The happiness advantage” shares with “Switch” the fact that it is rooted in the important emerging discipline of positive psychology.  But whereas “Switch” addresses the particular area of change management, “The happiness advantage” has a broader sweep.  It seeks to show how a range of recent findings from positive psychology can be usefully applied in a work setting, to boost productivity and performance.  The author, Shawn Achor, describes many of these findings in the context of the 10 years he spent at Harvard.  These findings include:

  • Rather than the model in which people work hard and then achieve success and then become happy, the causation goes the other way round: people with a happy outlook are more creative, more resilient, and more productive, are able to work both harder and smarter, and are therefore more likely to achieve success in their work (Achor compares this reversal of causation to the “Copernican revolution” which saw the sun as the centre of the solar system, rather than the earth)
  • Our character (including our degree of predisposition to a happy outlook) is not fixed, but can be changed by activity – this is an example of neural plasticity
  • “The Tetris effect”: once you train your brain to spot positive developments (things that merit genuine praise), that attitude increasingly becomes second nature, with lots of attendant benefits
  • Rather than a vibrant social support network being a distraction from our core activities, it can provide us with the enthusiasm and the community to make greater progress
  • “Falling up”: the right mental attitude can gain lots of advantage from creative responses to situations of short-term failure
  • “The Zorro circle”: rather than focussing on large changes, which could take a long time to accomplish, there’s great merit in restricting attention to a short period of time (perhaps one hour, or perhaps just five minutes), and to a small incremental improvement on the status quo.  Small improvements can accumulate a momentum of their own, and lead on to big wins!
  • Will power is limited – and is easily drained.  So, follow the “20 second rule”: take the time to rearrange your environment – such as your desk, or your office – so that the behaviour you’d like to happen is the easiest (“the default”).  When you’re running on auto-pilot, anything that requires a detour of more than 20 seconds is much less likely to happen.  (Achor gives the example of taking the batteries out of his TV remote control, to make it less likely he would sink into his sofa on returning home and inadvertently watch TV, rather than practice the guitar as he planned.  And – you guessed it – he made sure the guitar was within easy reach.)

You might worry that this is “just another book about the power of positive thinking”.  However, I see it as a definite step beyond that genre.  This is not a book that seeks to paint on a happy face, or to pretend that problems don’t exist.  As Achor says, “Happiness is not the belief that we don’t need to change.  It is the realization that we can”.

Nonsense on stilts: how to tell science from bunk – by Massimo Pigliucci

Many daft, dangerous ideas are couched in language that sounds scientific.  Being able to distinguish good science from “pseudoscience” is sometimes called the search for a “demarcation principle”.

The author of this book, evolutionary biologist Massimo Pigliucci, has strong views about the importance of distinguishing science from pseudoscience.  To set the scene, he gives disturbing examples such as people who use scientific-sounding language to deny the connection between HIV and AIDS (and who often advocate horrific, bizarre treatments for AIDS), or who frighten parents away from vaccinating their children by quoting spurious statistics about links between vaccination and autism.  This makes it clear that the subject is far from being an academic one, just for armchair philosophising.  On the other hand, attempts by philosophers of science such as Karl Popper to identify a clear, watertight demarcation principle all seem to fail.  Science is too varied an enterprise to be capable of a simple definition.  As a result, it can take lots of effort to distinguish good science from bad science.  Nevertheless, this effort is worth it.  And this book provides a sweeping, up-to-date survey of the issues that arise.

The book brought me back to my own postgraduate studies from 1982-1986.  My research at that time covered the philosophy of mind, the characterisation of pseudo-science, creationism vs. Darwinism, and the shocking implications of quantum mechanics.  All four of these areas were covered in this book – and more besides.

It’s a book with many opinions.  I think it gets them about 85% right.  I particularly liked:

  • His careful analysis of why “Intelligent Design” is bad science
  • His emphasis on how pseudoscience produces no new predictions, but is intellectually infertile
  • His explanation of the problems of parapsychology (studies of extrasensory perception)
  • The challenges he lays down to various fields which appear grounded in mainstream science, but which are risking divergence away from scientific principles – fields such as superstring theory and SETI (the search for extraterrestrial intelligence).

Along the way, Pigliucci shares lots of fascinating anecdotes about the history of science, and about the history of philosophy of science.  He’s a great story-teller.

The master switch: the rise and fall of information empires – by Tim Wu

Whereas “Nonsense on stilts” surveys the history of science, and draws out lessons about the most productive ways to continue to find out deeper truths about the world, “The master switch” surveys many aspects of the modern history of business, and draws out lessons about the most productive ways to organise society so that information can be shared in the most effective way.

The author, Tim Wu, is a professor at Columbia Law School, and (if anything) is an even better story-teller than Pigliucci.  He gives riveting accounts of many of the key episodes in various information businesses, such as those based on the telephone, radio, TV, cinema, cable TV, the personal computer, and the Internet.  Lots of larger-than-life figures stride across the pages.  The accounts fit together as constituents of an over-arching narrative:

  • Control over information technologies is particularly important for the well-being of society
  • There are many arguments in favour of centralised control, which avoids wasteful inefficiencies of competition
  • Equally, there are many arguments in favour of decentralised control, with open access to the various parts of the system
  • Many information industries went through one (or more) phases of decentralised control, with numerous innovators working independently, before centralisation took place (or re-emerged)
  • Government regulation sometimes works to protect centralised infrastructure, and sometimes to ensure that adequate competition takes place
  • Opening up an industry to greater competition often introduces a period of relative chaos and increased prices for consumers, before the greater benefits of richer innovation have a chance to emerge (often in unexpected ways)
  • The Internet is by no means the first information industry for which commentators had high, idealistic hopes: similar near-utopian visions also accompanied the emergence of broadcast radio and of cable television
  • A major drawback of centralised control is that too much power is vested in just one place – in what can be called a “master switch” – allowing vested interests to drastically interfere with the flow of information.

AT&T – the company founded by Bell – features prominently in this book, both as a hero, and as a villain.  Wu describes how AT&T suppressed various breakthrough technologies (including magnetic disk recording, usable in answering machines) for many years, out of a fear that they would damage the company’s main business.  Similarly, RCA suppressed FM radio for many years, and also delayed the adoption of electronic television.  Legal delays were often a primary means to delay and frustrate competitors, whose finances lacked such deep pockets.

Wu often highlights ways in which business history could have taken different directions.  The outcome that actually transpired was often a close-run thing, compared to what seemed more likely at the time.  This emphasises the contingent nature of much of history, rather than events being inevitable.  (I know this from my own experiences at Symbian.  Recent articles in The Register emphasise how Symbian nearly died at birth, well before powering more than a quarter of a billion smartphones.  Other stories, as yet untold, could emphasise how the eventual relative decline of Symbian was by no means a foretold conclusion either.)

But the biggest implications Wu highlights are when the stories come up to date, in what he sees as a huge conflict between powers that want to control modern information technology resources, and those that prefer greater degrees of openness.  As Wu clarifies, it’s a complex landscape, but Apple’s iPhone approach aims at greater centralised design control, whereas Google’s Android approach aims at enabling a much wider number of connections – connections where many benefits arise, without the need to negotiate and maintain formal partnerships.

Compared to previous information technologies, the Internet has greater elements of decentralisation built into it.  However, the lessons of the previous chapters in “The master switch” are that even this decentralisation is vulnerable to powerful interests seizing control and changing its nature.  That gives greater poignancy to present-day debates over “network neutrality” – a term that was coined by Wu in a paper he wrote in 2002.

Sex at dawn: the prehistoric origins of modern sexuality – by Christopher Ryan and Cacilda Jetha

(Sensitive readers should probably stop reading now…)

In terms of historical sweep, this last book outdoes all the others on my list.  It traces the origins of several modern human characteristics far into prehistory – to the time before agriculture, when humans existed as nomadic hunter-gatherers, with little sense of personal exclusive ownership.

This book reminds me of this oft-told story:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

I’ve read a lot on evolution over the years, and I think the evidence husband and wife authors Christopher Ryan and Cacilda Jetha accumulate chapter after chapter, in “Sex at dawn”, is reasonably convincing – even though elements of present day “polite society” may well prefer this evidence not to become “generally known”.  The authors tell a story with many jaw-dropping episodes.

Among other things, the book systematically challenges the famous phrase from Thomas Hobbes in Leviathan that, absent a government, people would lead lives that were “solitary, poor, nasty, brutish, and short”.  On the contrary, the book marshals evidence, direct and indirect, that pre-agricultural people could enjoy relatively long lives, with ample food, and a strong sense of community.  Key to this mode of existence was “fierce sharing”, in which everyone felt a strong obligation to share food within the group … and not only food.  The X-rated claim in the book is that the sharing extended to “parallel multi-male, multi-female sexual relationships”, which bolstered powerful community identities.  Monogamy is, therefore, far from being exclusively “natural”.  Evidence in support of this conclusion includes:

  • Comparisons to behaviour in bonobos and chimps – the apes which are our closest evolutionary cousins
  • The practice in several contemporary nomadic tribes, in which children are viewed as having many fathers
  • Various human anatomical features, copulatory behaviour, aspects of sperm wars, etc.

In this analysis, human sexual nature developed under one set of circumstances for several million years, until dramatic changes in relatively recent times with the advent of agriculture, cities, and widespread exclusive ownership.  Social philosophies (including religions) have sought to change the norms of behaviour, with mixed success.

I’ll leave the last words to Ryan and Jetha, from their online FAQ:

We’re not recommending anything other than knowledge, introspection, and honesty. In fact, as we say in the book, we’re not really sure what to do with this information ourselves.

5 December 2010

How do you cure an E72 with hiccups?

Filed under: Nokia, YouTube — David Wood @ 1:28 am

This video isn’t going to win any awards.  It’s only 13 seconds long, and is hyper-grainy.  But if you peer closely, you can see my Nokia E72 displaying a bizarre kind of visual hiccups.

A brief bit of history: the E72 unfortunately became waterlogged.  (Ahem.  Old habits die hard.)  I took out the battery immediately, and left everything to dry out in an airing cupboard.  After putting the battery back in and restarting the E72, things initially looked fine.  The device booted OK, and I could start navigating around the applications.

But about one minute after booting up, the display starts doing the kind of weird vertical jitter you can see in the video.

This display malfunction reminds me of my childhood days, when TVs would sometimes experience problems with their “vertical hold”.  In that bygone era, there was usually a “vertical hold” button you could twiddle on the back of the set, to fix that problem.  (Note to younger readers: this was before the advent of TV remote controllers.)  However, although the E72 has lots of keys and buttons, none of them is labelled “vertical hold”.

It also reminds me of one more thing.  This kind of vertical jitter is, sometimes, part of the normal display on my E72.  But it usually only happens once at a time, rather than getting stuck in a loop.

Does anyone have any idea what causes this vertical jitter?

I’m hoping for a more precise answer than “water damage”.  I think there must be at least some software aspect to it:

  • The jittering doesn’t start immediately when the device boots, but only after a delay.  It looks like it’s triggered by some background software process, which eventually kicks in
  • The speed of the jitter changes, depending on what else you do with the device.  For example, if you start a new app, the jittering temporarily stops, but then restarts
  • Whenever the jitter is occurring, the multi-coloured Nokia rotating “busy indicator” icon (I think that’s what it’s called) is just about visible on the title bar, suggesting that the device is trying to do something.

I wondered if there was anything in my own phone’s setup (e.g. the apps I had installed) that might, somehow, be causing this behaviour.  So I went back to the factory settings.  However, this didn’t cure the hiccups.

Almost certainly, I’m going to have to give up on using this particular device, but before I reach that outcome, I’m hoping to find a way to stop this behaviour!

In the meantime, I’ve been struggling to use an N900 as my primary smartphone.  It’s an interesting experimental device, but it’s miles away from being ready for prime-time usage.

Added later: Thanks to @taike_hk for suggesting the use of a microscope, distilled water, alcohol, and a hairdrier. (But I don’t particularly relish the thought of disassembling my E72…)

15 October 2010

Radically improving nature

Filed under: death, evolution, UKH+ — David Wood @ 10:50 pm

The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man – George Bernard Shaw

Changing the world is ambitious.  Changing nature is even more ambitious.

After all, nature is the output of countless generations of refinement by natural selection.  Evolution has found many wonderful solutions.  But natural selection generally only finds local optima.  As I’ve written on a previous occasion:

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

And as I covered in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind”:

The basic claim of the book is that many aspects of the human mind operate in clumsy and suboptimal ways – ways which betray the haphazard and often flawed evolutionary history of the mind….

The framework is, to me, both convincing and illuminating.  It provides a battery of evidence relevant to what might be called “The Nature Delusion” – the pervasive yet often unspoken belief that things crafted by nature are inevitably optimal and incapable of serious improvement.

For these reasons, I applaud thoughtful attempts to improve human nature – whether by education, meditation, diet and smart drugs, silicon co-processors for our biological brains, genetic re-engineering, and so on.  With sufficient overall understanding, we can use the best outputs of human thought to create even better humans.

But what about the rest of nature?  If we can consider creating better humans, what about creating better animals? If the technology of the near future can add 50 points, or more, to our human IQs, could we consider applying similar technological enhancements to dolphins, dogs, parrots, and so on?

There are various motivations for considering this question.  First, there are people who deeply love their pets, and who might wish to enhance the capabilities of their pets, in a manner akin to enhancing the capabilities of their children.  Someone might wonder, if my dog could speak to me, what would it say?

In a way, the experiments to teach chimps sign language already take steps down this direction.  (Some chimps that learned sign language seem in turn to have taught elements of it to their own children.)

A different motivation for considering the alteration of animal nature is the sheer amount of horrific pain and trauma throughout the animal kingdom.  Truly, nature is “red in tooth and claw”.

In his essay “The end of suffering“, British philosopher David Pearce quotes Richard Dawkins from the 1995 book River Out of Eden: A Darwinian View of Life:

During the minute it takes me to compose this sentence, thousands of animals are being eaten alive; others are running for their lives, whimpering with fear; others are being slowly devoured from within by rasping parasites; thousands of all kinds are dying from starvation, thirst and disease. It must be so. If there is ever a time of plenty, this very fact will automatically lead to an increase in population until the natural state of starvation and misery is restored.

But Pearce takes issue with Dawkins:

“It must be so.” Is Richard Dawkins right? Are the cruelties of the food chain an inescapable fact of Nature: no more changeable than, say, Planck’s constant or the Second Law of Thermodynamics? The Transhumanist Declaration expresses our commitment to the “well-being of all sentience”. Yet do these words express merely a pious hope – or an engineering challenge?

My own recent work involves exploring some of the practical steps entailed by compassionate ecosystem redesign – cross-species immunocontraception, genomic rewrites, cultured meat, neurochips, global surveillance and wildlife tracking technologies, and the use of nanorobots for marine ecosystems. Until this century, most conceivable interventions to mitigate the horrors of Nature “red in tooth and claw” would plausibly do more harm than good. Rescue a herbivore [“prey”] and a carnivore [“predator”] starves. And if, for example, we rescue wild elephants dying from hunger or thirst, the resultant population explosion would lead to habitat degradation, Malthusian catastrophe and thus even greater misery. Certainly, the computational power needed to micromanage the ecosystem of a medium-sized wildlife park would be huge by today’s standards. But recall that Nature supports only half a dozen or so “trophic levels”; and only a handful of “keystone predators” in any given habitat. Creating a truly cruelty-free living world may cost several trillion dollars or more. But the problem is computationally tractable within this century – if we acknowledge that wild animal suffering matters.

David’s fan page on Facebook boldly includes the forecast:

“I predict we will abolish suffering throughout the living world”

Unreasonable? Probably. Scientifically credible? Perhaps. Noble? Definitely. Radical? This is about as radical as it gets. Thoughtful? Read David’s own writings and make up your own mind.

Alternatively, if you’re in or nearby London, come along to this month’s UKH+ meeting (tomorrow, Saturday 16th October), where David will be the main speaker.  He wrote the following words to introduce what he’ll be talking about:

The Transhumanist Declaration advocates “the well-being of all sentience, including humans, non-human animals, and any future artificial intellects, modified life forms, or other intelligences to which technological and scientific advance may give rise.” Yet is “the well-being of all sentience” serious science – or just utopian dreaming? What does such a commitment entail? On what kind of realistic timeframe might we command enough computational power to police an entire ecosystem?

In this talk, the speaker wants to review recent progress in understanding the neurobiology of pleasure, pain and our core emotions. Can mastery of our reward circuitry ever deliver socially responsible, intelligent bliss rather than crude wireheading? He also wants to examine and respond to criticisms of the abolitionist project that have been levelled over the past decade – and set out the biggest challenges, as he sees them, to the prospect of a totally cruelty-free world.

10 October 2010

Call for speakers: Humanity+ UK2011

Filed under: Events, Humanity Plus, UKH+ — David Wood @ 2:18 pm

Although I haven’t allocated much time over the last few months to organising Humanity+ activities, I still assist the organisation on an occasional basis.

Earlier today, I issued a “call for speakers” for the January 2011 Humanity+ UK conference that will be taking place on Saturday 29 January 2011, in London’s Conway Hall.

Here’s a summary of the call:

Submissions are requested for talks lasting no more than 20 minutes on the general theme of Making a human difference. Submissions should address one or more of the following sub-themes:

  1. Technology that enhances humans
  2. Existential risks: the biggest human difference
  3. Citizen activism in support of Humanity+
  4. Humanity vs. Humanity+: criticisms and renewal
  5. Roadmapping the new human future.

Submissions need not be lengthy – around the equivalent of one page of A4 material should be sufficient. They should cover:

  • Proposed title of the talk, and which of the above sub-themes apply to it
  • Brief description of the talk
  • Brief description of the speaker
  • An explanation of why the presentation will provide value to the expected audience.

The 20 minute limit on the length of presentations is intended to ensure that speakers focus on communicating their most important messages. It will also allow a larger number of speakers (and, hence, a larger number of points of view) to be considered during the day.

A small number of speakers will also be invited to take part in panel Q&A discussions. These will be decided nearer the time of the conference.

Speaker submissions should be emailed as soon as possible to humanityplusuk AT gmail DOT com.

Speaker slots will be allocated as soon as good submissions are received, and announced on the conference blog. The call for submissions will be closed once there are no available speaking slots left.

Note: at this conference, all speakers will be required to provide slides (e.g. PowerPoint) to accompany their presentation. Speakers who fail to provide their slides to the organisers at least 48 hours before the start of the conference will be removed from the programme.

The organisers also regret that no speaker expenses, fees, or honoraria can be paid. However, speakers will receive free registration for the conference.

Footnote: For background, here’s the site for the corresponding 2010 conference, which attracted an audience of just under 200 people.

The 10 10 10 vision

Filed under: BHAG, leadership, Symbian, vision — David Wood @ 10:19 am

The phrase “10 10 10” first entered my life at a Symbian Leadership Team offsite, held in Tylney Hall in Hampshire, in early January 2007.  We were looking for a memorable new target for Symbian.

A few months earlier, in November 2006, cumulative sales of Symbian-powered phones had passed the milestone of 100 million units, and quarterly sales were continuing to grow steadily.  It was therefore a reasonable (but still bold) extrapolation for Nigel Clifford, Symbian’s CEO, to predict:

The first 100 million took 8 years [from Symbian’s founding, in June 1998], the next 100 million will take under 80 weeks

That forecast was shared with all Symbian employees later in the month, as we gathered in London’s Old Billingsgate Hall for the annual Kick Off event.  Nigel’s kick off speech also outlined the broader vision adopted by the Leadership Team at the offsite:

By 2010 we want to be shipping 10 million Symbian devices per month

If we do that we will be in 1 in 10 mobile phones shipping across the planet

So … 10 10 10

Fast forward nearly four years to the 10th of October, 2010 – to 10/10/10.  As I write these words at around 10 minutes past 10 o’clock, how did that vision turn out?

According to Canalys figures reported by the BBC, just over 27 million Symbian-powered devices were sold during Q2 2010:

Worldwide smartphone market

OS          Q2 2010 shipments   % share   Q2 2009 shipments   % share   Growth %
Symbian            27,129,340      43.5          19,178,910      50.3       41.5
RIM                11,248,830      18.0           7,975,950      20.9       41.0
Android            10,689,290      17.1           1,084,240       2.8      885.9
Apple               8,411,910      13.5           5,211,560      13.7       61.4
Microsoft           3,083,060       4.9           3,431,380       9.0      -10.2
Others              1,851,830       3.0           1,244,620       3.3       48.8
Total              62,414,260     100.0          38,126,660     100.0       63.3

Dividing by three, that makes just over 9 million units per month in Q2, which is marginally short of this part of the target.
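
That run-rate check can be sketched as a quick calculation, using only the Canalys figures quoted in the table above:

```python
# Run-rate check for the "10 10 10" target, using the Canalys Q2 2010
# figures quoted above.
symbian_q2_2010 = 27_129_340   # Symbian units shipped in Q2 2010
total_q2_2010 = 62_414_260     # all smartphone units shipped in Q2 2010

monthly = symbian_q2_2010 / 3                   # ~9.04 million units/month
share = 100 * symbian_q2_2010 / total_q2_2010   # ~43.5% of smartphone units

print(f"{monthly / 1e6:.2f} million units per month vs the 10 million target")
print(f"{share:.1f}% share of smartphone shipments")
```

Note that the 43.5% figure is Symbian’s share of smartphones, whereas the “1 in 10” part of the vision referred to all mobile phones.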

But more significantly, Symbian failed by some way to have the mindshare, in 2010, that the 2007 Leadership Team aspired to.  As the BBC report goes on to say:

Although Symbian is consistently the most popular smart phone operating system, it is often overshadowed by Apple’s iPhone and Google Android operating system.

I’m a big fan of audacious goals – sometimes called BHAGs.  The vision that Symbian would become the most widely used and most widely liked software platform on the planet, motivated me and many of my colleagues to prodigious amounts of hard work over many years.

In retrospect, were these BHAGs misguided?  It’s too early to tell, but I don’t think so. Did we make mistakes along the way?  Absolutely. Should Symbian employees, nevertheless, take great pride in what Symbian has accomplished?  Definitely. Has the final chapter been written on smartphones?  No way!

But as for myself, my vision has evolved.  I’m no longer a “Symbian smartphone enthusiast”.  Instead, I’m putting my energies into being a “smartphone technology enthusiast“.

I don’t yet have a new BHAG in mind that’s as snappy as either “10 10 10” or “become the most widely used and most widely liked software platform on the planet”, but I’m working on it.

The closest I’ve reached so far is “smartphone technology everywhere“, but that needs a lot of tightening.

Footnote: As far as I can remember, the grainy photo below is another remnant of the Symbian Leadership Team Jan 2007 Tylney Hall offsite.  (The helmets and harnesses were part of a death-defying highwire team-building exercise.  We all lived to tell the tale.)

(From left to right: Standing: Andy Brannan, Charles Davies, Nigel Clifford, David Wood, Kent Eriksson, Kathryn Hodnett, Thomas Chambers, Jorgen Behrens; Squatting: Richard Lowther, Stephen Williams.)

9 October 2010

On smartphones, superphones, and subphones

What comes next after smartphones?

There’s big league money in smartphones.  In 2009, around 173 million smartphones were sold worldwide.  IDC predicts this figure will jump to nearly 270 million in 2010.  According to Informa, that represents about 27% of the total mobile phone unit sales in 2010.  But as Informa also point out, it represents around 55% of total market value (because of their high average selling price), and a whopping 64% of the mobile phone market’s profits.
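
Combining the IDC and Informa figures above gives a back-of-envelope estimate of the total handset market; the implied total below is my own inference, not a figure from either analyst:

```python
# If ~270 million smartphones (IDC forecast) represent ~27% of all mobile
# phone unit sales in 2010 (Informa estimate), the implied total handset
# market is about one billion units.
smartphones_2010 = 270e6   # forecast smartphone units, 2010
smartphone_share = 0.27    # smartphones' share of all handset unit sales

total_handsets = smartphones_2010 / smartphone_share
print(f"Implied total handset market: {total_handsets / 1e9:.1f} billion units")
```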

As well as big money from sales of smartphones themselves, there’s big money in sales of applications for smartphones.  A recent report from Research2Guidance evaluates the global smartphone application market as being worth $2.2 (£1.4) billion during the first half of 2010, already surpassing the total value of $1.7 (£1.1) billion for all 12 months of 2009.

  • What’s next? If there’s so much money in the rapidly evolving smartphone market, where will the underlying wave of associated technological and commercial innovation strike next?  Answer that question correctly, and you might have a chance to benefit big time.

Three answers deserve attention.

1. More smartphones

The first answer is that the smartphone market is poised to become larger and larger.  The current spurt of growth is going to continue.  More and more people are going to be using smartphones, and more and more people will be downloading and using more and more applications.  This growth will be driven by:

  • Decreasing costs of smartphone devices
  • Improved network connectivity
  • An ever-wider range of different applications, tailored to individual needs of individual mobile consumers
  • Improved quality of applications, networks, and devices – driven by fierce competition
  • Burgeoning word-of-mouth recommendations, as people tell each other about compelling mobile services that they come across.

Perhaps one day soon, more than 50% of all mobile phones will be built using smartphone technology.

2. Superphones

The second answer is that smartphones are going to become smarter and more capable.  The improvements will be so striking that the phrase “smartphone” won’t do them justice.  Google used a new term, “superphone”, when it introduced the Nexus One device:

Nexus One is an exemplar of what’s possible on mobile devices through Android — when cool apps meet a fast, bright and connected computer that fits in your pocket. The Nexus One belongs in the emerging class of devices which we call “superphones”. It’s the first in what we expect to be a series of products which we will bring to market with our operator and hardware partners and sell through our online store.

Blogger Stasys Bielinis of UnwiredView takes up the analysis in his recent thought-provoking article, “Nokia’s doing OK in smartphones. It’s superphones where Apple and Google Android are winning”:

Smartphones and superphones share some common characteristics – always on connectivity, ability to make phone calls and send SMS/MMS, access the internet and install third party software apps.  But the ways these devices are used are very different – as different as iPads/tablets are different from laptops/netbooks.

The main function of a smartphone – is a mobile phone.  You use it primarily to do voice calls and send/receive short text messages via SMS/MMS.  Yes, your smartphone can do a lot more things – take pictures, browse the Web, play music, stream audio/video from the net, make use of various third-party apps.  But you use those additional functions only when you really need it, or there’s no better option than a device in your pocket, or when there’s some particularly interesting mobile service/app that requires your attention – e.g. Facebook, Twitter, Foursquare, or other status updaters.   But they are secondary functions for your smartphone. And, due to the design limitations – small displays, crammed keypads/keyboards, button navigation, etc – using those additional “smart” capabilities is a chore…

Superphones, on the other hand, are not phones anymore. They are truly small mobile computers in your pocket, with phone/texting as just another app among many. The user experience – big displays, (multi) touch, high quality browsers, etc – is optimized to transfer big screen PC interaction models to the limitations of mobile device that can fit in your pocket. While the overall experience doing various things on your superphone is a bit worse than doing those same things on your laptop, it’s not much worse, and is actually good enough for the extensive use on the go…

There’s scope to quibble with the details of this distinction.  But there’s merit in the claim that the newer smartphones – whatever we call them – typically manifest a lot more of the capabilities of the computing technology that’s embedded into them.  The result is:

  • More powerful applications
  • Delivering more useful functionality.

3. Subphones

The first answer, above, is that smartphones are going to become significantly more numerous.  The second answer is that smartphones are going to become significantly more powerful.  I believe both these answers, and both are easy to understand.  But there’s a third answer, which is just as true as the first two – and perhaps even more significant.

Smartphone technology is going to become more and more widely used inside numerous types of devices that don’t look like smartphones.

These devices aren’t just larger than smartphones (like superphones).  They are different from smartphones in all kinds of ways.

If the motto “smartphones for all” drove a great deal of the development of the mobile industry during the decade 2000-2010, a new motto will become increasingly important in the coming decade: “Smartphone technology everywhere”.  This describes a new wave of embedded software:

  • Traditional embedded software is when computing technology is used inside devices that do not look like computers;
  • The new wave of embedded software is when smartphone technology is used inside devices that do not look like smartphones.

For want of a better term, we can call these devices “subphones”: the underlying phone functionality is submerged (or embedded).

Smartphone technology everywhere

The phrase “smartphone technology” is shorthand for technology (both hardware and software) whose improvement was driven by the booming commercial opportunities of smartphones.  Market pressures led to decreased prices, improved quality, and new functionality.  Here are some examples:

  • Wireless communications chips – and the associated software
  • Software that can roam transparently over different kinds of wireless network
  • Large-scale data storage and information management – both on a device, and on the cloud
  • Appealing UIs on small, attractive, hi-res graphics displays
  • Streaming mobile multimedia
  • Device personalisation and customisation
  • Downloadable and installable applications, that add real value to the base device
  • Access to the Internet while mobile, in ways that make sense on small devices
  • High performance on comparatively low-powered hardware with long battery life
  • Numerous sensors, including location, direction, motion, and vision.

The resulting improvements allow these individual components to be re-purposed for different “subphone” devices, such as:

  • Tablets and slates
  • Connected consumer electronics (such as cameras and personal navigation devices)
  • Smart clothing – sometimes called “wearable computers” – or a “personal area network”
  • Smart cars – including advanced in-vehicle infotainment
  • Smart robots – with benefits in both industrial automation and for toys
  • Smart meters and smart homes
  • Smart digital signs, that alter their display depending on who is looking at them
  • Mobile medical equipment – including ever smaller, ever smarter “micro-bots”.

By some estimates, the number of such subphones will reach into the hundreds of billions (and even beyond) within just a few short years.  As IBM have forecast,

Soon there will be 1 trillion connected devices in the world. A smarter planet will require a smarter communications infrastructure. When things communicate, systems connect. And when systems connect, the world gets smarter.

This will be an era where M2M (machine to machine) wireless communications far exceed communications directly involving humans.  We’ll be living, not just in a sea of smart devices, but inside an “Internet of Things”.

Barriers to benefits

Smartphone technologies bring many opportunities – but these opportunities are, themselves, embedded in a network of risks and issues.  Many great mobile phone companies failed to survive the transition to smartphones.  In turn, some great smartphone companies are struggling to survive the transition to superphones.  It’s the same with subphones – they’re harder than they look.  They’re going to need new mindsets to fully capitalise on them.

To make successful products via disruptive new combinations of technology typically requires more than raw technological expertise.  A broad range of other expertise is needed too:

  • Business model innovation – to attract new companies to play new roles (often as “complementors”) in a novel setup
  • Ecosystem management – to motivate disparate developers to work together constructively
  • System integration and optimisation – so that the component technologies join together into a stable, robust, useable whole
  • User experience design – to attract and retain users to new usage patterns
  • Product differentiation – to devise and deploy product variants into nearby niches
  • Agility – to respond rapidly to user feedback and marketplace learnings.

The advance of software renders some problems simpler than before.  Next generation tools automate a great deal of what was previously complex and daunting.  However, as software is joined together in novel ways with technologies from different fields, unexpected new problems spring up, often at new boundaries.  For example, the different kinds of subphones are likely to have unexpected interactions with each other, resulting in rough edges with social and business aspects as much as technological ones.

So whilst there are many fascinating opportunities in the world beyond smartphones, these opportunities deserve to be approached with care.  Choose your partners and supporters wisely, as you contemplate these opportunities!

Footnote 1: For some vivid graphics illustrating the point that companies who excel in one era of mobile technology (eg traditional mobile phones) sometimes fail to retain their profit leadership position in a subsequent era (eg superphones), see this analysis by Asymco.

Footnote 2: On the “superphone” terminology:

It wasn’t Google that invented the term “superphone”.  Nokia’s N95 was the first phone to be widely called a superphone – from around 2006.  See eg here and here.

In my own past life, I toyed from time to time with the phrase “super smart phone” – eg in my keynote address to the 2008 Mobile 2.0 event in San Francisco.

Footnote 3: I look forward to discussing some of these topics (and much more besides) with industry colleagues, both old and new, at a couple of forthcoming conferences which I’ll be attending:

  • SEE10 – the Symbian Expo and Exchange – in Amsterdam, Nov 9-10
  • MeeGo Conference – in Dublin, Nov 13-15.

In each case, I’ll be part of the Accenture Embedded Software Services presence.

19 September 2010

Our own entrenched enemies of reason

Filed under: books, deception, evolution, intelligence, irrationality, psychology — David Wood @ 3:39 pm

I’m a pretty normal, observant guy.  If there was something as large as an elephant in that room, then I would have seen it – sure as eggs are eggs.  I don’t miss something as large as that.  So someone who says, afterwards, that there was an elephant there, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

Here’s another version of the same, faulty, line of reasoning:

I’m a pretty good police detective.  Over the years, I’ve developed the knack of knowing when people are telling the truth.  That’s what my experience has taught me.  I know when a confession is for real.  I don’t get things like that wrong.  So someone who says, afterwards, that the confession was forced, or that the criminal should get off on a technicality, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

And another:

I’m basically a moral person.  I don’t knowingly cause serious harm to my fellow human beings.  I don’t get things as badly wrong as that.  I’m not that kind of person.  So if undeniable evidence subsequently emerges that I really did seriously harm a group of people, well, these people must have deserved it.  They were part of a bad crowd.  I was actually doing society a favour.  Gosh, don’t you know, I’m one of the good guys.

Finally, consider this one:

I’m basically a savvy, intelligent person.  I don’t make major errors in reasoning.  If I take the time to investigate a religion and believe in it, I must be right.  All that investment of time and belief can’t have been wrong.  Perish the thought.  If that religion makes a prophecy – such as the end of the world on a certain date – then I must be right to believe it.  If the world subsequently appears not to have ended on that date, then it must have been our faith, and our actions, that saved the world after all.  Or maybe the world ended in an invisible, but more important way.  The kingdom of heaven has been established within. Either way, how right we were!

It can sometimes be fun to observe the self-delusions of the over-confident.  Psychologists talk about “cognitive dissonance”, when someone’s deeply held beliefs appear to be contradicted by straightforward evidence.  That person is forced to hold two incompatible viewpoints in mind at the same time: I deeply believe X, but I seem to observe not-X.  Most people are troubled by this kind of dissonance.  It’s psychologically uncomfortable.  And because it can be hard for them to give up their underlying self-belief that “If I deeply believe X, I must have good reasons to do so”, it can lead them into outlandish hoops and illogical jumps to deny the straightforward evidence.  For them, rather than “seeing is believing”, the saying becomes inverted: “believing is seeing”.

As I said, it can be fun to see the daft things people have done, to resolve their cognitive dissonance in favour of maintaining their own belief in their own essential soundness, morality, judgement, and/or reasoning.  It can be especial fun to observe the mental gymnastics of people with fundamentalist religious and/or political faith, who refuse to accept plain facts that contradict their certainty.  The same goes for believers in alien abduction, for fan boys of particular mobile operating systems, and for lots more besides.

But this can also be a deadly serious topic:

  • It can result in wrongful imprisonments, with the prosecutors unwilling to face up to the idea that their over-confidence was misplaced.  As a result, people spend many years of their life unjustly incarcerated.
  • It can result in families being shattered under the pressures of false “repressed memories” of childhood abuse, seemingly “recovered” by hypnotists and subsequently passionately believed by the apparent victims.
  • It can split up previously happy couples, who end up being besotted, not with each other, but with dreadful ideas about each other (even though “there’s always two sides to a story”).
  • Perhaps worst of all, it can result in generations-long feuds and wars – such as the disastrous entrenched enmity of the Middle East – with each side staunchly holding onto the view “we’re the good guys, and anything we did to these other guys was justified”.

Above, I’ve retold some of the thoughts that occurred to me as I recently listened to the book “Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts”, by veteran social psychologists Carol Tavris and Elliot Aronson.  (See here for this book’s website.)  At first, I found the book to be a very pleasant intellectual voyage.  It described, time and again, experimental research that should undermine anyone’s over-confidence about their abilities to observe, remember, and reason.  (I’ll come back to that research in a moment).  It reviewed real-life examples of cognitive dissonance – both personal examples and well-known historical examples.  So far, so good.  But later chapters made me more and more serious – and, frankly, more and more angry – as they explored horrific examples of miscarriages of justice (the miscarriage being subsequently demonstrated by the likes of DNA evidence), family breakups, and escalating conflicts and internecine violence.  All of this stemmed from faulty reasoning, brought on by self-justification (I’m not the kind of person who could make that kind of mistake) and by over-confidence in our own thinking skills.

Some of the same ground is covered in another recent book, “The invisible gorilla – and other ways our intuition deceives us”, by Christopher Chabris and Daniel Simons.  (See here for the website accompanying this book.)  The gorilla in the title refers to the celebrated experiment where viewers are asked to concentrate on one set of activity – counting the number of passes made by a group of basketball players – and often totally fail to notice someone in a gorilla suit wandering through the crowd of players.  Gorilla?  What gorilla?  Don’t be stupid!  If there had been a gorilla there, I would have seen it, sure as eggs are eggs.

Chapter by chapter, “The invisible gorilla” reviews evidence that we tend to be over-confident in our own abilities to observe, remember, and reason.  The chapters cover:

  • Our bias to think we would surely observe anything large and important that happened
  • Our bias to think our memories are reliable
  • Our bias to think that people who express themselves confidently are more likely to be trustworthy
  • Our bias to think that we would give equal weight to evidence that contradicts our beliefs, as to evidence that supports our beliefs (the reality is that we search high and low for confirming evidence, and quickly jump to reasons to justify ignoring disconfirming evidence)
  • Our bias to think that correlation implies causation: that if event A is often followed by event B, then A must be the cause of B
  • Our bias to think there are quick fixes that will allow significant improvements in our thinking power – such as playing classical music to babies (an effect that has been systematically discredited)
  • Our bias to think we can do many things simultaneously (“multi-task”) without any individual task being affected detrimentally.

These biases probably all were useful to Homo sapiens at an early phase of our evolutionary history.  But in the complex society of the present day, these biases do us more harm than good.

Added together, the two books provide sobering material about our cognitive biases, and about the damage that all too often follows from us being unaware of these biases.

“Mistakes were made (but not by me)” adds the further insight that we tend to descend gradually into a state of gross over-confidence.  The book frequently refers to the metaphor of a pyramid.  Before we make a strong commitment, we are often open-minded.  We could go in several different directions.  But once we start down any of the faces of the pyramid, it becomes harder and harder to retract – and we move further away from people who, initially, were in the very same undecided state as us.  The more we follow a course of action, the greater our commitment to defend all the time and energy we’ve committed down that path.  I can’t have taken a wrong decision, because if I had, I would have wasted all that time and energy, and that’s not the kind of person I am.  So we invest even more time and energy, walking yet further down that pyramid of over-confidence, in order to maintain our own self-image.

At root, what’s going wrong here is what psychologists call self-justification.  Once upon a time, the word pride would have been used.  We can’t bear to realise that our own self-image is at fault, so we continue to take actions – often harmful actions – in support of our self-image.

The final chapters of both books offer hope.  They give examples of people who are able to break out of this spiral of self-justification.  It isn’t easy.

An important conclusion is that we should put greater focus on educating people about cognitive biases.  Knowing about a cognitive bias doesn’t make us immune to it, but it does help – especially when we are still only a few rungs down the face of the pyramid.  As stated in the conclusion of “The invisible gorilla”:

One of our messages in this book is indeed negative: Be wary of your intuitions, especially intuitions about how your own mind works.  Our mental systems for rapid cognition excel at solving the problems they evolved to solve, but our cultures, societies, and technologies today are much more complex than those of our ancestors.  In many cases, intuition is poorly adapted to solving problems in the modern world.  Think twice before you decide to trust intuition over rational analysis, especially in important matters, and watch out for people who tell you intuition can be a panacea for decision-making ills…

But we also have an affirmative message to leave you with.  You can make better decisions, and maybe even get a better life, if you do your best to look for the invisible gorillas in the world around you…  There may be important things right in front of you that you aren’t noticing due to the illusion of attention.  Now that you know about this illusion, you’ll be less apt to assume you’re seeing everything there is to see.  You may think you remember some things much better than you really do, because of the illusion of memory.  Now that you understand this illusion, you’ll trust your own memories, and those of others, a bit less, and you’ll try to corroborate your memory in important situations.  You’ll recognise that the confidence people express often reflects their personalities rather than their knowledge, memory, or abilities…  You’ll be skeptical of claims that simple tricks can unleash the untapped potential in your mind, but you’ll be aware that you can develop phenomenal levels of expertise if you study and practice the right way.

Similarly, we should also take more care to explain widely the benefits of the scientific approach, which searches for disconfirming evidence as much as it searches for confirming evidence.

That’s the pro-reason approach to encouraging better reasoning.  But reason, by itself, often isn’t enough.  If we are going to face up to the fact that we’ve made grave errors of judgement, which have caused pain, injustice, and sometimes even death and destruction, we frequently need powerful emotional support.  To enable us to admit to ourselves that we’ve made major mistakes, it greatly helps if we can find another image of ourselves, which sees us as making better contributions in the future.  That’s the pro-hope approach to encouraging better reasoning.  The two books have examples of each approach.  Both books are well worth reading.  At the very least, you may get some new insight as to why discussions on Internet forums often descend into people seemingly talking past each other, or why formerly friendly colleagues can get stuck into an unhelpful rut of deeply disliking each other.

13 September 2010

Accelerating Nokia’s renewal

Filed under: leadership, Nokia, openness, software management, time to market, urgency, usability — David Wood @ 8:29 pm

“The time is right to accelerate the company’s renewal” – Jorma Ollila, Chairman of the Nokia Board of Directors, 10 Sept 2010

I’ve been a keen Nokia watcher since late 1996, when a group of senior managers from Nokia visited Psion’s offices in Sentinel House, near Edgware Road station in London.  These managers were looking for a mobile operating system to power new generations of devices that would in time come to be called smartphones.

From my observations, I fairly soon realised that Nokia had world-class operating practice.  At the time, they were “one of the big three” – along with Motorola and Ericsson.  These three companies had roughly the same mobile phone market share – sometimes with one doing a little better, sometimes with another doing a little better.  But the practices I was able to watch at close quarters, over more than a decade, drove Nokia’s position ever higher.  People stopped talking about “the big three” and recognised Nokia as being in a league of its own.

In recent times, of course, this market dominance has taken a major hit.  Unit sales volumes remain high, but the proportion of mobile industry profits won by Nokia has declined significantly in the face of new competition.  It’s no surprise that Nokia’s Chairman, Jorma Ollila, has declared the need to accelerate the company’s renewal.

Following the dramatic appointment earlier this week of a new CEO, Stephen Elop, I’ve already been asked on many occasions what advice I would offer the new CEO.  Here’s what I would say:

1. Ensure faster software execution – by improving software process quality

Delays in Nokia’s releases – both platform releases and product releases – mean that market windows are missed.  Nokia’s lengthy release lifecycles compare poorly to what more nimble competitors are achieving.

Paradoxically, the way to achieve faster release cycles is not to focus on faster release cycles.  Counter-intuitively, the best way to ensure customer satisfaction and predictable delivery is to focus more on software quality, interim customer feedback, agile project management, self-motivated teams, and general principles of excellence in software development than on schedule management itself.

This is in line with what software process expert Steve McConnell reports:

  • IBM discovered 20 years ago that projects that focused on attaining the shortest schedules had high frequencies of cost and schedule overruns;
  • Projects that focused on achieving high quality had the best schedules and the highest productivities.

The experience of Symbian Software Ltd over many years bears out the same conclusion.  The more we at Symbian focused on achieving high quality, the better we became at both schedule management and internal developer productivity.

Aside: see this previous blogpost for the argument that

In a company whose culture puts a strong emphasis upon fulfilling commitments and never missing deadlines, the agreed schedules are often built from estimates up to twice as long as the individually most likely outcomes – and even so, teams often miss these extended deadlines…

2. Recognise that quality trumps quantity

Large product development teams risk falling foul of Brooks’s Law: Adding manpower to a late software project makes it later.  In other words, too many cooks spoil the broth.  Each new person, or each new team, introduces new relationships that need to be navigated and managed.  More and more effort ends up in communications and bureaucracy, rather than in “real work”.
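The intuition behind Brooks’s Law can be made concrete: the number of pairwise communication paths in a team grows quadratically with headcount, so each new person adds more coordination overhead than the last.  Here is a minimal sketch (my own illustration, not from the original post) showing how quickly the channel count escalates:

```python
def communication_channels(n):
    """Pairwise communication paths in a team of n people: n * (n - 1) / 2."""
    return n * (n - 1) // 2

# Doubling the team far more than doubles the coordination burden.
for size in (3, 5, 10, 20, 50):
    print(f"{size:3d} people -> {communication_channels(size):4d} channels")
```

A team of 10 already has 45 potential channels; a team of 50 has 1,225 – which is why so much effort in large teams ends up in communications and bureaucracy rather than in “real work”.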

Large product development teams can also suffer from a diminution of individual quality.  This is summed up in the saying,

A-grade people hire A-grade people to work for them, but B-grade people hire C-grade people to work for them.

Related to this, in large organisations, is the Peter Principle:

In a hierarchy every employee tends to rise to their level of incompetence.

Former Nokia executive Juhani Risku recently gave a lengthy interview to The Register.  Andrew Orlowski noted the following:

One phrase repeatedly came up in our conversation: The Peter Principle. This is the rule by which people are promoted to their own level of incompetence. Many, but not all of Nokia’s executives have attained this goal, claims Risku.

One thing that does seem to be true is that Nokia’s product development teams are larger than comparable teams in other companies.  Nokia’s new CEO needs to ensure that the organisation is simplified and made more effective.  However, in the process, he should seek to retain the true world-class performers and teams in the company he is inheriting.  This will require wise discrimination – and an inspired choice of trusted advisors.

3. Identify and enable people with powerful product vision

A mediocre product delivered quickly is better than a mediocre product delivered late.  But even better is when the development process results in a product with great user appeal.

The principle of “less is more” applies here.  A product that delivers 50% of the functionality, superbly implemented, is likely to outsell a product that has 100% of the functionality but a whole cluster of usability issues.  (And the former product will almost certainly generate better public reaction.)

That’s why a relentless focus on product design is needed.  Companies like RIM and Apple have powerful product designers who are able to articulate and then boldly defend their conceptions for appealing new products – all the way through to these products reaching the market.  Although these great designers are sensitive to feedback from users, they don’t allow their core product vision to be diluted by numerous “nice ideas” that complicate the execution of the core tasks.

Nokia’s new CEO needs to identify individuals (from either inside or outside the existing organisation) who can carry out this task for Nokia’s new products.  Then he needs to enable these individuals to succeed.

For a compelling account of how Jeff Hawkins acted with this kind of single-minded focus on a “simply great product” at Palm, I recommend the book “Piloting Palm: The Inside Story of Palm, Handspring and the Birth of the Billion Dollar Handheld Industry” by Andrea Butter and David Pogue.

4. Build the absorptive capacity that will allow Nokia to benefit from openness

Nokia has often talked about Open Innovation, and has made strong bets in favour of open source.  However, it appears that it has gained comparatively little from these bets so far.

In order to benefit more fully from contributions from external developers, Nokia needs to build additional absorptive capacity into its engineering teams and processes.  Otherwise, there’s little point in continuing down the route of “openness”.  However, with the absorptive capacity in place, the underlying platforms used by Nokia should start accelerating their development – benefiting the entire community (including Nokia).

For more on some of the skills needed, see my article Open Source: necessary but not sufficient.

5. Avoid rash decisions – first, find out what is really happening

I would advise Nokia’s new CEO to urgently bring in expert software process consultants, to conduct an audit of both the strengths and the weaknesses of Nokia’s practices in software development.

To determine which teams really are performing well, and which are performing poorly, it’s not sufficient to rely on any general principle or hearsay.  Instead, I recommend the Lean principle of Genba, Genbutsu, Genjitsu:

Genba means the actual place
Genbutsu means the real thing, the actual thing
Genjitsu means the actual situation

Or, colloquially translated:

Go and see
Get the facts
Grasp the situation

6. Address the Knowing-Doing Gap

The advice I offer above is far from being alien to Nokia.  I am sure there are scores of senior managers inside Nokia who already know and appreciate the above principles.  The deeper problem is one of a “knowing doing gap”.

I’ve written on this topic before.  For now, I’ll just state the conclusion:

The following set of five characteristics distinguish companies that can successfully bridge the knowing-doing gap:

  1. They have leaders with a profound hands-on knowledge of the work domain;
  2. They have a bias for plain language and simple concepts;
  3. They encourage solutions rather than inaction, by framing questions asking “how”, not just “why”;
  4. They have strong mechanisms that close the loop – ensuring that actions are completed (rather than being forgotten, or excuses being accepted);
  5. They are not afraid to “learn by doing”, and thereby avoid analysis paralysis.

Happily for Nokia, Stephen Elop’s background seems to indicate that he will score well on these criteria.
