dw2

4 October 2019

A Silicon Valley centred view of the prehistory of smartphones

Filed under: films, Psion, smartphones, Smartphones and beyond, Symbian — David Wood @ 7:11 am

The first thing to say about the film General Magic (official site, IMDb) is that you should watch it.

The film is available on iTunes, and on Amazon Prime, and from lots of other places too.

It tracks the rise and fall of the company with the same name as the film – General Magic – and the impact of the people involved in the subsequent rise of the smartphone industry.

Here’s the trailer:

General Magic was conceived inside Apple in 1989, and, as reported at the time by the New York Times, was spun out as a separate entity in 1990:

Three well-known technologists from Apple Computer Inc., including perhaps its most distinguished programmer, Bill Atkinson, are forming a new company.

Mr. Atkinson and Marc Porat, another Apple researcher, are leaving Apple to form General Magic Inc. They will be joined by Andy Hertzfeld, who designed much of the operating system of the Macintosh computer in the early 1980’s but who has not been with Apple for six years.

The company, which will be based in Mountain View, Calif., will make products known as “personal intelligent communicators.” While the company would not elaborate, industry analysts believe this refers to handheld devices that can store appointments and other information and transmit and receive information, either over telephone lines or over the airwaves…

Mr. Atkinson, 39 years old, has been with Apple for 12 years. He is best known for developing Hypercard, a program included with every Macintosh that allows users to organize information on computerized notecards…

Dr. Porat, 42, who will be president of General Magic, came to Apple in 1988 and was manager of business development in the advanced technology group.

Much of the vision of the company came from Marc Porat, the company’s first CEO. The film quotes from a visionary email Marc Porat had written in 1990 to John Sculley, at the time Apple’s CEO, about the kinds of devices their platform would enable:

A tiny computer, a phone, a very personal object… It must be beautiful. It must offer the kind of personal satisfaction that a fine piece of jewelry brings. It will have a perceived value even when it’s not being used. It will offer the comfort of a touchstone, the tactile satisfaction of a seashell, the enchantment of a crystal. Once you use it you won’t be able to live without it.

The film also shows a large book of design ideas, dating (it said) back to the same formative era. Here are a couple of sketches from the book:

(The name given to the concept device in the first of these sketches is “remotaphonputer”.)

General Magic operated in stealth mode until 1993. By that time, many of Apple’s key employees had transferred to work there, all inspired by the vision of designing a hardware and software platform for handheld “personal intelligent communicators”. Also by that time, the company had assembled a formidable collection of investors, including AT&T, Sony, Motorola, Philips, and Panasonic. These backers were joined in due course by British Telecom, Cable & Wireless, France Telecom, Fujitsu, Mitsubishi, NTT DoCoMo, Nortel, Sanyo, and Toshiba. All these companies provided a senior executive to what was known as the “Founding Partner’s Council”, and backed General Magic with a financial stake of up to $6M each.

One powerful feature of the film is the interweaving of lots of archival documentary footage, shot during the company’s formative period by Sarah Kerruish. That shows, for example, a young Megan Smith saying that, one day, the technology would fit onto a device as small as a “Dick Tracy wristwatch”. Smith later served under Barack Obama as the USA’s Chief Technology Officer. As it happens, another young employee at General Magic, Kevin Lynch, went on to lead the Apple Watch project. And that’s only the start of the list of stellar accomplishments which lay ahead for one-time General Magic employees. As the film points out, around 98% of the present day smartphone market can be traced to efforts of two people who sat close to each other in the General Magic workspace: Andy Rubin, the designer of Android, and Tony Fadell, who is credited as “father of the iPod” and “co-inventor of the iPhone”. Rubin is mainly missing from the movie, but Fadell appears regularly, speaking with great passion.

With the aid of Goldman Sachs, General Magic IPO’ed in February 1995, in a huge publicity wave. The company’s stock price promptly doubled.

However, the company was already facing many issues. I touched on these in a short section in my own 2014 book Smartphones and Beyond, in the chapter entitled “Die like IBM, or die like Apple”. That chapter referred to various ideas contemplated by Psion in the mid 1990s as its software team laboured to create what would later be known as Symbian OS – software initially targeted for a device code-named “Protea” (this would reach the market in 1997 as the Psion Series 5):

Psion’s confidence about the prospects for its forthcoming 32-bit software system (the future Symbian OS), that was so high when serious coding had started on that system in late 1994, had grown considerably more tentative by the first half of 1996. One reason was the repeated delays in the development project, as mentioned in the previous chapter. But another reason was the changing competitive landscape.

Mounting competition

As the Protea project zigzagged forwards, sideways, and sometimes backwards, with uncertain and seemingly unknowable end date, Psion’s senior management wondered from time to time whether a different software system, obtained from outside the company, might prove a better bet for future mobile products.

For example, there was a period of around a week when senior management were enthralled by the “Magic Cap” system from a Californian company with the audacious name “General Magic”. General Magic had been spun out of Apple in 1990…

Partners and investors for General Magic included Sony, Motorola, AT&T, Philips, Matsushita, and British Telecom. A powerful buzz about the company’s future meant that its stock price doubled on the first day of its IPO in February 1995. It was therefore understandable that Psion senior managers would consider joining the General Magic party, and licence Magic Cap for use in their PDAs. After all, one of them whispered, think of the cost savings from not needing to maintain such a large in-house team of Psion’s own software developers. How much simpler to utilise ready-made software, created by the same team that had achieved such marvels in their earlier careers elsewhere in Silicon Valley! And how cute the Magic Cap software seemed, with its real-world metaphors and winsome bouncing rabbit.

That particular fancy soon evaporated. The Magic Cap software might appear cute, but closer examination revealed shallowness (weak functionality) in practice. The devices brought to market – by Sony and Motorola – were pale shadows of what the General Magic marketing machine had previously led people to expect. In contrast, Psion could see the strength in depth baked into the developing 32-bit Epoc software system. Psion’s development team escaped this particular axe.

(See here for a longer excerpt from that chapter.)

Total sales of the two devices running General Magic’s software were a paltry 3,000 units. The devices fell a long way short of the vision, and had few redeeming features. The company started a brutal downward slide. Investors were left high and dry. The post-IPO stock price of $26 per share had fallen to $1.38 by 1999.

The film highlights a major lesson: the way to implement a grand vision is via a series of incremental steps. Don’t try to fit every desired innovation into a single release of a product. Do it in stages, with good quality throughout. That’s a lesson Tony Fadell took with him from General Magic to Apple, where he oversaw regular increments to the functionality of the iPod, which in time laid the foundation for a similar set of regular increments in the functionality of the iPhone.

What the film emphasises less is the difficulty posed to the company by its wide set of powerful investors and their divergent interests. The governance problems of General Magic were high in the minds of the executives from Ericsson and Nokia who visited Psion’s offices in central London in April 1998 to discuss the potential formation of the Symbian joint venture. With the approval of a team from Nokia that included Mikko Terho and Juha Putkiranta, Ericsson’s Anders Wästerlid included the following points in a set of guiding principles:

Avoid the structure of General Magic

Need to be able to act fast

Need to learn how to deal effectively with conflicts within the group of owners

Yes, Ericsson and Nokia wanted other companies to become involved with the joint venture, in due course. However, they offered this practical observation:

The more people who are in the boat, the tougher it is to start. But it’s easy for more people to jump in once the boat is moving.

(That meeting, as well as many other steps in the formation of Symbian, are covered in a later chapter of my book, “Death Star or Nova”.)

To its credit, the film highlights one more way in which the vision of General Magic failed to anticipate how the market would develop: a lack of appreciation of the forthcoming importance of the World Wide Web. The services accessed on General Magic devices would all be provided by the network operator, such as AT&T. It was an intern who, apparently, first drew this omission to the attention of the General Magic leadership.

Where the film does less well is in the implication, running nearly all the way through, that the work of General Magic laid a uniquely important foundation for what smartphones subsequently became. One commentator states, “Without General Magic, there could never have been Android”.

In this regard, the film provides an overly Silicon Valley centred view of the prehistory of today’s smartphones.

Here’s just some of what’s missing from that view, and from what General Magic was trying to accomplish:

  • The emergence (as just mentioned) of the web
  • Push email technology, pioneered by RIM with its BlackBerry devices
  • The devices in Japan running on NTT DoCoMo’s network, with their rich ecosystem of i-mode apps and services
  • The devices running BREW apps and services on Qualcomm-powered phones
  • Simple PC connectivity, as pioneered by Palm
  • Access to enterprise services, led by Microsoft’s handheld computers
  • Nokia’s first communicator, launched in 1996, running software from GeoWorks
  • The first device marketed as a smartphone, the GS 88 launched by Ericsson in 1997, also running GeoWorks software.

Last, but not least, I am bound to mention the very considerable thinking that took place at Psion, from the early 1980s onwards. When I started work at Psion as a software engineer in June 1988, I discovered that a huge amount of design had already taken place for what would eventually become the Psion Series 3 communicator. That design was an iteration on what Psion had learned in a number of earlier projects, including two generations of handheld organiser products. On the launch of the Organiser in 1984, Psion had declared the device to be “The world’s first practical pocket computer”. This phrase headlined a magazine promotion which can be found, along with lots of other useful archive material, on Eddie Slupski’s ‘Bioeddie’ website. The magazine article went on: “The Psion Organiser will change the way you work.” It was a prescient claim.

(For more about these early design ideas at Psion, see, you guessed it, another chapter from my book, “Before the beginning”. For the causes of Psion’s eventual departure from the consumer handheld space, see later chapters of the same book.)

It’s often said that history gets to be written by the victors. The world’s most successful smartphones, by far, are from two Silicon Valley companies, Apple and Google. Therefore Silicon Valley insiders have the right to emphasise the flow of personnel and ideas from General Magic to these current platforms. Indeed, it’s a fascinating story.

However, my own view is that one-dimensional accounts of history – however absorbing – are likely to mislead. The best products and services are able to integrate insights and contributions from multiple diverse backgrounds.

10 February 2013

Fixing bugs in minds and bugs in societies

Suppose we notice what appear to be bugs in our thinking processes. Should we try to fix these bugs?

Or how about bugs in the way society works? Should we try to fix these bugs too?

As examples of bugs of the first kind, I return to a book I reviewed some time ago, “Kluge: The Haphazard Construction of the Human Mind”. I entitled my review “The human mind as a flawed creation of nature”, and I still stick by that description. In that review, I pulled out the following quote from near to the end of the book:

In this book, we’ve discussed several bugs in our cognitive makeup: confirmation bias, mental contamination, anchoring, framing, inadequate self-control, the ruminative cycle, the focussing illusion, motivated reasoning, and false memory, not to mention absent-mindedness, an ambiguous linguistic system, and vulnerability to mental disorders. Our memory, contextually driven as it is, is ill suited to many of the demands of modern life, and our self-control mechanisms are almost hopelessly split. Our ancestral mechanisms were shaped in a different world, and our more modern deliberative mechanisms can’t shake the influence of that past. In every domain we have considered, from memory to belief, choice, language, and pleasure, we have seen that a mind built largely through the progressive overlay of technologies is far from perfect…

These bugs in our mental makeup are far from being harmless quirks or curiosities. They can lead us:

  • to overly trust people who have visual trappings of authority,
  • to fail to make adequate provision for our own futures,
  • to keep throwing money into bad investments,
  • and to jump to all kinds of dangerous premature conclusions.

But should we try to fix these bugs?

The field where the term ‘bug’ was first used in this sense of a mistake, software engineering, provides many cautionary tales of bug fixing going wrong:

  • Sometimes what appears to be a ‘bug’ in a piece of software turns out to be a useful ‘feature’, with a good purpose after all
  • Sometimes a fix introduces unexpected side-effects, which are worse than the bug which was fixed.

I shared an example of the second kind in the “Managing defects” chapter of the book I wrote in 2004-5, “Symbian for software leaders: principles of successful smartphone development projects”:

An embarrassing moment with defects

The first million-selling product that I helped to build was the Psion Series 3a handheld computer. This was designed as a distinct evolutionary step-up from its predecessor, the original Series 3 (often called the “Psion 3 classic” in retrospect)…

At last the day came (several weeks late, as it happened) to ship the software to Japan, where it would be flashed into large numbers of chips ready to assemble into production Series 3a devices. It was ROM version 3.20. No sooner was it sent than panic set into the development team. Two of us had independently noticed a new defect in the agenda application. If a user set an alarm on a repeating entry, and then adjusted the time of this entry, in some circumstances the alarm would fail to ring. We reasoned that this was a really bad defect – after all, two of us had independently found it.

The engineer who had written the engine for the application – the part dealing with all data manipulation algorithms, including calculating alarm times – studied his code, and came up with a fix. We were hesitant, since it was complex code. So we performed a mass code review: lots of the best brains in the team talked through the details of the fix. After twenty four hours, we decided the fix was good. So we recalled 3.20, and released 3.21 in its place. To our relief, no chips were lost in the process: the flashing had not yet started.

Following standard practice, we upgraded the prototype devices of everyone in the development team, to run 3.21. As we waited for the chips to return, we kept using our devices – continuing (in the jargon of the team) to “eat our own dog food”. Strangely, there were a few new puzzling problems with alarms on entries. Actually, it soon became clear these problems were a lot worse than the problem that had just been fixed. As we diagnosed these new problems, a sinking feeling grew. Despite our intense care (but probably because of the intense pressure) we had failed to fully consider all the routes through the agenda engine code; the change made for 3.21 was actually a regression on previous behaviour.

Once again, we made a phone call to Japan. This time, we were too late to prevent some tens of thousands of wasted chips. We put the agenda engine code back to its previous state, and decided that was good enough! (Because of some other minor changes, the shipping version number was incremented to 3.22.) We decided to live with this one defect, in order not to hold up production any longer.

We were expecting to hear more news about this particular defect from the Psion technical support teams, but the call never came. This defect never featured on the list of defects reported by end users. In retrospect, we had been misled by the fact that two of us had independently found this defect during the final test phase: this distorted our priority call…

That was an expensive mistake, which seared a cautionary attitude into my own brain, regarding the dangers of last-minute changes to complex software. All seasoned software engineers have similar tales they can tell, from their own experience.

If attempts to fix defects in software are often counter-productive, how much more dangerous are attempts to fix defects in our thinking processes – or defects in how our societies operate! At least in the first case, we generally still have access to the source code, and to the design intention of the original software authors. For the other examples, the long evolutionary history that led to particular designs is something at which we can only guess. It would be like trying to fix a software bug that somehow results from the combination of many millions of lines of source code, written decades ago by people who left no documentation and who are not available for consultation.

What I’ve just stated is a version of an argument that conservative-minded thinkers often give, against attempts to conduct “social engineering” or “improve on nature”. Tinkering with ages-old thinking processes – or with structures within societies – carries the risk that we fail to appreciate many hidden connections. Therefore (the argument runs) we should desist from any such experimentation.

Versions of this argument appeared, from two different commentators, in responses to my previous blogpost. One put it like this:

The trouble is that ‘cognitive biases and engrained mistakes’ may appear dysfunctional but they are, in fact, evolutionarily successful adaptations of humanity to its highly complex environment. These, including prejudice, provide highly effective means for the resolution of really existing problems in human capacity…

Rational policies to deal with human and social complexity have almost invariably been proved to be inhumane and brutal, fine for the theoretician in the British Library, but dreadful in the field.

Another continued the theme:

I have much sympathy for [the] point about “cognitive biases and engrained mistakes”. The belief that one has identified cognitive bias in another or has liberated oneself from such can be a “Fatal Conceit,” to borrow a phrase from Hayek, and has indeed not infrequently given rise to inhumane treatment even of whole populations. One of my favourite sayings is David Hume’s “the rules of morality are not conclusions of our reason,” which is at the heart of Hayek’s Fatal Conceit argument.

But the conclusion I draw is different. I don’t conclude, “Never try to fix bugs”. After all, the very next sentence from my chapter on “Managing defects” stated, “We eventually produced a proper fix several months later”. Indeed, many bugs do demand urgent fixes. Instead, my conclusion is that bug fixing in complex systems needs a great deal of careful thought, including cautious experimentation, data analysis, and peer review.

The analogy can be taken one more step. Suppose that a software engineer has a bad track record in his or her defect fixes. Despite the engineer claiming, each time, to be exercising care and attention, the results speak differently: the fixes usually make things worse. Suppose, further, that this software engineer comes from a particular company, and that fixes from that company have the same poor track record. (To make this more vivid, the name of this company might be “Technocratic solutions” or “Socialista” or “Utopia software”. You can probably see where this argument is going…) That would be a reason for especial discomfort whenever someone new from that company submits code changes in an attempt to fix a given bug.

Well, something similar happens in the field of social change. History has shown, in many cases, that attempts at mental engineering and social engineering were counter-productive. For that reason, many conservatives support various “precautionary principles”. They are especially fearful of any social changes proposed by people they can tar with labels such as “technocratic” or “socialist” or “utopian”.

These precautionary principles presuppose that the ‘cure’ will be worse than the ‘disease’. However, I personally have greater confidence in the fast-improving power of new fields of science, including the fields that study our mind and brain. These improvements are placing ever greater understanding in our hands – and hence, ever greater power to fix bugs without introducing nasty side-effects.

For these reasons, I do look forward (as I said in my previous posting) to these improvements

helping individuals and societies rise above cognitive biases and engrained mistakes in reasoning… and accelerating a reformation of the political and economic environment, so that the outcomes that are rationally best are pursued, instead of those which are expedient and profitable for the people who currently possess the most power and influence.

Finally, let me offer some thoughts on the observation that “the rules of morality are not conclusions of our reason”. That observation is vividly supported by the disturbing “moral dumbfounding” examples discussed by Jonathan Haidt in his excellent book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” (which I briefly reviewed here). But does that observation mean that we should stop trying to reason with people about moral choices?

Here, I’ll adapt comments from my review of “The Moral Landscape: How Science Can Determine Human Values”, by Sam Harris.

That book considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion. The correct answers are revealed via prophets and/or holy books.  The answers are already clear, to those with the eye of faith. It is a divine being that tells us, directly or indirectly, the difference between good and evil. There’s no need for experimental investigations here.

A second response is that the main field to study these questions is that of philosophy. It is by abstract reason that we can determine the difference between good and evil.

But Sam Harris, instead, primarily advocates the use of the scientific method. Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being. Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil. That’s how we can evaluate, for example, the Taliban’s views on girls’ education.

As Harris makes clear, this is far from being an abstract, other-worldly discussion. Cultures are clashing all the time, with lots of dramatic consequences for human well-being. Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)? And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections?

In the light of these questions, here are some arguments from Harris’s book that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics, imperfect though it is, survives in an as-yet-incomplete state.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.” That’s the conversation we need to progress.

PS I’ve written more about cognitive biases and cognitive dissonance – and how we can transcend these mistakes – in my blogpost “Our own entrenched enemies of reason”.

22 December 2012

Symbian retrospective: hits and misses

Filed under: More Than Smartphones, Nokia, Psion, retrospection, Symbian, Symbian Story — David Wood @ 12:19 pm

As another calendar year draws to a close, it’s timely to reflect on recent “hits” and “misses” – what went well, and what went less well.

In my case, I’m in the midst of a much longer reflection process, surveying not just the past calendar year, but the entire history (and pre-history) of Symbian – the company that played a significant role in kick-starting the smartphone phenomenon, well before anyone had ever heard of “iPhone” or “Android”. I’m channelling my thoughts into a new book that I’m currently writing, “More than smartphones”. The working subtitle is “Learning from Symbian…”

I’ve got no shortage of source material to draw on – including notes in my electronic diary that go all the way back to January 1992. As I note in my current draft of the introductory chapter,

My analysis draws on an extensive set of notes I’ve taken throughout two decades of leadership positions in and around Symbian – including many notes written in the various Psion PDA organisers that have been my constant electronic companions over these years. These Psion devices have been close to my heart, in more than one sense.

Indeed, the story of Symbian is deeply linked with that of Psion, its original parent. Psion and Symbian were both headquartered in London and shared many of the same personnel…

The PDAs that Psion brought to market in the 1980s and 1990s were the mobile game-changers of their day, generating (albeit on a smaller scale) the same kind of industry buzz as would later manifest around new smartphone releases. Psion PDAs were also the precursors for much of the functionality that subsequently re-emerged in smartphones, satellite navigation products, and other smart mobile devices.

My own Psion electronic diary possibly ranks among the longest continuously maintained personal electronic agendas in the world. The oldest entry in it is at 2.30pm on Friday 31st January, 1992. That entry reads “Swedes+Danes Frampton St”. Therein lies a tale.

At that time, Psion’s commercial departments were located in a building in Frampton Street, in central London, roughly midway between the Edgware Road and Maida Vale tube stations. Psion’s technical teams were located in premises in Harcourt Street, about 15 minutes away on foot. In 1992, the Psion Series 3a PDA was in an early stage of development, and I was trialling its new Agenda application – an application whose UI and rich set of views were being built by a team under my direction. In parallel, discussions were proceeding with representatives from several overseas distributors and partners, about the process to create versions of Psion PDAs for different languages: German, French, Italian, Spanish… and Swedish and Danish…

As the person who assembled and integrated all the files for different software versions, I met the leads of the teams doing the various translations. That day, 31st January 1992, more than 20 years ago, was among my first meetings with work professionals from the Nordic countries.

I recall that we discussed features such as keyboards that would cater for the additional characters of the Danish and Swedish alphabets, like ‘å’ and ‘ø’. I had no inkling in 1992 that professionals from Denmark, Sweden, and Finland (including employees of mobile phone juggernauts Ericsson and Nokia) would come to have such a far-reaching influence on the evolution of the software which was at that time being designed for the Series 3a. Nor could I foresee the subsequent 20 year evolution of my electronic agenda file:

  • Through numerous pieces of Series 3a hardware
  • Via the Series 3c successor to the Series 3a, with its incrementally improved hardware and software systems
  • Via a one-time migration process to a new data format, for the 32-bit Series 5, which could cope with much larger applications, and with much larger data files (the Series 3 family used a 16-bit architecture)
  • Into the Series 5mx successor of the Series 5
  • Through numerous pieces of Series 5mx hardware – all of which give (in their “About” screen) 1999 as the year of their creation; when one piece of hardware ceases to work, because, say, of problems with the screen display or the hinge mechanism, I transfer the data onto another in my possession…

Why 1999 is the end of this particular run of changes is a fascinating tale in its own right. It’s just one of many fascinating tales that surround the changing fortunes of the players in the Symbian story…

Step forwards from chapter one to the penultimate chapter, “Symbian retrospective”. This is where I’d welcome some extra input from readers of this blog, to complement and refine my own thinking.

This is the first of two retrospective chapters that draw conclusions from the episodes explored in preceding pages. In this chapter, I look at the following questions:

  • Out of all the choices over the years made by the players at the heart of the Symbian world, which ones were the most significant?
  • Of these choices, which were the greatest hits, and which the greatest misses?
  • With the advantage of hindsight, what are the different options that could credibly have been pursued which would have had the greatest impact on Symbian’s success or failure?

So far, my preliminary outline for that chapter lists a total of twenty hits and misses. Some examples of the hits:

  • Create Symbian on a commercial basis (not a “customers’ cooperative”)
  • Support from multiple key long-term investors (especially Nokia)
  • Enable significant differentiation (including network operator customisation)
  • Focus on performance and stability

And some examples of the misses:

  • Failure to appreciate the importance of the mobile web browser
  • Tolerating Symbian platform fragmentation
  • Failure to provide a CDMA solution
  • Failure to merge Nokia S60 and Symbian

My question for readers of this blogpost is: What would be in your list (say, 1-3 items) of the top hits and misses of decisions made by Symbian?

Footnote: Please expect some delays in your comments appearing. WordPress may hold them in a queue awaiting my review and approval. But I’m in a part of the world with great natural beauty and solitude, where the tour guides request that we all leave our wireless communication devices behind on the ship when we land for the daily excursions. Normally I would have balked at that very idea, but there are times and places when multi-tasking has to stop!

9 April 2012

Six weeks without Symbian

Filed under: Accenture, Android, Apple, applications, Psion, Samsung, smartphones, Symbian, UIQ — David Wood @ 10:58 am

It’s only six weeks, but in some ways, it feels like six months. That’s how much time has passed since I’ve used a Symbian phone.

These six weeks separate me from nearly thirteen years of reliance on a long series of different Symbian phones. It was mid-1999 when prototype Ericsson R380 smartphones became stable enough for me to start using as my regular mobile phone. Since then, I’ve been carrying Symbian-powered smartphones with me at all times. That’s thirteen years of close interaction with various Symbian-powered devices from Nokia, Ericsson (subsequently Sony Ericsson), and Samsung – interspersed with shorter periods of using Symbian-powered devices from Panasonic, Siemens, Fujitsu, Sendo, Motorola, and LG.

On occasion over these years, I experimented with devices running other operating systems, but my current Symbian device was never far away, and remained my primary personal communication device. These non-Symbian devices always left me feeling underwhelmed – too much functionality was missing, or was served up in what seemed sub-optimal ways, compared to what I had learned to expect.

But ahead of this year’s Mobile World Congress in Barcelona, held 27th Feb to 1st Mar, I found three reasons to gain a greater degree of first-hand experience with Android:

  1. I would be meeting representatives of various companies who were conducting significant development projects using Android, and I wished to speak from “practical knowledge” rather than simply from “book knowledge”
  2. Some of my colleagues from Accenture had developed apps for Android devices that I wanted to be able to demonstrate with confidence, based on my own recurring experience of using them
  3. One particular Android device – the Samsung Galaxy Note – seemed to me to have the potential to define a disruptive new category of mobile usage, midway between normal smartphones and tablets, with its radically large (5.3″) screen, contained in a device still light enough and small enough to be easily portable in my shirt-top pocket.

I was initially wary about text entry on the Galaxy Note. My previous encounters with Android devices had always left me frustrated when trying to enter data, without the benefits of a QWERTY keyboard (as on my long-favourite Nokia E6 range of devices), or fluid hand-writing recognition (as on the Sony Ericsson P800/P900/P910).

But in the course of a single day, three separate people independently recommended that I look at the SwiftKey text entry add-on for Android. SwiftKey takes advantage of both context and personal history to predict what the user is likely to be typing into a given window on the device. See this BBC News interview and video for a good flavour of what SwiftKey provides. I installed it and have been using it non-stop ever since.

With each passing day, I continue to enjoy using the Galaxy Note, and to benefit from the wide ecosystem of companies who create applications for Android.

Here’s some of what I really like about the device:

  • The huge screen adds to the pleasure of browsing maps (including “street view”), web pages, and other graphic, video, or textual content
  • Time and again, there are Android apps available that tailor the mobile user experience more closely than web-browsing alone can achieve – see some examples on the adjacent screenshot
  • These apps are easy to find, easy to install, and (in general) easy to use
  • Integration with Google services (Mail, Maps, etc) is impressive
  • I’ve grown to appreciate the notification system, the ubiquitous “back” button, and the easy configurability of the device.

On the other hand, I’m still finding lots of niggles, in comparison with devices I’ve used previously:

  • It’s hard to be sure, but it seems likely to me that I get a working network connection on the device less often than on previous (e.g. Nokia) devices. This means for example that, when people try to ring me, it goes through to my voice mail more often than before, even though my phone appears (to my eyes) to be working. I’m finding that I reboot this device more often than previous devices, to re-establish a working network connection
  • I frequently press the “back” button by accident, losing my current context, for example when turning the phone from portrait to landscape; in those moments, I often silently bemoan the lack of a “forward” button
  • The device is not quite capable of one-handed use – that’s probably an inevitable consequence of having such a large screen
  • Although integration with Google services is excellent, integration with Outlook leaves more to be desired – particularly interaction with email notifications of calendar invites. For example, I haven’t found a way of accepting a “this meeting has been cancelled” notification (in a way that removes the entry from my calendar), nor of sending a short note explaining my reason for declining a given meeting invite, along with the decline notification, etc
  • I haven’t gone a single day without needing to recharge the device part-way through. This no doubt reflects my heavy use of the device. It may also reflect my continuing use of the standard Android web browser, whereas on Symbian devices I always quickly switched to using the Opera browser, with its much reduced data transfer protocols (and swifter screen refreshes)
  • Downloaded apps don’t always work as expected – perhaps reflecting the diversity of Android devices, something that developers often remark about, as a cause of extra difficulty in their work.

Perhaps what’s most interesting to me is that I keep on enjoying using the device despite all these niggles. I reason to myself that no device is perfect, and that several of the issues I’ve experienced are problems of success rather than problems of failure. And I continue to take pleasure out of interacting with the device.

This form factor will surely become more and more significant. Up till now, Android has made little market headway with larger tablets, as reported recently by PC World:

Corporations planning tablet purchases next quarter overwhelmingly voted for Apple’s iPad, a research firm said Tuesday [13th March]

Of the 1,000 business IT buyers surveyed last month by ChangeWave Research who said they would purchase tablets for their firms in the coming quarter, 84% named the iPad as an intended selection.

That number was more than ten times the nearest competitor and was a record for Apple.

However, Samsung’s success with the “phablet” form factor (5 million units sold in less than two months) has the potential to redraw the market landscape again. Just as the iPad has impacted people’s use of laptops (something I see every day in my own household), the Galaxy Note and other phablets have the potential to impact people’s use of iPads – and perhaps lots more besides.

Footnote 1: The Galaxy Note is designed for use with an “S Pen” stylus, as well as by finger. I’ve still to explore the benefits of this stylus.

Footnote 2: Although I no longer carry a Symbian smartphone with me, I’m still utterly reliant on my Psion Series 5mx PDA, which runs the EPOC Release 5 precursor to Symbian OS. I use it all the time as my primary Agenda, To-do list, and repository of numerous personal word documents and spreadsheets. It also wakens me up every morning.

Footnote 3: If I put on some rosy-eyed glasses, I can see the Samsung Galaxy Note as the fulfilment of the design vision behind the original “UIQ” device family reference design (DFRD) from the early days at Symbian. UIQ was initially targeted (1997-1999, when it was still called “Quartz”) at devices having broadly the same size as today’s Galaxy Note. The idea received lots of ridicule – “who’s going to buy a device as big as that?” – so UIQ morphed into “slim UIQ” that instead targeted devices like the Sony Ericsson P800 mentioned above. Like many a great design vision, UIQ can perhaps be described as “years ahead of its time”.

3 January 2011

Some memorable alarm bugs I have known

Filed under: Apple, Psion, usability — David Wood @ 12:24 pm

Here’s how the BBC website broke the news:

iPhone alarms hit by New Year glitch

A glitch on Apple’s iPhone has stopped its built-in alarm clock going off, leaving many people oversleeping on the first two days of the New Year.

Angry bloggers and tweeters complained that they had been late for work, and were risking missing planes and trains.

My first reaction was incredulity.  How could a first-class software engineering company like Apple get such basic functionality wrong?

I remember being carefully instructed, during my early days as a young software engineer with PDA pioneer Psion, that alarms were paramount.  Whatever else your mobile device might be doing at the time – however busy or full or distracted it might be – alarms had to go off when they became due.  Users were depending on them!

For example, if, when the time came, the battery was too low to power the audio clip that a user had selected for an alarm, Psion’s EPOC operating system would default to a rasping sound that could be played with less voltage, but which was still loud enough that the user would notice.

Further, the startup sequence of a Psion device would take care to pre-allocate sufficient resources for an alarm notifier – both in the alarm server, and in the window server that would display the alarm.  There had to be no risk of running out of memory and, as a result, being unable to operate the alarm.
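
To make those two safeguards concrete, here’s a minimal sketch in present-day C++. It is emphatically not Psion’s actual code – the class, the buffer sizes and the voltage threshold are all invented for illustration – but it captures the principle: grab the resources early, and degrade gracefully rather than stay silent.

    #include <cstdio>
    #include <vector>

    class AlarmNotifier {
    public:
        // Called once at startup, while memory is still plentiful, so that an
        // out-of-memory condition later can never prevent an alarm from appearing.
        void PreallocateResources() {
            dialogBuffer_.resize(512);   // space reserved for the alarm dialog text
            toneBuffer_.resize(256);     // space reserved for the fallback tone
        }

        void SoundAlarm(double batteryVolts, const char* clipName) {
            const double kMinVoltsForClip = 3.1;   // illustrative threshold only
            if (batteryVolts < kMinVoltsForClip) {
                std::puts("(rasping fallback tone)");   // still audible on a weak battery
            } else {
                std::printf("(playing audio clip %s)\n", clipName);
            }
        }

    private:
        std::vector<char> dialogBuffer_;
        std::vector<char> toneBuffer_;
    };

    int main() {
        AlarmNotifier notifier;
        notifier.PreallocateResources();
        notifier.SoundAlarm(2.9, "birdsong.wav");   // battery too weak: fallback tone
        notifier.SoundAlarm(3.7, "birdsong.wav");   // healthy battery: user's chosen clip
    }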

However, as I thought more, I remembered various alarm bugs in Psion devices.

Note: I’ve probably remembered some of the following details wrong – but I think the main gist of the stories is correct.

Insisting on sounding ALL the alarms

The first was from before I started at Psion, but was a legend that was often discussed. It applied to the alarm functionality in the Psion Organiser II.

On that device, all alarms were held in a queue, and for each alarm, there was a record of whether it had been sounded.  When the device powered up, one of the first things it would do was to check that queue for the first alarm that had not been sounded.  If it was overdue, it would be sounded immediately.  Once that alarm was acknowledged by the user, the same process would be repeated – find the next alarm that had not been sounded…

But the snag in this system became clear when the user manually advanced the time on the device (for example, on changing timezone, or, more dramatically, restoring the correct time after a system restart).  If a user had set a number of alarms, the device would insist on playing them all, one by one.  The user had no escape!
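
Here’s a hedged reconstruction of that power-up loop in present-day C++ (not the original Organiser II code, which is long gone from my reach). The point it illustrates is that “overdue and not yet sounded” is the only test, so a forward jump of the clock makes every stored alarm qualify at once:

    #include <cstdio>
    #include <ctime>
    #include <vector>

    struct Alarm {
        std::time_t due;
        const char* text;
        bool        sounded;
    };

    // On power-up: sound every alarm that is overdue and not yet acknowledged.
    void OnPowerUp(std::vector<Alarm>& queue, std::time_t now) {
        for (Alarm& alarm : queue) {
            if (!alarm.sounded && alarm.due <= now) {
                std::printf("ALARM: %s\n", alarm.text);   // user must acknowledge each one
                alarm.sounded = true;
            }
        }
    }

    int main() {
        std::vector<Alarm> queue = {
            {1000, "Meeting", false},
            {2000, "Call dentist", false},
            {3000, "Catch train", false},
        };
        // The user manually advances the clock, for example after restoring the
        // correct time following a system restart:
        std::time_t newTime = 5000;
        OnPowerUp(queue, newTime);   // every alarm now counts as overdue and is replayed
    }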

Buffer overflow (part one)

The next story on my list came to a head on a date something like the 13th of September 1989.  The date is significant – it was the first Wednesday (the day with the longest name) with a two-digit day-in-month in September (the month with the longest name).  You can probably guess how this story ends.

At that time, Psion engineers were creating the MC400 laptop – a device that was in many ways ahead of its time.  (You can see some screenshots here – though none of these shots feature the MC Alarms application.  My contribution to that software, by the way, included the Text Processor application, as well as significant parts of the UI framework.)

On the day in question, several of the prototype MC400 devices stopped working.  They’d all been working fine over the previous month or so.  Eventually we spotted the pattern – they all had alarms due, but the text for the date overflowed the fixed-size buffer that had been set aside for composing that text before it was displayed on the screen.  Whoops.
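
For anyone who likes to see the shape of such a bug, here’s a small illustration in present-day C++. The buffer size and the formatting call are inventions of mine rather than the original MC400 code, but the arithmetic is the same: the allocation comfortably fits every date seen during weeks of testing, yet falls short of the longest possible combination.

    #include <cstdio>

    int main() {
        char buffer[26];   // illustrative size: ample for "Monday 1st July 1989"...
        int needed = std::snprintf(buffer, sizeof buffer, "%s %d%s %s %d",
                                   "Wednesday", 13, "th", "September", 1989);
        if (needed >= static_cast<int>(sizeof buffer)) {
            std::printf("Truncated: needed %d bytes, only had %zu\n",
                        needed + 1, sizeof buffer);
            // The original code used an unchecked copy, so instead of truncating
            // it wrote past the end of the allocation - and the device froze.
        } else {
            std::printf("%s\n", buffer);
        }
    }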

“The kind of bug that other operating systems can only dream about”

Some time around 1991 I made a rash statement, which entered Psion’s in-house list of ill-guarded comments: “This is the kind of bug that other operating systems can only dream about”.  It was another alarms bug – this time in the Psion Series 3 software system.

It arose when the user had an Agenda file on a memory card (memory cards were known, at the time, as SSDs – Solid State Disks), but had temporarily removed the card.  When the time came to sound an alarm from the Agenda, the alarm server requested the Agenda application to tell it when the next Agenda alarm would be due.  This required the Agenda application to read data from the memory card.  Because the file was already marked as “open”, the File Server in the operating system tried to display a low-level message on the screen – similar to the “Retry, Abort, or Cancel” message that users of MS-DOS might remember.  This required action from the Window Server, but the Window Server was temporarily locked, waiting for a reply from the Alarm Server.  The Alarm Server was in turn locked, waiting for the File Server – which, alas, was waiting (as previously mentioned) for the Window Server.  Deadlock.

Well, that’s as much as I can recall at the moment, but I do remember it being said at the time that the deadlock chain actually involved five interconnecting servers, so I may have forgotten some of the subtleties.  Either way, the result was that the entire device would freeze.  The only sign of life was that the operating system would still emit keyclicks when the user pressed keys – but the Window Server was unable to process these keys.

In practice, this bug would tend to strike unsuspecting users who had opened an SSD door at the time the alarm happened to be due – even the SSD door on the other side of the device (an SSD could be inserted on each side).  The hardware was unable to read from one SSD, even if it was still in place, if the other door happened to be open.  As you can imagine, this defect took some considerable time to track down.
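
The circular wait is easier to see when the servers are laid out as a simple “who is blocked on whom” table. This is only a sketch of the three-server core of the chain as I remember it (the real chain, as I said, probably involved five servers):

    #include <cstdio>
    #include <map>
    #include <string>

    int main() {
        // Window Server waits for the Alarm Server's reply; the Alarm Server waits
        // for the Agenda engine's file read, i.e. for the File Server; the File
        // Server wants the Window Server to display "Retry, Abort, or Cancel".
        std::map<std::string, std::string> waitingFor = {
            {"WindowServer", "AlarmServer"},
            {"AlarmServer",  "FileServer"},
            {"FileServer",   "WindowServer"},
        };

        // Walk the chain from any starting point; arriving back at the start means
        // a circular wait, so no server can ever make progress.
        std::string start = "WindowServer";
        std::string current = start;
        do {
            std::printf("%s is blocked on %s\n",
                        current.c_str(), waitingFor[current].c_str());
            current = waitingFor[current];
        } while (current != start);
        std::puts("Deadlock: the only sign of life left is the keyclick.");
    }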

“Death city Arizona”

At roughly the same time, an even worse alarms-related bug was uncovered.  In this case, the only way out was a cold reset, which lost all the data in internal memory.  The recipe to trigger the bug went roughly as follows:

  • Supplement the built-in data of cities and countries, by defining a new city, which would be your home town
  • Observe that the operating system created a file “World.wld” somewhere on internal memory, containing the details of all the cities whose details you had added or edited
  • Find a way to delete that file
  • Restart the device.

In those days of limited memory, every extra server was viewed as an overhead to be avoided if possible.  For this reason, the Alarm Server and the World Server coexisted inside a single process, sharing as many resources as possible.  The Alarm Server managed the queue of alarms from all the different applications, and the World Server looked after access to the set of information about cities and countries.  For fast access during system startup, the World Server stored some information about the current home city.  But if the full information about the home city couldn’t be retrieved (because, for example, the user had deleted the World.wld file), the server went into a tailspin, and crashed.  The lower-level operating system, noticing that a critical resource had terminated, helpfully restarted it – whereupon it crashed again, identically.  Result: the lower-priority applications and servers never had a chance to start up.  The user was left staring at a blank screen.
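
In outline (and only in outline – the names and the five-attempt cut-off below are mine, and the real watchdog looped indefinitely), the failure mode was a restart loop like this:

    #include <cstdio>
    #include <fstream>

    // The combined Alarm/World server refuses to start without its city data.
    bool StartWorldServer() {
        std::ifstream world("World.wld");
        if (!world) {
            std::puts("World server: home city data missing - giving up");
            return false;   // on the real device this was a crash, not a tidy return
        }
        return true;
    }

    int main() {
        // The low-level OS treats the server as critical and restarts it after every
        // failure, so the lower-priority applications never get their turn to start.
        for (int attempt = 1; attempt <= 5; ++attempt) {
            std::printf("Restart attempt %d: ", attempt);
            if (StartWorldServer()) {
                std::puts("running");
                break;
            }
        }
    }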

Buffer overflow (part two)

The software that composed the text to appear on the screen, when an alarm sounded, used the EPOC equivalent of “print with formatting”.  For example, a “%d” in the text would be replaced by a numerical value, depending on other parameters passed to the function.  Here, the ‘%’ character has a special meaning.

But what if the text supplied by the user itself contains a ‘%’ character?  For example, the alarm text might be “Revision should be 50% complete by today”.  Well, in at least some circumstances, the software went looking for another parameter passed to it, where none existed.  As you can imagine, all sorts of unintended consequences could result – including memory overflows.
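
This is the same hazard that C programmers know from printf: user-supplied text must be passed as data, never as the format string itself. Here’s a minimal present-day illustration (EPOC had its own formatting routines, so treat this C++ version purely as an analogy):

    #include <cstdio>

    int main() {
        const char* alarmText = "Revision should be 50% complete by today";

        // WRONG (shown commented out): the '%' in the text is treated as a
        // conversion specifier, so the formatter goes looking for a parameter
        // that was never passed - with undefined, potentially memory-corrupting,
        // results.
        // std::printf(alarmText);

        // RIGHT: the user's text is supplied as an argument to a fixed format.
        std::printf("%s\n", alarmText);
    }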

Alarms not sounding!

Thankfully, the bugs above were all caught by in-house testing, before the device in question was released to customers.  We had a strong culture of fierce internal testing.  The final bug on my list, however, did make it out into the world.  It impacted users who had the temerity to do the following:

  • Enter a new alarm in their Agenda
  • Switch the device off before it had enough time to finish working out which alarm should be the next to sound.

This problem hit users who accumulated a lot of data in their Agenda files.  In such cases, the operating system could take a non-negligible amount of time to reliably figure out what the next alarm would be.  So the user had a chance to power down the device before it had completed this calculation.  Given the EPOC focus on keeping the device in a low-power state as much as possible, the “Off” instruction was heeded quickly – too quickly in this case.  If the device had nothing else to do before that alarm was due, and if the user didn’t switch on the device for some other reason in the meantime, it wouldn’t get the chance to work out that it should be sounding that alarm.
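
Sketched in code (with invented names, and with the timing compressed into two function calls), the race looks something like this:

    #include <cstdio>

    struct AgendaState {
        bool nextAlarmKnown = false;
        long pendingWakeup  = 0;
    };

    // Background work: with a large Agenda file this takes a noticeable time.
    void RecalculateNextAlarm(AgendaState& s, long newAlarmTime) {
        s.pendingWakeup  = newAlarmTime;
        s.nextAlarmKnown = true;
    }

    // The "Off" request is honoured immediately, to save power.
    void PowerOff(const AgendaState& s) {
        if (s.nextAlarmKnown)
            std::printf("Sleeping; hardware wakeup set for t=%ld\n", s.pendingWakeup);
        else
            std::puts("Sleeping with NO wakeup set - the new alarm will be missed");
    }

    int main() {
        AgendaState state;
        long newAlarm = 7200;    // the entry the user has just added

        PowerOff(state);         // user switches off before the calculation has run

        RecalculateNextAlarm(state, newAlarm);
        PowerOff(state);         // had it completed first, the wakeup would be set
    }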

Final thoughts re iPhone alarms

Psion put a great deal of thought into alarms:

  • How to implement them efficiently
  • How to ensure that users never missed alarms
  • How to provide the user with a great alarm experience.

For example, when an alarm becomes due on a Psion device, the sound starts quietly, and gradually gets louder.  If the user fails to acknowledge the alarm, the entire sequence repeats, after about one minute, then after about three minutes, and so on.  When the user does acknowledge the alarm, they have the option to stop it, silence it, or snooze it.  Pressing the snooze button adds another five minutes to the time before the alarm will sound again.  Pressing it three times, therefore, adds 15 minutes, and so on.  (And as a touch of grace: if you press the snooze button enough times, it emits a short click, and resets the time delay to five minutes – useful for sleepyheads who are too tired to take a proper look at the device, but who have enough of a desire to monitor the length of the snooze!)
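
For the curious, the snooze arithmetic can be written down in a few lines. The five-minute step matches the description above; the cap at which the delay wraps back to five minutes is my guess rather than a figure I can still vouch for:

    #include <cstdio>

    int main() {
        const int kSnoozeStep = 5;    // minutes added per press of the snooze key
        const int kSnoozeCap  = 60;   // assumed wrap-around limit (illustrative only)

        int delay = 0;
        for (int press = 1; press <= 14; ++press) {
            delay += kSnoozeStep;
            if (delay > kSnoozeCap) {
                std::puts("(click)");   // the touch of grace: back to five minutes
                delay = kSnoozeStep;
            }
            std::printf("press %2d -> alarm postponed by %2d minutes\n", press, delay);
        }
    }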

So it’s surprising to me that Apple, with its famous focus on user experience, seems to have given comparatively little thought to the alarms on that device.  When my wife started using an iPhone in the middle of last year, she found much in it to enchant her – but the alarms were far from delightful.  It seems that the default alarms sound only once, with a rather pathetic little noise which is easy to miss.  And when we looked, we couldn’t find options to change this behaviour.  I guess the iPhone team has other things on its mind!

27 August 2010

Reconsidering recruitment

Filed under: Accenture, Psion, recruitment, Symbian — David Wood @ 5:12 am

The team at ITjoblog (‘the blog for IT professionals’) recently asked me to write a guest column for them.  It has just appeared: “Reconsidering recruitment”.

With a few slight edits, here’s what I had to say…

Earlier in my career, I was involved in lots of recruitment.  The software team inside Psion followed a steep headcount trajectory through the process of transforming into Symbian, and continued to grow sharply in subsequent years as many new technology areas were added to the scope of Symbian OS.  As one of the senior software managers in the company throughout this period, I found myself time and again in interviewing and recruitment situations.  I was happy to give significant amounts of my time to these tasks, since I knew what a big impact good (or bad) recruitment can make to organisational dynamics.

In recent weeks, I’ve once again found myself in a situation where considerable headcount growth is expected.  I’m working on a project at Accenture, assisting their Embedded Mobility Services group.  Mobile is increasingly a hot topic, and there’s strong demand for people providing expert consultancy in a variety of mobile development project settings. This experience has led me to review my beliefs about the best way to carry out recruitment in such situations.  Permit me to think aloud…

To start with, I remain a huge fan of graduate recruitment programs.  The best graduates bring fire in their bellies: a “we can transform the world” attitude that doesn’t know what’s meant to be impossible – and often carries it out!  Of course, graduates typically take some time before they can be deployed in the frontline of commercial software development.  But if you plan ahead, and have effective “bootcamp” courses, you’ll have new life in your teams soon enough.  There will be up-and-coming stars ready to step into the shoes left by any unexpected staff departures or transfers.  If you can hire a group of graduates at the same time, so much the better.  They can club together and help each other, sharing and magnifying what they each individually learn from their assigned managers and mentors.  That’s the beauty of the network effect.

That’s just one example of the importance of networks in hiring.  I place a big value on having prior knowledge of someone who is joining your team.  Rather than having to trust your judgement during a brief interviewing process, and whatever you can distill from references, you can rely on actual experience of what someone is like to work with.  This effect becomes more powerful when several of your current workforce can attest to the qualities of a would-be recruit, based on all having worked together at a previous company in the past.  I saw Symbian benefit from this effect via networks of former Nortel employees who all knew each other and who could vouch for each other’s capabilities during the recruitment process.  Symbian also had internal networks of former high-calibre people from SCO, and from Ericsson, among other companies.  The benefit here isn’t just that you know that someone is a great professional.  It’s that you already know what their particular special strengths are.  (“I recommend that you give this task to Mike.  At our last company, he did a fantastic job of a similar task.”)

Next, I recommend hiring for flexibility, rather than simply trying to fit a current task description.  I like to see evidence of people coping with ambiguity, and delivering good results in more than one kind of setting.  That’s because projects almost always change; likewise for organisational structures.  So while interviewing, I’m not trying to assess if the person I’m interviewing is the world expert in, say, C++ templates.  Instead, I’m looking for evidence that they could turn their hand to mastering whole new skill areas – including areas that we haven’t yet realised will be important to future projects.

Similarly, rather than just looking for rational intelligence skills, I want to see evidence that someone can fit well into teams.  “Soft skills”, such as inter-personal communication and grounded optimism, aren’t just an optional extra, even for roles with intense analytic content.  The best learning and the best performance come from … networks (to use that word again) – but you can’t build high-functioning networks if your employees lack soft skills.

Finally, high-performing teams that address challenging problems benefit from internal variation.  So don’t just look for near-clones of people who already work for you.  When scanning CVs, keep an eye open for markers of uniqueness and individuality.  At interview, these markers provide good topics to explore – where you can find out something of the underlying character of the candidate.

Inevitably, you’ll sometimes make mistakes with recruitment, despite taking lots of care in the process.  To my mind, that’s OK.  In fact, it’s better to take a few risks, since you can find some excellent new employees in the process.  But you need to have a probation period in place, during which you pay close attention to how your hires are working out.  If a risky candidate turns out to be disappointing, even after some coaching and support, then you should act fast – for the sake of everyone concerned.

In summary, I see recruitment and induction as a task that deserves high focus from some of the most skilled and perceptive members of your existing workforce.  Skimp on these tasks and your organisation will suffer – sooner or later.  Invest well in these tasks, and you should see the calibre of your workforce steadily grow.

By way of further discussion, let me admit that rules of thumb tend to have limits and exceptions.  You might find it useful to identify counter-examples to the ones I’ve outlined above!

19 May 2010

Chapter finished: A journey with technology

Five more days have passed, and I’ve completed another chapter draft (see snapshot below) of my proposed new book.

This takes me up to 30% of what I hope to write:

  • I’ve drafted three out of ten planned chapters.
  • The wordcount has reached 15,000, out of a planned total of 50,000.

After this, I plan to dig more deeply into specific technology areas.  I’ll be moving further out of my comfort zone.  First will be “Health”.  Fortuitously, I spent today at an openMIC meeting in Bath, entitled “i-Med: Serious apps for mobile healthcare”.  That provided me with some useful revision!

========

3. A journey with technology

<Snapshot of material whose master copy is kept here>

<< Previous chapter <<

Here’s the key question I want to start answering in this chapter: how quickly can technology progress in the next few decades?

This is far from being an academic question. At heart, I want to know whether it’s feasible for that progress to be quick enough to provide technological solutions to the calamitous issues and huge opportunities described in the first chapter of this book. The progress must be quick enough, not only for core technological research, but also for productisation of that technology into the hands of billions of consumers worldwide.

For most of this book, I’ll be writing about technologies from an external perspective. I have limited direct experience with, for example, the healthcare industry and the energy industry. What I have to say about these topics will be as, I hope, an intelligent outside observer. But in this chapter, I’m able to adopt an internal perspective, since the primary subject matter is the industry where I worked for more than twenty years: the smartphone industry.

In June 1988, I started work in London at Psion PLC, the UK-based manufacturer of electronic organisers. I joined a small team working on the software for a new generation of mobile computers. In the years that followed, I spent countless long days, long nights and (often) long weekends architecting, planning, writing, integrating, debugging and testing Psion’s software platforms. In due course, Psion’s software would power more than a million PDAs in the “Series 3” family of devices. However, the term “PDA” was unknown in 1988; likewise for phrases like “smartphone”, “palmtop computer”, and “mobile communicator”. The acronym “PDA”, meaning “personal digital assistant”, was coined by Apple in 1992 in connection with their ambitious but flawed “Newton” project – long before anyone conceived of the name “iPhone”.

I first became familiar with the term “smartphone” in 1996, during early discussions with companies interested in using Psion’s “EPOC32” software system in non-PDA devices. After a faltering start, these discussions gathered pace. In June 1998, ten years after I had joined Psion, a group of Psion senior managers took part in the announcement of the formation of a new entity, Symbian Ltd, which had financial backing from the three main mobile phone manufacturers of the era – Ericsson, Motorola, and Nokia. Symbian would focus on the software needs of smartphones. The initial software, along with 150 employees led by a five-strong executive team, was contributed by Psion. In the years that followed, I held Symbian executive responsibility, at different times, for Technical Consulting, Partnering, and Research. In due course, sales of devices based on Symbian OS exceeded 250 million units.

In June 2008 – ten more years later, to the day – another sweeping announcement was made. The source code of Symbian OS, along with that of the S60 UI framework and applications from Nokia, would become open source, and would be overseen by a new independent entity, the Symbian Foundation.

My views on the possibilities for radical improvements in technology as a whole are inevitably coloured by my helter-skelter experiences with Psion and Symbian. During these 20+ years of intense projects following close on each other’s heels, I saw at first hand, not only many issues with developing and productising technology, but also many issues in forecasting the development and productisation of technology.

For example, the initial June 1998 business plans for Symbian are noteworthy both for what we got right, and for what we got wrong.

3.1 Successes and shortcomings in predicting the future of smartphones

In June 1998, along with my colleagues on the founding team at Symbian, I strove to foresee how the market for smartphones would unfold in the years ahead. This forecast was important, as it would:

  • Guide our own investment decisions
  • Influence the investment decisions of our partner companies
  • Set the context for decisions by potential employees about whether or not to join Symbian (and by existing employees about whether to remain with Symbian, once they had joined).

Many parts of our vision turned out correct:

  • Interest grew strongly both in computers with increased mobility and in mobile phones with increased computing capability.
  • Sales of Symbian-powered mobile devices would, by the end of the first decade of the next century, be measured in 100s of millions.
  • Our phrase, “Smartphones for all”, which initially struck many observers as ridiculous, became commonplace: interest in smartphones stopped being the preserve of a technologically sophisticated minority, and became a mainstream phenomenon.
  • Companies in numerous industries realised that they needed strong mobile offerings, to retain their relevance.
  • Rather than every company developing its own smartphone platform, there were big advantages for companies to collaborate in creating shared standard platforms.
  • The attraction of smartphones grew in line with the availability of add-on applications that delivered functionality tailored to the needs of individual users.

Over the next decade, a range of new features became increasingly widespread on mobile phones, despite early scepticism:

  • Colour screens
  • Cameras – and video recorders
  • Messaging: SMS, simple email, rich email…
  • Web browsing: Google, Wikipedia, News…
  • Social networking: Facebook, Twitter, blogs…
  • Games – including multiplayer games
  • Maps and location-based services
  • Buying and selling (tickets, vouchers, cash).

By 2010, extraordinarily powerful mobile devices are in widespread use in almost every corner of the planet. An average bystander transported from 1998 to 2010 might well be astonished at the apparently near-magical capabilities of these ubiquitous devices.

On the other hand, many parts of our 1998 vision proved wrong.

First, we failed to foresee many of the companies that would be the most prominent in the smartphone industry by the end of the next decade. In 1998:

  • Apple seemed to be on a declining trajectory.
  • Google consisted of just a few people working in a garage. (Like Symbian, Google was founded in 1998.)
  • Samsung and LG were known to the Symbian team, but we decided not to include them on our initial list of priority sales targets, in view of their low sales figures at the time.

Second, although our predictions of eventual sales figures for Symbian devices were broadly correct – namely 100s of millions – this was the result of two separate mistakes cancelling each other out:

  • We expected to have a higher share of the overall mobile phone market (over 50% – perhaps even approaching 100%).
  • We expected that overall phone market to remain at the level of 100s of millions per annum – we did not imagine it would become as large as a billion per year.

(A smaller-than-expected proportion of a larger-than-expected market worked out at around the same volume of sales.)

Third – and probably most significant for drawing wider lessons – we got the timescales significantly wrong. It took considerably longer than we expected for:

  • The first successful smartphones to become available
  • Next generation networks (supporting high-speed mobile data) to be widely deployed
  • Mobile applications to become widespread.

Associated with this, many pre-existing systems remained in place much longer than anticipated, despite our predictions that they would be unable to adapt to changing market demands:

  • RIM sold more and more BlackBerries, despite repeated concerns that their in-house software system would become antiquated.
  • The in-house software systems of major phone manufacturers, such as Nokia’s Series 40, likewise survived long past predicted “expiry” dates.

To examine what’s going on, it’s useful to look in more detail at three groups of factors:

  1. Factors accelerating growth in the smartphone market
  2. Factors restricting growth in the smartphone market
  3. Factors that can overcome the restrictions and enable faster growth.

Having reviewed these factors in the case of smartphone technology, I’ll then revisit the three groups of factors, with an eye to general technology.

3.2 Factors accelerating growth in the smartphone market

The first smartphone sales accelerator is decreasing price. As the devices become less expensive, more and more people can afford them. Other things being equal, a desirable piece of consumer electronics with a lower price will sell more.

The underlying cost of smartphones has been coming down for several reasons. Improvements in underlying silicon technology mean that manufacturers can pack more transistors into the same area of silicon for the same cost, providing more memory and more processing power. There are also various industry scale effects. Companies who work with a mobile platform over a period of time gain the benefit of “practice makes perfect”, learning how to manage the supply chain, select lower-priced components, and assemble and manufacture their devices at ever lower cost.

A second sales accelerator is increasing reliability. With some exceptions (that have tended to fall by the wayside), smartphones have become more and more reliable. They start faster, have longer battery life, and need fewer resets. As such, they appeal to ordinary people in terms of speed, performance, and robustness.

A third sales accelerator is increasing stylishness. In the early days of smartphones, people would often say, “These smartphones look quite interesting, but they are a bit too big and bulky for my liking: frankly, they look and feel like a brick.” Over time, smartphones became smaller, lighter, and more stylish. In both their hardware and their software, they became more attractive and more desirable.

A fourth sales accelerator is increasing word-of-mouth recommendations. The following sets of people have all learned, from their own experience, good reasons why consumers should buy smartphones:

  • Industry analysts – who write reports that end up influencing a much wider network of people
  • Marketing professionals – who create compelling advertisements that appear on film, print, and web
  • Retail assistants – who are able to highlight attractive functionality in devices, at point of sale
  • Friends and acquaintances – who can be seen using various mobile services and applications, and who frequently sing the praises of specific devices.

This extra word of mouth exists, of course, because of a fifth sales accelerator – the increasing number of useful and/or entertaining mobile services that are available. This includes built-in services as well as downloadable add-on services. More and more individuals learn that mobile services exist which address specific problems they experience. This includes convenient mobile access to banking services, navigation, social networking, TV broadcasts, niche areas of news, corporate databases, Internet knowledgebases, tailored educational material, health diagnostics, and much, much more.

A sixth sales accelerator is increasing ecosystem maturity. The ecosystem is the interconnected network of companies, organisations, and individuals who create and improve the various mobile services and enabling technology. It takes time for this ecosystem to form and to learn how to operate effectively. However, in due course, it forms a pool of resources that is much larger than exists just within the first few companies who developed and used the underlying mobile platform. These additional resources provide, not just a greater numerical quantity of mobile software, but a greater variety of different innovative ideas. Some ecosystem members focus on providing lower cost components, others on providing components with higher quality and improved reliability, and yet others on revolutionary new functionality. Others again provide training, documentation, tools, testing, and so on.

In summary, smartphones are at the heart of a powerful virtuous cycle. Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle. Applications and services which prove their value as add-ons for one generation of smartphones become bundled into the next generation. With this extra built-in functionality, the next generation is intrinsically more attractive, and typically is cheaper too. Developers see an even larger market and increase their efforts to supply software for this market.

3.3 Factors restricting growth in the smartphone market

Decreasing price. Increasing reliability. Increasing stylishness. Increasing word-of-mouth recommendations. Increasingly useful mobile services. Increasing ecosystem maturity. What could stand in the way of these powerful accelerators?

Plenty.

First, there are technical problems with unexpected difficulty. Some problems turn out to be much harder than initially imagined. For example, consider speech recognition, in which a computer can understand spoken input. When Psion planned the Series 5 family of PDAs in the mid 1990s (as successors to the Series 3 family), we had a strong desire to include speech recognition capabilities in the device. Three “dictaphone style” buttons were positioned in a small unit on the outside of the case, so that the device could be used even when the case (a clamshell) was shut. Over-optimistically, we saw speech recognition as a potentially great counter to the pen input mechanisms that were receiving lots of press attention at the time, on competing devices like the Apple Newton and the Palm Pilot. We spoke to a number of potential suppliers of voice recognition software, who assured us that suitably high-performing recognition was “just around the corner”. The next versions of their software, expected imminently, would impress us with their accuracy, they said. Alas, we eventually reached the conclusion that the performance was far too unreliable and would remain so for the foreseeable future – even if we went the extra mile on cost, and included the kind of expensive internal microphone that the suppliers recommended. We feared that “normal users” – the target audience for Psion PDAs – would be perplexed by the all-too-frequent inaccuracies in voice recognition. So we took the decision to remove that functionality. In retrospect, it was a good decision. Even ten years later, voice recognition functionality on smartphones generally fell short of user expectations.

Speech recognition is just one example of a deeply hard technical problem that took much longer than expected to yield real progress. Others include:

  • Avoiding smartphone batteries being drained too quickly by all the processing that takes place on the smartphone
  • Enabling rapid search of all the content on a device, regardless of the application used to create that content
  • Devising a set of application programming interfaces which strike the right balance between power and ease of use, and between openness and security.

Second, there are “chicken-and-egg” coordination problems – sometimes likened to “the prisoner’s dilemma”. New applications and services in a networked marketplace often depend on related changes being coordinated at several different points in the value chain. Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make less sense when viewed individually. For example, successful mobile phones required both networks and handsets. Successful smartphones required new data-enabled networks, new handsets, and new applications. And so on.

Above, I wrote about the potential for “a powerful virtuous cycle”:

Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle.

However, this only works once the various factors are all in place. A new ecosystem needs to be formed. This involves a considerable coordination problem: several different entities need to un-learn old customs, and adopt new ways of operating, appropriate to the new value chain. That can take a lot of time.

Worse – and this brings me to a third problem – many of the key players in a potential new ecosystem have conflicting business models. Perhaps the new ecosystem, once established, will operate with greater overall efficiency, delivering services to customers more reliably than before. However, wherever there are prospects of cost savings, there are companies who potentially lose out – companies who are benefiting from the present high prices. For example, network operators making healthy profits from standard voice services were (understandably) apprehensive about distractions or interference from low-profit data services running over their networks. They were also apprehensive about risks that applications running on their networks would:

  • Enable revenue bypass, with new services such as VoIP and email displacing, respectively, standard voice calls and text messaging
  • Saturate the network with spam
  • Cause unexpected usability problems on handsets, which the user would attribute to the network operator, entailing extra support costs for the operator.

The upshot of these perceived risks is that ecosystems might fail to form – or, having formed with a certain level of cooperation, might fail to attain deeper levels of cooperation. Vested interests get in the way of overall progress.

A fourth problem is platform fragmentation. The efforts of would-be innovators are spread across numerous different mobile platforms. Instead of a larger ecosystem all pulling in the same direction, the efforts are diffused, with the risk of confusing and misleading participants. Participants think they can re-apply skills and solutions from one mobile product in the context of another, but subtle and unexpected differences cause incompatibilities which can take a lot of time to identify and debug. Instead of collaboration effectively turning 1+1 into 3, confusion turns 1+1 into 0.5.

A fifth problem is poor usability design. Even though a product is powerful, ordinary end users can’t work out how to operate it, or get the best experience from it. They feel alienated by it, and struggle to find their favourite functionality in amongst bewildering masses of layered menu options. A small minority of potential users, known as “technology enthusiasts”, are happy to use the product, despite these usability issues; but they are rare exceptions. As such, the product fails to “cross the chasm” (to use the language of Geoffrey Moore) to the mainstream majority of users.

The sixth problem underlies many of the previous ones: it’s the problem of accelerating complexity. Each individual chunk of new software adds value, but when they coalesce in large quantities, chaos can ensue:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window.
  • Smartphone application development may grow in difficulty, as developers need to juggle different programming interfaces and optimisation methods.
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.

3.4 Factors that can overcome the restrictions and enable faster growth

Technical problems with unexpected difficulty. Chicken-and-egg coordination problems. Conflicting business models. Platform fragmentation. Poor usability design. Accelerating complexity. These are all factors that restrict smartphone progress. Without solving these problems, the latent potential of smartphone technology goes unfulfilled. What can be done about them?

At one level, the answer is: look at the companies who are achieving success with smartphones, despite these problems, and copy what they’re doing right. That’s a good starting point, although it risks being led astray by instances where companies have had a good portion of luck on their side, in addition to progress that they merited through their own deliberate actions. (You can’t jump from the observation that company C1 took action A and subsequently achieved market success, to the conclusion that company C2 should also take action A.) It also risks being led astray by instances where companies are temporarily experiencing significant media adulation, but only as a prelude to an unravelling of their market position. (You can’t jump from the observation that company C3 is currently a media darling, to the conclusion that a continuation of what it is currently doing will achieve ongoing product success.) With these caveats in mind, here is the advice that I offer.

The most important factor to overcome these growth restrictions is expertise – expertise in both design and implementation:

  • Expertise in envisioning and designing products that capture end-user attention and which are enjoyable to use again and again
  • Expertise in implementing an entire end-to-end product solution.

The necessary expertise (both design and implementation) spans eight broad areas:

  1. technology – such as blazing fast performance, network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security
  2. ecosystem design – to solve the “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised
  3. business models – identifying new ways in which groups of companies can profit from adopting new technology
  4. community management – encouraging diverse practitioners to see themselves as part of a larger whole, so that they are keen to contribute
  5. user experience – to ensure that the resulting products will be willingly accepted and embraced by “normal people” (as opposed to just early-adopter technology enthusiasts)
  6. agile project management – to avoid excess wasted investment in cases where project goals change part way through (as they inevitably do, due to the uncertain territory being navigated)
  7. lean thinking – including a bias towards practical simplicity, a profound distrust of unnecessary complexity, and a constant desire to identify and deal with bottleneck constraints
  8. system integration – the ability to pull everything together, in a way that honours the core product proposition, and which enables subsequent further evolution.

To be clear, I see these eight areas of expertise as important for all sectors of complex technology development – not just in the smartphone industry.

Expertise isn’t something that just exists in books. It manifests itself:

  • In individual people, whose knowledge spans different domains
  • In teams – where people can help and support each other, playing to everyone’s strengths
  • In tools and processes – which are the smart embodiment of previous generations of expertise, providing a good environment to work out the next generation of expertise.

In all three cases, the expertise needs to be actively nurtured and enhanced. Companies who under-estimate the extent of the expertise they need, or who try to get that expertise on the cheap – or who stifle that expertise under the constraints of mediocre management – are likely to miss out on the key opportunities provided by smartphone technology. (Just because it might appear that a company finds it easy to do various tasks, it does not follow that these tasks are intrinsically easy to carry out. True experts often make hard tasks look simple.)

But even with substantial expertise available and active, it remains essentially impossible to be sure about the timescales for major new product releases:

  • Novel technology problems can take an indeterminate amount of time to solve
  • Even if the underlying technology progresses quickly, the other factors required to create an end-to-end solution can fall foul of numerous unforeseen delays.

In case that sounds like a depressing conclusion, I’ll end this section with three brighter thoughts:

First, if predictability is particularly important for a project, you can increase your chances of hitting the schedule by sticking to incremental evolutions of pre-existing solutions. That can take you a long way, even though you’ll reduce the chance of more dramatic breakthroughs.

Second, if you can afford it, you should consider running two projects in parallel – one that sticks to incremental evolution, and another that experiments with more disruptive technology. Then see how they both turn out.

Third, the relationship between “speed of technology progress” and “speed of product progress” is more complex than I’ve suggested. I’ve pointed out that the latter can lag the former, especially where there’s a shortage of expertise in fields such as ecosystem management and the creation of business models. However, sometimes the latter can move faster than the former. That occurs once the virtuous cycle is working well. In that case, the underlying technological progress might be exponential, whilst the productisation progress could become super-exponential.
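
To make that last claim a little more concrete, here is a deliberately simple numerical sketch (a toy model of my own, with made-up parameters – a 50% initial growth rate and a 10% feedback term – rather than anything drawn from real market data). It contrasts plain exponential growth with growth whose rate itself improves as the installed base expands, which is one way a virtuous cycle can push productisation beyond a simple exponential:

  # Toy model (illustrative only): plain exponential growth versus
  # "virtuous cycle" growth, where the growth rate itself improves
  # as the installed base expands.

  def simulate(years=10, feedback=0.0):
      base = 1.0      # installed base, arbitrary units
      rate = 0.5      # 50% growth per year to start with
      history = []
      for year in range(1, years + 1):
          base *= (1.0 + rate)
          # A larger base attracts more developers, services and investment,
          # which (in this toy model) nudges the growth rate itself upwards.
          rate *= (1.0 + feedback)
          history.append((year, round(base, 1)))
      return history

  print("Plain exponential:    ", simulate(feedback=0.0))
  print("Virtuous cycle (10%): ", simulate(feedback=0.1))

With zero feedback, the base simply compounds at a fixed 50% a year; with even a modest feedback term, the annual growth rate itself keeps climbing, and the curve pulls away from any fixed exponential.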

3.5 Successes and shortcomings in predicting the future of technology

We all know that it’s a perilous task to predict the future of technology. The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars. These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary. Indeed, history is littered with curious and amusing examples of flawed predictions of the future. Popular science fiction fares no better:

  • The TV series “Lost in Space”, which debuted in 1965, featured a manned spacecraft leaving Earth en route for a distant star, Alpha Centauri, on 16 October 1997.
  • Arthur C Clarke’s “2001: A Space Odyssey”, made in 1968, featured a manned spacecraft flight to Jupiter.
  • Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”, coincidentally also first published in 1968, described a world set in 1992 in which androids (robots) are extremely hard to distinguish from humans. (Later editions of the novel moved the date to 2021; the film Blade Runner, which was based on the novel, was set in 2019.)

Forecasts often go wrong when they spot a trend, and then extrapolate it. Projecting trends into the future is a dangerous game:

  • Skyscrapers rapidly increased in height in the early decades of the 20th century. But after the Empire State Building was completed in 1931, the rapid increases stopped.
  • Passenger aircraft rapidly increased in speed in the middle decades of the 20th century. But after Concorde, which flew its maiden flight in 1969, there have been no more increases.
  • Manned space exploration went at what might be called “rocket pace” from the jolt of Sputnik in 1957 up to the sets of footprints on the Moon in 1969-1972, but then came to an abrupt halt. At the time of writing, there are still no confirmed plans for a manned trip to Mars.

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects. Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on. Just because a technology improved in the past, it does not mean there will be sufficient societal motivation to keep on improving it in the future. Technology is not enough. Especially for changes that are complex and demanding, up to six additional criteria need to be satisfied as well:

  1. The technological development has to satisfy a strong human need.
  2. The development has to be possible at a sufficiently attractive price to individual end users.
  3. The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle.
  4. There must be a clear implementation path whereby the eventual version of the technology can be attained through a series of steps that are, individually, easier to achieve.
  5. When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be open (to accept new ideas).
  6. Likewise, the development process must be commercially attractive, or provide some other strong incentive, to encourage the generation of new ideas, and, even more important, to encourage people to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation.

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications. For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son. The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century. The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world! As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As mentioned earlier, an important factor that can allow accelerating technological progress is the establishment of an operational virtuous cycle that provides positive feedback. Here are four more examples:

  1. The first computers were designed on paper and built by hand. Later computers benefited from computer-aided design and computer-aided manufacture. Even later computers benefit from even better computer-aided design and manufacture…
  2. Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allows more complex software to be created more quickly – including more powerful tools…
  3. More powerful hardware enables new software which enables new use cases which demand more innovation in improving the hardware further…
  4. Technology reduces prices which allows better technology to be used more widely, resulting in more people improving the technology…

A well-functioning virtuous cycle makes it more likely that technological progress can continue. But the biggest factor determining whether a difficult piece of progress occurs is often the degree of society’s motivation towards that progress. Investment in ever-faster passenger airlines ceased, because people stopped perceiving that ever-faster airlines were that important. Manned flight to Mars was likewise deemed to be insufficiently important: that’s why it didn’t take place. The kinds of radical technological progress that I discuss in this book are, I believe, all feasible, provided sufficient public motivation is generated and displayed in support of that progress. This includes major enhancements in health, education, clean energy, artificial general intelligence, human autonomy, and human fulfilment. The powerful public motivation will cause society to prioritise developing and supporting the types of rich expertise that are needed to make this technological progress a reality.

3.6 Moore’s Law: A recap

When I started work at Psion, I was given a “green-screen” console terminal, connected to a VAX minicomputer running VMS. That’s how I wrote my first pieces of software for Psion. A short while afterwards, we started using PCs. I remember that the first PC I used had a 20MB hard disk. I also remember being astonished to find that a colleague had a hard disk that was twice as large. What on earth does he do with all that disk space, I wondered. But before long, I had a new PC with a larger hard disk. And then, later, another new one. And so on, throughout my 20+ year career in Psion and Symbian. Each time a new PC arrived, I felt somewhat embarrassed at the apparent excess of computing power it provided – larger disk space, more RAM memory, faster CPU clock speed, etc. On leaving Symbian in October 2009, I bought a new laptop for myself, along with an external USB disk drive. That disk drive was two terabytes in size. For roughly the same amount of money (in real terms) that had purchased 20MB of disk memory in 1989, I could now buy a disk that was 100,000 times larger. That’s broadly equivalent to hard disks doubling in size every 15 months over that 20-year period.
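
As a quick check on that arithmetic (a minimal sketch: the 100,000-fold growth and the 20-year span come straight from the figures above, and the rest is just logarithms):

  import math

  def doubling_period_months(growth_factor, years):
      """Months per doubling implied by growing by growth_factor over the given number of years."""
      doublings = math.log2(growth_factor)
      return years * 12 / doublings

  # 20MB (1989) to 2TB (2009): a factor of 100,000 over roughly 20 years.
  print(round(doubling_period_months(100_000, 20), 1))
  # ~14.4, i.e. hard disks doubling roughly every 15 months

The same little calculation reproduces, to within a month or so, the other doubling periods quoted in the rest of this section – for supercomputer performance, for Kurzweil’s computation-per-dollar figure, and for the cost of DNA sequencing.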

This repeated doubling of performance, on a fairly regular schedule, is a hallmark of what is often called “Moore’s Law”, following a paper published in 1965 by Gordon Moore (subsequently one of the founders of Intel). It’s easy to find other examples of this exponential trend within the computing industry. University of London researcher Shane Legg has published a chart of the increasing power of the world’s fastest supercomputers, from 1960 to the present day, along with a plausible extension to 2020. This chart measures the “FLOPS” capability of each supercomputer – the number of floating point (maths) operations it can execute in a second. The values move all the way from kiloFLOPS through megaFLOPS, gigaFLOPS, teraFLOPS, and petaFLOPS, and point towards exaFLOPS by 2020. Over sixty years, the performance improves through twelve and a half orders of magnitude, which is more than 40 doublings. This time, the doubling period works out at around 17 months.

Radical futurist Ray Kurzweil often uses the following example:

When I was an MIT undergraduate in 1965, we all shared a computer that took up half a building and cost tens of millions of dollars. The computer in my pocket today [a smartphone] is a million times cheaper and a thousand times more powerful. That’s a billion-fold increase in the amount of computation per dollar since I was a student.

A billion-fold increase consists of 30 doublings – which, spread out over 44 years from 1965 to 2009, gives a doubling period of around 18 months. And to get the full picture of the progress, we should include one more observation alongside the million-fold price improvement and thousand-fold processing power improvement: the 2009 smartphone is about one hundred thousand times smaller than the 1965 mainframe.

These steady improvements in computer hardware, spread out over six decades so far, are remarkable, but they’re not the only example of this kind of long-term prodigious increase. Martin Cooper, who has a good claim to be considered the inventor of the mobile phone, has pointed out that the amount of information that can be transmitted over useful radio spectrum has roughly doubled every 30 months since 1897, when Guglielmo Marconi first patented the wireless telegraph:

The rate of improvement in use of the radio spectrum for personal communications has been essentially uniform for 104 years. Further, the cumulative improvement in the effectiveness of personal communications total spectrum utilization has been over a trillion times in the last 90 years, and a million times in the last 45 years

Smartphones have benefited mightily from both Moore’s Law and Cooper’s Law. Other industries can benefit in a similar way too, to the extent that their progress can be driven by semiconductor-powered information technology, rather than by older branches of technology. As I’ll review in later chapters, there are good reasons to believe that both medicine and energy are on the point of dramatic improvements along these lines. For example, the so-called Carlson curves (named after biologist Rob Carlson) track exponential decreases in the costs of both sequencing (reading) and synthesising (writing) base pairs of DNA. It cost about $10 to sequence a single base pair in 1990, but this had reduced to just 2 cents by 2003 (the date of the completion of the human genome project). That’s a 500-fold reduction – around 9 halvings of cost – in just 13 years, making a doubling period (in base pairs sequenced per dollar) of around 17 months.

Moore’s Law and Cooper’s Law are far from being mathematically exact. They should not be mistaken for laws of physics, akin to Newton’s laws or Maxwell’s equations. Instead, they are empirical observations, with lots of local deviations when progress temporarily goes either faster or slower than the overall average. Furthermore, scientists and researchers need to keep on investing lots of skill, across changing disciplines, to keep the progress occurring. The explanation given on the website of Martin Cooper’s company, ArrayComm, provides useful insight:

How was this improvement in the effectiveness of personal communication achieved? The technological approaches can be loosely categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increase in magnitude of the usable radio frequency spectrum.

How much of the improvement can be attributed to each of these categories? Of the million times improvement in the last 45 years, roughly 25 times were the result of being able to use more spectrum, 5 times can be attributed to the ability to divide the radio spectrum into narrower slices — frequency division. Modulation techniques like FM, SSB, time division multiplexing, and various approaches to spread spectrum can take credit for another 5 times or so. The remaining sixteen hundred times improvement was the result of confining the area used for individual conversations to smaller and smaller areas — what we call spectrum re-use…
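
As a small check on those numbers (using only the factors given in the quotation above), the four categories do indeed multiply out to the quoted million-fold improvement, with spectrum re-use contributing by far the most of the roughly twenty doublings involved:

  import math

  factors = {
      "more usable spectrum": 25,
      "frequency division": 5,
      "modulation techniques": 5,
      "spectrum re-use": 1600,
  }

  print(math.prod(factors.values()))   # 1000000 -- the million-fold improvement over 45 years

  for name, factor in factors.items():
      print(f"{name}: about {math.log2(factor):.1f} doublings")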

Cooper suggests that his law can continue to hold until around 2050. Experts at Intel say they can foresee techniques to maintain Moore’s Law for at least another ten years – potentially longer. In assessing the wider implications of these laws, we need to consider three questions:

  1. How much technical runway is left in these laws?
  2. Can the benefits of these laws in principle be applied to transform other industries?
  3. Will wider system effects – as discussed earlier in this chapter – frustrate overall progress in these industries (despite the technical possibilities), or will they in due course even accelerate the underlying technical progress?

My answers to these questions:

  1. Plenty
  2. Definitely
  3. It depends on whether we can educate, motivate, and organise a sufficient critical mass of concerned citizens. The race is on!

>> Next chapter >>

18 February 2010

Coping without my second brain

Filed under: intelligence, Psion — David Wood @ 6:06 pm

Every so often, my current Psion Series 5mx PDA develops a fault in its screen display.  Due to repeated stress on the cable joining the screen to the main body of the device, the connectors in the cable fail.

When that happens, all I can see on the screen is a series of horizontal lines, looking a bit like an extract of a bar code:

I find that, with my pattern of using the Psion device, this problem arises roughly once every 6-12 months.  It’s because I open and shut the device numerous times during most waking hours – in order to access the applications on the device which help me to manage my life: Agenda, Contacts, To-do, Alarms, numerous documents and spreadsheets, and so on.  The heavy usage magnifies the stress on the cable.

I can manage my life with these applications provided the screen is working.

When the screen cable fault occurs, I can sometimes mitigate the problem by viewing the screen at a half-open angle.  I presume that, with less stress on the cable, the connectors are able to work properly again.  However, using the device in a propped partially-open state is hardly an ideal ergonomic experience.

Because I know this fault will eventually afflict all the S5mx devices I use, I keep a backup device – bought from eBay.  Alas, my current device developed this problem when I opened it last Saturday, as I sat down in the aeroplane to fly from Heathrow to Barcelona, for this week’s Mobile World Congress event.  My backup device is still at home in London.  Worse, the usual remediation step did not work in this case: the screen was unviewable even when partially open.

Hmm – I thought to myself – maybe this will be a chance to see how well I can function without the device I often think of as my second brain.

The answer: it has been hard!  Details of my hotel, along with other logistics matters and appointments, are stored inside the S5mx.

To restore at least an element of personal productivity, I copied a few key files from the Psion to my laptop, and started up the PC emulator of this device.  It took me a while to remember how to configure the emulator (but I found the details via Google – part of my third brain).  My heart started to beat normally again, as my Agenda showed up on my laptop screen:

By means of this PC emulator, I was able to find out where I should be at various times, and so on.

On the other hand, my laptop is significantly less convenient than the pocket-occupying, instant-on Psion device.  Time and again over the last few days, I’ve scribbled notes on pieces of paper, and been slow to identify times in my schedule when I would be able to slot in new meetings.  It’s been a strain.

I feel a little bit like the character Manfred who has his personal glasses stolen (by “Spring-Heeled Jack”) at the start of Chapter 3 of Charlie Stross‘s magnificent book Accelerando:

Spring-Heeled Jack runs blind, blue fumes crackling from his heels. His right hand, outstretched for balance, clutches a mark’s stolen memories. The victim is sitting on the hard stones of the pavement behind him. Maybe he’s wondering what’s happened; maybe he looks after the fleeing youth. But the tourist crowds block the view effectively, and in any case, he has no hope of catching the mugger. Hit-and-run amnesia is what the polis call it, but to Spring-Heeled Jack it’s just more loot to buy fuel for his Russian army-surplus motorized combat boots.

* * *

The victim sits on the cobblestones clutching his aching temples. What happened? he wonders. The universe is a brightly colored blur of fast-moving shapes augmented by deafening noises. His ear-mounted cameras are rebooting repeatedly: They panic every eight hundred milliseconds, whenever they realize that they’re alone on his personal area network without the comforting support of a hub to tell them where to send his incoming sensory feed. Two of his mobile phones are bickering moronically, disputing ownership of his grid bandwidth, and his memory … is missing.

A tall blond clutching an electric chainsaw sheathed in pink bubble wrap leans over him curiously: “you all right?” she asks.

“I –” He shakes his head, which hurts. “Who am I?” His medical monitor is alarmed because his blood pressure has fallen: His pulse is racing, his serum cortisol titer is up, and a host of other biometrics suggest that he’s going into shock.

“I think you need an ambulance,” the woman announces. She mutters at her lapel, “Phone, call an ambulance. ” She waves a finger vaguely at him as if to reify a geolink, then wanders off, chain-saw clutched under one arm. Typical southern émigré behavior in the Athens of the North, too embarrassed to get involved. The man shakes his head again, eyes closed, as a flock of girls on powered blades skid around him in elaborate loops. A siren begins to warble, over the bridge to the north.

Who am I? he wonders. “I’m Manfred,” he says with a sense of stunned wonder. He looks up at the bronze statue of a man on a horse that looms above the crowds on this busy street corner. Someone has plastered a Hello Cthulhu! holo on the plaque that names its rider: Languid fluffy pink tentacles wave at him in an attack of kawaii. “I’m Manfred – Manfred. My memory. What’s happened to my memory?” Elderly Malaysian tourists point at him from the open top deck of a passing bus. He burns with a sense of horrified urgency. I was going somewhere, he recalls. What was I doing? It was amazingly important, he thinks, but he can’t remember what exactly it was. He was going to see someone about – it’s on the tip of his tongue –

When I reach home again this evening, I’ll copy all my data files to my backup second brain, and (all being well) I’ll be back to my usual level of personal organisation and effectiveness.

10 February 2010

The mobile multitasking advantage

Filed under: Android, applications, architecture, iPhone, multitasking, Psion, universities — David Wood @ 11:48 am

How important is it for a mobile device to support background multitasking?

Specifically, how important is it that users can install, onto the device, applications which will continue to run well in background whilst the user is simultaneously using the device for another purpose?

Humans are multitasking creatures.  We get involved in many activities simultaneously: listening to music, browsing the web, holding conversations, taking notes, staying alert for interruptions – so shouldn’t our mobile devices support this model of working?

One argument is that this feature is not important.  That’s because the Apple iPhone fails to offer it, and the sales of the iPhone don’t seem to have suffered as a result.  The applications built into the iPhone continue to operate in background, but downloaded apps don’t.  iPhone apps continue to sell well.  Conclusion: mobile multitasking has little importance in the real world.  Right?

But that’s a weak argument.  Customer sentiment can change.  If users start talking about use cases which the iPhone fails to support – and which other smartphones support well – then public perception of the fitness of the iPhone system software could suffer a significant downturn.  (“iPhone apps – they’re so 2009…”)

How about Android?  That offers background multitasking.  But does it do it well?

My former colleague Brendan Donegan has been putting an Android phone to serious use, and has noticed some problems in how it works.  He has reported his findings in a series of tweets:

I say, with all honesty that Android’s multitasking is a huge travesty. Doesn’t even deserve to be called that

Poor prioritisation of tasks. Exemplar use-case – Spotify [music playing app] + camera

Spotify will jitter and the photo will be taken out of sync with flash, giving a whited out image

Symbian of course handles the same use case flawlessly

Android really is just not up to doing more than one ‘intensive’ task at a time

Even the [built-in] Android music player skips when taking a photo

(Brendan has some positive remarks about his Android phone too, by the way.)

Mark Wilcox suggests a diagnosis:

sounds like the non-real-time, high interrupt latency on Linux is causing some problems in multimedia use cases
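
To see why that kind of latency matters for the Spotify-plus-camera case, here’s a deliberately simplified, platform-agnostic sketch (toy numbers of my own choosing, not measurements from any real handset).  If a long burst of work cannot be preempted in favour of the audio thread, every buffer refill that falls inside the burst is missed – and each missed refill is an audible skip:

  AUDIO_PERIOD_MS = 20      # assume the audio thread must refill its buffer every 20 ms
  CAMERA_BURST_MS = 180     # assume a non-preemptible capture burst lasting 180 ms

  def missed_refills(burst_start_ms, total_ms=500):
      missed = []
      for t in range(0, total_ms, AUDIO_PERIOD_MS):
          # Any refill falling inside the burst is delayed past its deadline.
          if burst_start_ms <= t < burst_start_ms + CAMERA_BURST_MS:
              missed.append(t)
      return missed

  print(missed_refills(burst_start_ms=100))
  # -> [100, 120, 140, 160, 180, 200, 220, 240, 260]

A platform that can either preempt the burst or raise the priority of the audio thread avoids these misses; one that cannot will stutter, however fast its processor looks on paper.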

Personally, I find this discussion fascinating – on both an architecture level and a usability level.  I see a whole series of questions that need answers:

  1. Are these results applicable just to one Android phone, or are they intrinsic to the whole platform?
  2. Could these problems be fixed by fairly simple software modifications, or are they more deeply rooted?
  3. How do other mobile platforms handle similar use cases?  What about feature phone platforms?
  4. How important is the use case of playing music in background, while taking a photograph?  Are there other use cases that could come to be seen as more significant?

Perhaps this is a good topic for a university research project.  Any takers?

(Related to this, it would be interesting to know more about the background processing abilities of modern feature phones.  For example, it used to be the case that some feature phones would discard the contents of partially written text messages if there was an incoming voice call.  Has anyone looked into this recently?)

Regardless of the merits of these particular use cases, I am convinced that software responsiveness is important.  If the software system is tied up attending to task A when I want it to do task B, I’m frustrated.  I don’t think I’m alone in this feeling.

My 1990s-vintage Psion PDA typically runs more than a dozen apps in parallel (several word processors, spreadsheets, and databases, plus an agenda, a tube map app, a calculator, and so on) and switches instantly between them.  That sets my baseline expectation.

Here’s another mobile use case that’s on my mind a lot these days.  It applies, not to a PDA or mobile phone, but to my laptop.  It’s not (I think) a device problem, but a wider system problem, involving network connectivity:

  • I frequently find myself in mobile situations where I’m browsing websites on my laptop (for example, on the train), and the pages take ages to load;
  • The signal indicator on the built-in wireless modem app says there’s a strong signal, but for some reason, wireless traffic is squeezed;
  • I sit watching empty tabs on my Firefox browser, waiting and waiting and waiting for content to appear;
  • In frustration, I’ll often open another tab, and try to visit the BBC website – to rule out the possibility that the server for the other web page(s) has gone down – but that gives me another blank page;
  • Eventually, things recover, but in the meantime, I’ve been left twiddling my thumbs.

When I switch to a WiFi connection instead of a cellular connection, things are usually better – though I’ve had the same bitter experience with some WiFi hotspots too (for example, in some Starbucks coffee shops).

So what should the highest priority be for system architects to optimise?  Responsiveness comes high on my own wishlist.  I recognise that this will often require changes in several parts of the software system.

16 October 2009

Personal announcement: Life beyond Symbian

Filed under: Psion, Symbian, Symbian Foundation — David Wood @ 4:19 pm

I have a personal announcement to make: I’m leaving Symbian.

I’ve greatly enjoyed my work of the last 18 months: helping with the preparations and announcement of the Symbian Foundation, and then serving on its Leadership Team as Catalyst and Futurist.

I’m pleased by how much has been accomplished in a short space of time.  The transition to full open source is well and truly under way.  The extended Symbian community will shortly be gathering to exchange news and views of progress and opportunities at this year’s SEE09 event in Earls Court, London.  It will be a very busy event, full of insight and announcements, with (no doubt) important new ideas being hatched and reviewed.

On a personal note, I’m proud of the results of my own work on the Symbian blog, and in building and extending Symbian engagement in China, culminating in the recent press release marking a shared commitment by China Mobile and Symbian.  I’m also honoured to have been at the core of a dynamic and energetic leadership team, providing advice and support behind the scenes.

In many ways, my time in the Symbian Foundation has been a natural extension of a 20-year career with what we now call Symbian platform software (and its 16-bit predecessor): 10 years with PDA manufacturer Psion followed by 10 years on the Leadership Team of Symbian Ltd, prior to the launch of the Symbian Foundation.  In summary, I’ve spent 21 hectic years envisioning, architecting, implementing, supporting, and avidly using smart mobile devices.  It’s been a fantastic experience.

However, there’s more to life than smart mobile devices.  For a number of years, I’ve been nursing a growing desire to explore alternative career options and future scenarios. The milestone of my 50th birthday a few months back has helped to intensify this desire.

Anyone who has dipped into my personal blog or followed my tweets will have noticed my deep interest in topics such as: the future of energy, accelerated climate change, accelerated artificial intelligence, looming demographic changes and the longevity dividend, life extension and the future of medicine, nanotechnology, smart robotics, abundance vs. scarcity, and the forthcoming dramatic societal and personal impacts of all of these transformations.  In short, I am fascinated and concerned about the breakthrough future of technology, as well as by the breakthrough future of smartphones.

It’s time for me to spend a few months investigating if I can beneficially deploy my personal skills in advocacy, analysis, coordination, envisioning, facilitation, and troubleshooting (that is, my skills as a “catalyst and futurist”) in the context of some of these other “future of technology” topics.

I’m keeping an open mind to the outcome of my investigation.  I do believe that I need to step back from employment with the Symbian Foundation in order to give that investigation a proper chance to succeed.  I need to open up time for wide-ranging discussions with numerous interesting individuals and companies, both inside and outside the smartphone industry.  I look forward to finding a new way to balance my passionate support for Symbian and smartphones with my concern for the future of technology.

Over the next few days, I’ll be handing over my current Symbian Foundation responsibilities to colleagues and partners.  I’ll become less active on Symbian blogs, forums, and emails.  For those who wish to bid me “bon voyage”, I’ll be happy to chat over a drink at SEE09 – by which time I will have ceased to be an employee with the Symbian Foundation, and will simply be an enthusiastic supporter and well-wisher.

After I leave Symbian, I’ll still be speaking at conferences from time to time – but no longer as a representative of Symbian.  The good news is that Symbian now possesses a strong range of talented spokespeople who will do a fine job of continuing the open dialog with the wider community.

Many thanks are due to my Symbian Foundation colleagues, especially Executive Director Lee Williams and HR Director Steve Warner, for making this transition as smooth as possible.  It’s been a great privilege to work with this extended team!

To reach me in the future, you can use my new email address, davidw AT deltawisdom DOT com.  My mobile phone number will remain the same as before.
