27 August 2010

Reconsidering recruitment

Filed under: Accenture, Psion, recruitment, Symbian — David Wood @ 5:12 am

The team at ITjoblog (‘the blog for IT professionals’) recently asked me to write a guest column for them.  It has just appeared: “Reconsidering recruitment”.

With a few slight edits, here’s what I had to say…

Earlier in my career, I was involved in lots of recruitment.  The software team inside Psion followed a steep headcount trajectory through the process of transforming into Symbian, and continued to grow sharply in subsequent years as many new technology areas were added to the scope of Symbian OS.  As one of the senior software managers in the company throughout this period, I found myself time and again in interviewing and recruitment situations.  I was happy to give significant amounts of my time to these tasks, since I knew what a big impact good (or bad) recruitment can make to organisational dynamics.

In recent weeks, I’ve once again found myself in a situation where considerable headcount growth is expected.  I’m working on a project at Accenture, assisting their Embedded Mobility Services group.  Mobile is increasingly a hot topic, and there’s strong demand for people providing expert consultancy in a variety of mobile development project settings. This experience has led me to review my beliefs about the best way to carry out recruitment in such situations.  Permit me to think aloud…

To start with, I remain a huge fan of graduate recruitment programs.  The best graduates bring fire in their bellies: a “we can transform the world” attitude that doesn’t know what’s meant to be impossible – and often carries it out!  Of course, graduates typically take some time before they can be deployed in the frontline of commercial software development.  But if you plan ahead, and have effective “bootcamp” courses, you’ll have new life in your teams soon enough.  There will be up-and-coming stars ready to step into the shoes left by any unexpected staff departures or transfers.  If you can hire a group of graduates at the same time, so much the better.  They can club together and help each other, sharing and magnifying what they each individually learn from their assigned managers and mentors.  That’s the beauty of the network effect.

That’s just one example of the importance of networks in hiring.  I place a big value on having prior knowledge of someone who is joining your team.  Rather than having to trust your judgement during a brief interviewing process, and whatever you can distill from references, you can rely on actual experience of what someone is like to work with.  This effect becomes more powerful when several of your current workforce can attest to the qualities of a would-be recruit, based on all having worked together at a previous company.  I saw Symbian benefit from this effect via networks of former Nortel employees who all knew each other and who could vouch for each other’s capabilities during the recruitment process.  Symbian also had internal networks of former high-calibre people from SCO, and from Ericsson, among other companies.  The benefit here isn’t just that you know that someone is a great professional.  It’s that you already know what their particular special strengths are.  (“I recommend that you give this task to Mike.  At our last company, he did a fantastic job of a similar task.”)

Next, I recommend hiring for flexibility, rather than simply trying to fit a current task description.  I like to see evidence of people coping with ambiguity, and delivering good results in more than one kind of setting.  That’s because projects almost always change; likewise for organisational structures.  So while interviewing, I’m not trying to assess if the person I’m interviewing is the world expert in, say, C++ templates.  Instead, I’m looking for evidence that they could turn their hand to mastering whole new skill areas – including areas that we haven’t yet realised will be important to future projects.

Similarly, rather than just looking for rational intelligence skills, I want to see evidence that someone can fit well into teams.  “Soft skills”, such as inter-personal communication and grounded optimism, aren’t just an optional extra, even for roles with intense analytic content.  The best learning and the best performance come from … networks (to use that word again) – but you can’t build high-functioning networks if your employees lack soft skills.

Finally, high-performing teams that address challenging problems benefit from internal variation.  So don’t just look for near-clones of people who already work for you.  When scanning CVs, keep an eye open for markers of uniqueness and individuality.  At interview, these markers provide good topics to explore – where you can find out something of the underlying character of the candidate.

Inevitably, you’ll sometimes make mistakes with recruitment, despite taking lots of care in the process.  To my mind, that’s OK.  In fact, it’s better to take a few risks, since you can find some excellent new employees in the process.  But you need to have in place a probation period, during which you pay close attention to how your hires are working out.  If a risky candidate turns out to be a disappointment, even after some coaching and support, then you should act fast – for the sake of everyone concerned.

In summary, I see recruitment and induction as a task that deserves high focus from some of the most skilled and perceptive members of your existing workforce.  Skimp on these tasks and your organisation will suffer – sooner or later.  Invest well in these tasks, and you should see the calibre of your workforce steadily grow.

For further discussion, let me admit that rules tend to have limits and exceptions.  You might find it useful to identify limits and counter-examples to the rules of thumb I’ve outlined above!

19 May 2010

Chapter finished: A journey with technology

Five more days have passed, and I’ve completed another chapter draft (see snapshot below) of my proposed new book.

This takes me up to 30% of what I hope to write:

  • I’ve drafted three out of ten planned chapters.
  • The wordcount has reached 15,000, out of a planned total of 50,000.

After this, I plan to dig more deeply into specific technology areas.  I’ll be moving further out of my comfort zone.  First will be “Health”.  Fortuitously, I spent today at an openMIC meeting in Bath, entitled “i-Med: Serious apps for mobile healthcare”.  That provided me with some useful revision!


3. A journey with technology

<Snapshot of material whose master copy is kept here>

<< Previous chapter <<

Here’s the key question I want to start answering in this chapter: how quickly can technology progress in the next few decades?

This is far from being an academic question. At heart, I want to know whether it’s feasible for that progress to be quick enough to provide technological solutions to the calamitous issues and huge opportunities described in the first chapter of this book. The progress must be quick enough, not only for core technological research, but also for productisation of that technology into the hands of billions of consumers worldwide.

For most of this book, I’ll be writing about technologies from an external perspective. I have limited direct experience with, for example, the healthcare industry and the energy industry. What I have to say about these topics will be as, I hope, an intelligent outside observer. But in this chapter, I’m able to adopt an internal perspective, since the primary subject matter is the industry where I worked for more than twenty years: the smartphone industry.

In June 1988, I started work in London at Psion PLC, the UK-based manufacturer of electronic organisers. I joined a small team working on the software for a new generation of mobile computers. In the years that followed, I spent countless long days, long nights and (often) long weekends architecting, planning, writing, integrating, debugging and testing Psion’s software platforms. In due course, Psion’s software would power more than a million PDAs in the “Series 3” family of devices. However, the term “PDA” was unknown in 1988; likewise for phrases like “smartphone”, “palmtop computer”, and “mobile communicator”. The acronym “PDA”, meaning “personal digital assistant”, was coined by Apple in 1992 in connection with their ambitious but flawed “Newton” project – long before anyone conceived of the name “iPhone”.

I first became familiar with the term “smartphone” in 1996, during early discussions with companies interested in using Psion’s “EPOC32” software system in non-PDA devices. After a faltering start, these discussions gathered pace. In June 1998, ten years after I had joined Psion, a group of Psion senior managers took part in the announcement of the formation of a new entity, Symbian Ltd, which had financial backing from the three main mobile phone manufacturers of the era – Ericsson, Motorola, and Nokia. Symbian would focus on the software needs of smartphones. The initial software, along with 150 employees led by a five-person executive team, was contributed by Psion. In the years that followed, I held Symbian executive responsibility, at different times, for Technical Consulting, Partnering, and Research. In due course, sales of devices based on Symbian OS exceeded 250 million devices.

In June 2008 – ten more years later, to the day – another sweeping announcement was made. The source code of Symbian OS, along with that of the S60 UI framework and applications from Nokia, would become open source, and would be overseen by a new independent entity, the Symbian Foundation.

My views on the possibilities for radical improvements in technology as a whole are inevitably coloured by my helter-skelter experiences with Psion and Symbian. During these 20+ years of intense projects following close on each other’s heels, I saw at first hand, not only many issues with developing and productising technology, but also many issues in forecasting the development and productisation of technology.

For example, the initial June 1998 business plans for Symbian are noteworthy both for what we got right, and for what we got wrong.

3.1 Successes and shortcomings in predicting the future of smartphones

In June 1998, along with my colleagues on the founding team at Symbian, I strove to foresee how the market for smartphones would unfold in the years ahead. This forecast was important, as it would:

  • Guide our own investment decisions
  • Influence the investment decisions of our partner companies
  • Set the context for decisions by potential employees whether or not to join Symbian (and whether or not to remain with Symbian, once they had joined).

Many parts of our vision turned out correct:

  • Interest grew strongly, both in computers with increased mobility, and in mobile phones with increased computing capability.
  • Sales of Symbian-powered mobile devices would, by the end of the first decade of the next century, be measured in 100s of millions.
  • Our phrase, “Smartphones for all”, which initially struck many observers as ridiculous, became commonplace: interest in smartphones stopped being the preserve of a technologically sophisticated minority, and became a mainstream phenomenon.
  • Companies in numerous industries realised that they needed strong mobile offerings, to retain their relevance.
  • Rather than every company developing its own smartphone platform, there were big advantages for companies to collaborate in creating shared standard platforms.
  • The attraction of smartphones grew with the availability of add-on applications that delivered functionality tailored to the needs of individual users.

Over the next decade, a range of new features became increasingly widespread on mobile phones, despite early scepticism:

  • Colour screens
  • Cameras – and video recorders
  • Messaging: SMS, simple email, rich email…
  • Web browsing: Google, Wikipedia, News…
  • Social networking: Facebook, Twitter, blogs…
  • Games – including multiplayer games
  • Maps and location-based services
  • Buying and selling (tickets, vouchers, cash).

By 2010, extraordinarily powerful mobile devices are in widespread use in almost every corner of the planet. An average bystander transported from 1998 to 2010 might well be astonished at the apparently near-magical capabilities of these ubiquitous devices.

On the other hand, many parts of our 1998 vision proved wrong.

First, we failed to foresee many of the companies that would be the most prominent in the smartphone industry by the end of the next decade. In 1998:

  • Apple seemed to be on a declining trajectory.
  • Google consisted of just a few people working in a garage. (Like Symbian, Google was founded in 1998.)
  • Samsung and LG were known to the Symbian team, but we decided not to include them on our initial list of priority sales targets, in view of their modest sales figures at the time.

Second, although our predictions of eventual sales figures for Symbian devices were broadly correct – namely 100s of millions – this was the result of two separate mistakes cancelling each other out:

  • We expected to have a higher share of the overall mobile phone market (over 50% – perhaps even approaching 100%).
  • We expected that overall phone market to remain at the level of 100s of millions per annum – we did not imagine it would become as large as a billion per year.

(A smaller-than-expected proportion of a larger-than-expected market worked out at around the same volume of sales.)
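The way these two mis-estimates offset each other can be made concrete with a quick back-of-the-envelope calculation. The specific percentages and market sizes below are illustrative round numbers chosen to match the text’s qualitative description, not figures from Symbian’s actual business plans:

```python
# Illustrative back-of-the-envelope: two forecasting errors cancelling out.
# All figures are hypothetical round numbers, not Symbian's actual plans.

# 1998 expectation: a dominant share of a smaller market
expected_market = 400_000_000   # annual handset sales in the "100s of millions"
expected_share = 0.60           # "over 50%" platform share
expected_volume = expected_market * expected_share

# What happened: a smaller share of a much larger market
actual_market = 1_200_000_000   # the market grew past a billion per year
actual_share = 0.20             # well below the hoped-for majority share
actual_volume = actual_market * actual_share

print(f"Expected: {expected_volume:,.0f} devices/year")
print(f"Actual:   {actual_volume:,.0f} devices/year")
# Both calculations land on the same volume: the overestimate of share
# and the underestimate of market size cancel each other out.
```

With these (hypothetical) numbers, both paths arrive at 240 million devices per year, showing how a forecast can be “broadly correct” in its headline figure while being badly wrong in both of its inputs.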

Third – and probably most significant for drawing wider lessons – we got the timescales significantly wrong. It took considerably longer than we expected for:

  • The first successful smartphones to become available
  • Next generation networks (supporting high-speed mobile data) to be widely deployed
  • Mobile applications to become widespread.

Associated with this, many pre-existing systems remained in place much longer than anticipated, despite our predictions that they would fail to be able to adapt to changing market demands:

  • RIM sold more and more BlackBerries, despite repeated concerns that their in-house software system would become antiquated.
  • The in-house software systems of major phone manufacturers, such as Nokia’s Series 40, likewise survived long past predicted “expiry” dates.

To examine what’s going on, it’s useful to look in more detail at three groups of factors:

  1. Factors accelerating growth in the smartphone market
  2. Factors restricting growth in the smartphone market
  3. Factors that can overcome the restrictions and enable faster growth.

Having reviewed these factors in the case of smartphone technology, I’ll then revisit the three groups of factors, with an eye to general technology.

3.2 Factors accelerating growth in the smartphone market

The first smartphone sales accelerator is decreasing price. As the devices become less expensive, more and more people can afford them. Other things being equal, a desirable piece of consumer electronics that has a lower cost will sell more.

The underlying cost of smartphones has been coming down for several reasons. Improvements in underlying silicon technology mean that manufacturers can pack more transistors into the same area of silicon for the same cost, creating more memory and more processing power. There are also various industry scale effects. Companies who work with a mobile platform over a period of time gain the benefit of “practice makes perfect”, learning how to manage the supply chain, select lower price components, and assemble and manufacture their devices at ever lower cost.

A second sales accelerator is increasing reliability. With some exceptions (that have tended to fall by the wayside), smartphones have become more and more reliable. They start faster, have longer battery life, and need fewer resets. As such, they appeal to ordinary people in terms of speed, performance, and robustness.

A third sales accelerator is increasing stylishness. In the early days of smartphones, people would often say, “These smartphones look quite interesting, but they are a bit too big and bulky for my liking: frankly, they look and feel like a brick.” Over time, smartphones became smaller, lighter, and more stylish. In both their hardware and their software, they became more attractive and more desirable.

A fourth sales accelerator is increasing word of mouth recommendations. The following sets of people have all learned, from their own experience, good reasons why consumers should buy smartphones:

  • Industry analysts – who write reports that end up influencing a much wider network of people
  • Marketing professionals – who create compelling advertisements that appear on film, print, and web
  • Retail assistants – who are able to highlight attractive functionality in devices, at point of sale
  • Friends and acquaintances – who can be seen using various mobile services and applications, and who frequently sing the praises of specific devices.

This extra word of mouth exists, of course, because of a fifth sales accelerator – the increasing number of useful and/or entertaining mobile services that are available. This includes built-in services as well as downloadable add-on services. More and more individuals learn that mobile services exist which address specific problems they experience. This includes convenient mobile access to banking services, navigation, social networking, TV broadcasts, niche areas of news, corporate databases, Internet knowledgebases, tailored educational material, health diagnostics, and much, much more.

A sixth sales accelerator is increasing ecosystem maturity. The ecosystem is the interconnected network of companies, organisations, and individuals who create and improve the various mobile services and enabling technology. It takes time for this ecosystem to form and to learn how to operate effectively. However, in due course, it forms a pool of resources that is much larger than exists just within the first few companies who developed and used the underlying mobile platform. These additional resources provide, not just a greater numerical quantity of mobile software, but a greater variety of different innovative ideas. Some ecosystem members focus on providing lower cost components, others on providing components with higher quality and improved reliability, and yet others on revolutionary new functionality. Others again provide training, documentation, tools, testing, and so on.

In summary, smartphones are at the heart of a powerful virtuous cycle. Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle. Applications and services which prove their value as add-ons for one generation of smartphones become bundled into the next generation. With this extra built-in functionality, the next generation is intrinsically more attractive, and typically is cheaper too. Developers see an even larger market and increase their efforts to supply software for this market.

3.3 Factors restricting growth in the smartphone market

Decreasing price. Increasing reliability. Increasing stylishness. Increasing word of mouth recommendations. Increasingly useful mobile services. Increasing ecosystem maturity. What could stand in the way of these powerful accelerators?


First, there are technical problems with unexpected difficulty. Some problems turn out to be much harder than initially imagined. For example, consider speech recognition, in which a computer can understand spoken input. When Psion planned the Series 5 family of PDAs in the mid 1990s (as successors to the Series 3 family), we had a strong desire to include speech recognition capabilities in the device. Three “dictaphone style” buttons were positioned in a small unit on the outside of the case, so that the device could be used even when the case (a clamshell) was shut. Over-optimistically, we saw speech recognition as a potential great counter to the pen input mechanisms that were receiving lots of press attention at the time, on competing devices like the Apple Newton and the Palm Pilot. We spoke to a number of potential suppliers of voice recognition software, who assured us that suitably high-performing recognition was “just around the corner”. The next versions of their software, expected imminently, would impress us with its accuracy, they said. Alas, we eventually reached the conclusion that the performance was far too unreliable and would remain so for the foreseeable future – even if we went the extra mile on cost, and included the kind of expensive internal microphone that the suppliers recommended. We feared that “normal users” – the target audience for Psion PDAs – would be perplexed by the all-too-frequent inaccuracies in voice recognition. So we took the decision to remove that functionality. In retrospect, it was a good decision. Even ten years later, voice recognition functionality on smartphones generally fell short of user expectations.

Speech recognition is just one example of a deeply hard technical problem, that turned out to take much longer than expected to make real progress. Others include:

  • Avoiding smartphone batteries being drained too quickly, from all the processing that takes place on the smartphone
  • Enabling rapid search of all the content on a device, regardless of the application used to create that content
  • Devising a set of application programming interfaces which have the right balance between power-of-use and ease-of-use, and between openness and security.

Second, there are “chicken-and-egg” coordination problems – sometimes also known as “the prisoner’s dilemma”. New applications and services in a networked marketplace often depend on related changes being coordinated at several different points in the value chain. Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make less sense when viewed individually. For example, successful mobile phones required both networks and handsets. Successful smartphones required new data-enabled networks, new handsets, and new applications. And so on.

Above, I wrote about the potential for “a powerful virtuous cycle”:

Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle.

However, this only works once the various factors are all in place. A new ecosystem needs to be formed. This involves a considerable coordination problem: several different entities need to un-learn old customs, and adopt new ways of operating, appropriate to the new value chain. That can take a lot of time.

Worse – and this brings me to a third problem – many of the key players in a potential new ecosystem have conflicting business models. Perhaps the new ecosystem, once established, will operate with greater overall efficiency, delivering services to customers more reliably than before. However, wherever there are prospects of cost savings, there are companies who potentially lose out – companies who are benefiting from the present high prices. For example, network operators making healthy profits from standard voice services were (understandably) apprehensive about distractions or interference from low-profit data services running over their networks. They were also apprehensive about risks that applications running on their networks would:

  • Enable revenue bypass, with new services such as VoIP and email displacing, respectively, standard voice calls and text messaging
  • Saturate the network with spam
  • Cause unexpected usability problems on handsets, which the user would attribute to the network operator, entailing extra support costs for the operator.

The outcome of these risks of loss of revenue is that ecosystems might fail to form – or, having formed with a certain level of cooperation, might fail to attain deeper levels of cooperation. Vested interests get in the way of overall progress.

A fourth problem is platform fragmentation. The efforts of would-be innovators are spread across numerous different mobile platforms. Instead of a larger ecosystem all pulling in the same direction, the efforts are diffused, with the risk of confusing and misleading participants. Participants think they can re-apply skills and solutions from one mobile product in the context of another, but subtle and unexpected differences cause incompatibilities which can take a lot of time to identify and debug. Instead of collaboration effectively turning 1+1 into 3, confusion turns 1+1 into 0.5.

A fifth problem is poor usability design. Even though a product is powerful, ordinary end users can’t work out how to operate it, or get the best experience from it. They feel alienated by it, and struggle to find their favourite functionality in amongst bewildering masses of layered menu options. A small minority of potential users, known as “technology enthusiasts”, are happy to use the product, despite these usability issues; but they are rare exceptions. As such, the product fails to “cross the chasm” (to use the language of Geoffrey Moore) to the mainstream majority of users.

The sixth problem underlies many of the previous ones: it’s the problem of accelerating complexity. Each individual chunk of new software adds value, but when they coalesce in large quantities, chaos can ensue:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window.
  • Smartphone application development may grow in difficulty, as developers need to juggle different programming interfaces and optimisation methods.
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.

3.4 Factors that can overcome the restrictions and enable faster growth

Technical problems with unexpected difficulty. Chicken-and-egg coordination problems. Conflicting business models. Platform fragmentation. Poor usability design. Accelerating complexity. These are all factors that restrict smartphone progress. Without solving these problems, the latent potential of smartphone technology goes unfulfilled. What can be done about them?

At one level, the answer is: look at the companies who are achieving success with smartphones, despite these problems, and copy what they’re doing right. That’s a good starting point, although it risks being led astray by instances where companies have had a good portion of luck on their side, in addition to progress that they merited through their own deliberate actions. (You can’t jump from the observation that company C1 took action A and subsequently achieved market success, to the conclusion that company C2 should also take action A.) It also risks being led astray by instances where companies are temporarily experiencing significant media adulation, but only as a prelude to an unravelling of their market position. (You can’t jump from the observation that company C3 is currently a media darling, to the conclusion that a continuation of what it is currently doing will achieve ongoing product success.) With these caveats in mind, here is the advice that I offer.

The most important factor to overcome these growth restrictions is expertise – expertise in both design and implementation:

  • Expertise in envisioning and designing products that capture end-user attention and which are enjoyable to use again and again
  • Expertise in implementing an entire end-to-end product solution.

The necessary expertise (both design and implementation) spans eight broad areas:

  1. technology – such as blazing fast performance, network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security
  2. ecosystem design – to solve the “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised
  3. business models – identifying new ways in which groups of companies can profit from adopting new technology
  4. community management – encouraging diverse practitioners to see themselves as part of a larger whole, so that they are keen to contribute
  5. user experience – to ensure that the resulting products will be willingly accepted and embraced by “normal people” (as opposed to just early-adopter technology enthusiasts)
  6. agile project management – to avoid excess wasted investment in cases where project goals change part way through (as they inevitably do, due to the uncertain territory being navigated)
  7. lean thinking – including a bias towards practical simplicity, a profound distrust of unnecessary complexity, and a constant desire to identify and deal with bottleneck constraints
  8. system integration – the ability to pull everything together, in a way that honours the core product proposition, and which enables subsequent further evolution.

To be clear, I see these eight areas of expertise as important for all sectors of complex technology development – not just in the smartphone industry.

Expertise isn’t something that just exists in books. It manifests itself:

  • In individual people, whose knowledge spans different domains
  • In teams – where people can help and support each other, playing to everyone’s strengths
  • In tools and processes – which are the smart embodiment of previous generations of expertise, providing a good environment to work out the next generation of expertise.

In all three cases, the expertise needs to be actively nurtured and enhanced. Companies who under-estimate the extent of the expertise they need, or who try to get that expertise on the cheap – or who stifle that expertise under the constraints of mediocre management – are likely to miss out on the key opportunities provided by smartphone technology. (Just because it might appear that a company finds it easy to do various tasks, it does not follow that these tasks are intrinsically easy to carry out. True experts often make hard tasks look simple.)

But even with substantial expertise available and active, it remains essentially impossible to be sure about the timescales for major new product releases:

  • Novel technology problems can take an indeterminate amount of time to solve
  • Even if the underlying technology progresses quickly, the other factors required to create an end-to-end solution can fall foul of numerous unforeseen delays.

In case that sounds like a depressing conclusion, I’ll end this section with three brighter thoughts:

First, if predictability is particularly important for a project, you can increase your chances of your project hitting its schedule, by sticking to incremental evolutions of pre-existing solutions. That can take you a long way, even though you’ll reduce the chance of more dramatic breakthroughs.

Second, if you can afford it, you should consider running two projects in parallel – one that sticks to incremental evolution, and another that experiments with more disruptive technology. Then see how they both turn out.

Third, the relationship between “speed of technology progress” and “speed of product progress” is more complex than I’ve suggested. I’ve pointed out that the latter can lag the former, especially where there’s a shortage of expertise in fields such as ecosystem management and the creation of business models. However, sometimes the latter can move faster than the former. That occurs once the virtuous cycle is working well. In that case, the underlying technological progress might be exponential, whilst the productisation progress could become super-exponential.

3.5 Successes and shortcomings in predicting the future of technology

We all know that it’s a perilous task to predict the future of technology. The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars. These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary. Indeed, history is littered with curious and amusing examples of flawed predictions of the future. Popular science fiction fares no better:

  • The TV series “Lost in Space”, which debuted in 1965, featured a manned spacecraft leaving Earth en route for a distant star, Alpha Centauri, on 16 October 1997.
  • Arthur C Clarke’s “2001: A Space Odyssey”, made in 1968, featured a manned spacecraft flight to Jupiter.
  • Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”, coincidentally also first published in 1968, described a world set in 1992 in which androids (robots) are extremely hard to distinguish from humans. (Later editions of the novel moved the date to 2021; the film Blade Runner, which was based on the novel, is set in 2019.)

Forecasts often go wrong when they spot a trend, and then extrapolate it. Projecting trends into the future is a dangerous game:

  • Skyscrapers rapidly increased in height in the early decades of the 20th century. But after the Empire State Building was completed in 1931, the rapid increases stopped.
  • Passenger aircraft rapidly increased in speed in the middle decades of the 20th century. But after Concorde, which made its maiden flight in 1969, there have been no further increases.
  • Manned space exploration went at what might be called “rocket pace” from the jolt of Sputnik in 1957 up to the sets of footprints on the Moon in 1969-1972, but then came to an abrupt halt. At the time of writing, there are still no confirmed plans for a manned trip to Mars.

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects. Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on. Just because a technology improved in the past, it does not mean there will be sufficient societal motivation to keep on improving it in the future. Technology is not enough. Especially for changes that are complex and demanding, up to six additional criteria need to be satisfied as well:

  1. The technological development has to satisfy a strong human need.
  2. The development has to be possible at a sufficiently attractive price to individual end users.
  3. The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle.
  4. There must be a clear implementation path whereby the eventual version of the technology can be attained through a series of steps that are, individually, easier to achieve.
  5. When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be open (to accept new ideas).
  6. Likewise, the development process must be commercially attractive, or provide some other strong incentive, to encourage the generation of new ideas, and, even more important, to encourage people to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation.

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications. For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son. The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century. The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world! As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As mentioned earlier, an important factor that can allow accelerating technological progress is the establishment of an operational virtuous cycle that provides positive feedback. Here are four more examples:

  1. The first computers were designed on paper and built by hand. Later computers benefited from computer-aided design and computer-aided manufacture. Even later computers benefit from even better computer-aided design and manufacture…
  2. Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allows more complex software to be created more quickly – including more powerful tools…
  3. More powerful hardware enables new software which enables new use cases which demand more innovation in improving the hardware further…
  4. Technology reduces prices which allows better technology to be used more widely, resulting in more people improving the technology…

A well-functioning virtuous cycle makes it more likely that technological progress can continue. But the biggest factor determining whether a difficult piece of progress occurs is often the degree of society’s motivation towards that progress. Investment in ever-faster passenger airlines ceased, because people stopped perceiving that ever-faster airlines were that important. Manned flight to Mars was likewise deemed to be insufficiently important: that’s why it didn’t take place. The kinds of radical technological progress that I discuss in this book are, I believe, all feasible, provided sufficient public motivation is generated and displayed in support of that progress. This includes major enhancements in health, education, clean energy, artificial general intelligence, human autonomy, and human fulfilment. The powerful public motivation will cause society to prioritise developing and supporting the types of rich expertise that are needed to make this technological progress a reality.

3.6 Moore’s Law: A recap

When I started work at Psion, I was given a “green-screen” console terminal, connected to a DEC VAX minicomputer running VMS. That’s how I wrote my first pieces of software for Psion. A short while afterwards, we started using PCs. I remember that the first PC I used had a 20MB hard disk. I also remember being astonished to find that a colleague had a hard disk that was twice as large. What on earth does he do with all that disk space, I wondered. But before long, I had a new PC with a larger hard disk. And then, later, another new one. And so on, throughout my 20+ year career in Psion and Symbian. Each time a new PC arrived, I felt somewhat embarrassed at the apparent excess of computing power it provided – larger disk space, more RAM memory, faster CPU clock speed, etc. On leaving Symbian in October 2009, I bought a new laptop for myself, along with an external USB disk drive. That disk drive was two terabytes in size. For roughly the same amount of money (in real terms) that had purchased 20MB of disk memory in 1989, I could now buy a disk that was 100,000 times larger. That’s broadly equivalent to hard disks doubling in size every 15 months over that 20-year period.

This repeated doubling of performance, on a fairly regular schedule, is a hallmark of what is often called “Moore’s Law”, following a paper published in 1965 by Gordon Moore (subsequently one of the founders of Intel). It’s easy to find other examples of this exponential trend within the computing industry. University of London researcher Shane Legg has published a chart of the increasing power of the world’s fastest supercomputers, from 1960 to the present day, along with a plausible extension to 2020. This chart measures the “FLOPS” capability of each supercomputer – the number of floating point (maths) operations it can execute in a second. The values move all the way from kiloFLOPS through megaFLOPS, gigaFLOPS, teraFLOPS, and petaFLOPS, and point towards exaFLOPS by 2020. Over sixty years, the performance improves through twelve and a half orders of magnitude, which is more than 40 doublings. This time, the doubling period works out at around 17 months.

Radical futurist Ray Kurzweil often uses the following example:

When I was an MIT undergraduate in 1965, we all shared a computer that took up half a building and cost tens of millions of dollars. The computer in my pocket today [a smartphone] is a million times cheaper and a thousand times more powerful. That’s a billion-fold increase in the amount of computation per dollar since I was a student.

A billion-fold increase consists of 30 doublings – which, spread out over 44 years from 1965 to 2009, gives a doubling period of around 18 months. And to get the full picture of the progress, we should include one more observation alongside the million-fold price improvement and thousand-fold processing power improvement: the 2009 smartphone is about one hundred thousand times smaller than the 1965 mainframe.

These steady improvements in computer hardware, spread out over six decades so far, are remarkable, but they’re not the only example of this kind of long-term prodigious increase. Martin Cooper, who has a good claim to be considered the inventor of the mobile phone, has pointed out that the amount of information that can be transmitted over useful radio spectrum has roughly doubled every 30 months since 1897, when Guglielmo Marconi first patented the wireless telegraph:

The rate of improvement in use of the radio spectrum for personal communications has been essentially uniform for 104 years. Further, the cumulative improvement in the effectiveness of personal communications total spectrum utilization has been over a trillion times in the last 90 years, and a million times in the last 45 years.

Smartphones have benefited mightily from both Moore’s Law and Cooper’s Law. Other industries can benefit in a similar way too, to the extent that their progress can be driven by semiconductor-powered information technology, rather than by older branches of technology. As I’ll review in later chapters, there are good reasons to believe that both medicine and energy are on the point of dramatic improvements along these lines. For example, the so-called Carlson curves (named after biologist Rob Carlson) track exponential decreases in the costs of both sequencing (reading) and synthesising (writing) base pairs of DNA. It cost about $10 to sequence a single base pair in 1990, but this had reduced to just 2 cents by 2003 (the date of the completion of the human genome project). That’s 9 doublings in just 13 years – making a doubling period of around 17 months.
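All the doubling periods quoted above – for hard disks, supercomputer FLOPS, Kurzweil’s price-performance figure, and the Carlson curves – come from the same back-of-the-envelope calculation: count the doublings implied by the overall growth factor, then divide the elapsed time by that count. Here’s a minimal sketch in Python (the function name is my own, purely for illustration):

```python
import math

def doubling_period_months(growth_factor, years):
    """Average doubling period, in months, for a quantity that
    grows by growth_factor over the given number of years."""
    doublings = math.log2(growth_factor)   # how many times it doubled
    return years * 12 / doublings

# Hard disks: 100,000-fold growth from 1989 to 2009
print(f"{doubling_period_months(100_000, 20):.1f}")    # → 14.4 (≈ 15 months)

# Supercomputers: 12.5 orders of magnitude over 60 years
print(f"{doubling_period_months(10**12.5, 60):.1f}")   # → 17.3

# Kurzweil: a billion-fold improvement over 44 years
print(f"{doubling_period_months(1e9, 44):.1f}")        # → 17.7 (≈ 18 months)

# Carlson curves: $10 down to 2 cents per base pair, 1990-2003
print(f"{doubling_period_months(10 / 0.02, 13):.1f}")  # → 17.4
```

Each result lands close to the figure quoted in the text, which is reassuring given how rough the input estimates are.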

Moore’s Law and Cooper’s Law are far from being mathematically exact. They should not be mistaken for laws of physics, akin to Newton’s Laws or Maxwell’s Laws. Instead, they are empirical observations, with lots of local deviations when progress temporarily goes either faster or slower than the overall average. Furthermore, scientists and researchers need to keep on investing lots of skill, across changing disciplines, to keep the progress occurring. The explanation given on the website of Martin Cooper’s company, ArrayComm, provides useful insight:

How was this improvement in the effectiveness of personal communication achieved? The technological approaches can be loosely categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increase in magnitude of the usable radio frequency spectrum.

How much of the improvement can be attributed to each of these categories? Of the million times improvement in the last 45 years, roughly 25 times were the result of being able to use more spectrum, 5 times can be attributed to the ability to divide the radio spectrum into narrower slices — frequency division. Modulation techniques like FM, SSB, time division multiplexing, and various approaches to spread spectrum can take credit for another 5 times or so. The remaining sixteen hundred times improvement was the result of confining the area used for individual conversations to smaller and smaller areas — what we call spectrum re-use…

Cooper suggests that his law can continue to hold until around 2050. Experts at Intel say they can foresee techniques to maintain Moore’s Law for at least another ten years – potentially longer. In assessing the wider implications of these laws, we need to consider three questions:

  1. How much technical runway is left in these laws?
  2. Can the benefits of these laws in principle be applied to transform other industries?
  3. Will wider system effects – as discussed earlier in this chapter – frustrate overall progress in these industries (despite the technical possibilities), or will they in due course even accelerate the underlying technical progress?

My answers to these questions:

  1. Plenty
  2. Definitely
  3. It depends on whether we can educate, motivate, and organise a sufficient critical mass of concerned citizens. The race is on!


16 October 2009

Personal announcement: Life beyond Symbian

Filed under: Psion, Symbian, Symbian Foundation — David Wood @ 4:19 pm

I have a personal announcement to make: I’m leaving Symbian.

I’ve greatly enjoyed my work of the last 18 months: helping with the preparations and announcement of the Symbian Foundation, and then serving on its Leadership Team as Catalyst and Futurist.

I’m pleased by how much has been accomplished in a short space of time.  The transition to full open source is well and truly under way.  The extended Symbian community will shortly be gathering to exchange news and views of progress and opportunities at this year’s SEE09 event in Earls Court, London.  It will be a very busy event, full of insight and announcements, with (no doubt) important new ideas being hatched and reviewed.

On a personal note, I’m proud of the results of my own work on the Symbian blog, and in building and extending Symbian engagement in China, culminating in the recent press release marking a shared commitment by China Mobile and Symbian.  I’m also honoured to have been at the core of a dynamic and energetic leadership team, providing advice and support behind the scenes.

In many ways, my time in the Symbian Foundation has been a natural extension of a 20 year career with what we now call Symbian platform software (and its 16-bit predecessor): 10 years with PDA manufacturer Psion followed by 10 years on the Leadership Team of Symbian Ltd, prior to the launch of the Symbian Foundation.  In summary, I’ve spent 21 hectic years envisioning, architecting, implementing, supporting, and avidly using smart mobile devices.  It’s been a fantastic experience.

However, there’s more to life than smart mobile devices.  For a number of years, I’ve been nursing a growing desire to explore alternative career options and future scenarios. The milestone of my 50th birthday a few months back has helped to intensify this desire.

Anyone who has dipped into my personal blog or followed my tweets will have noticed my deep interest in topics such as: the future of energy, accelerated climate change, accelerated artificial intelligence, looming demographic changes and the longevity dividend, life extension and the future of medicine, nanotechnology, smart robotics, abundance vs. scarcity, and the forthcoming dramatic societal and personal impacts of all of these transformations.  In short, I am fascinated by, and concerned about, the breakthrough future of technology, as well as the breakthrough future of smartphones.

It’s time for me to spend a few months investigating if I can beneficially deploy my personal skills in advocacy, analysis, coordination, envisioning, facilitation, and troubleshooting (that is, my skills as a “catalyst and futurist”) in the context of some of these other “future of technology” topics.

I’m keeping an open mind to the outcome of my investigation.  I do believe that I need to step back from employment with the Symbian Foundation in order to give that investigation a proper chance to succeed.  I need to open up time for wide-ranging discussions with numerous interesting individuals and companies, both inside and outside the smartphone industry.  I look forward to finding a new way to balance my passionate support for Symbian and smartphones with my concern for the future of technology.

Over the next few days, I’ll be handing over my current Symbian Foundation responsibilities to colleagues and partners.  I’ll become less active on Symbian blogs, forums, and emails.  For those who wish to bid me “bon voyage”, I’ll be happy to chat over a drink at SEE09 – by which time I will have ceased to be an employee with the Symbian Foundation, and will simply be an enthusiastic supporter and well-wisher.

After I leave Symbian, I’ll still be speaking at conferences from time to time – but no longer as a representative of Symbian.  The good news is that Symbian now possesses a strong range of talented spokespeople who will do a fine job of continuing the open dialog with the wider community.

Many thanks are due to my Symbian Foundation colleagues, especially Executive Director Lee Williams and HR Director Steve Warner, for making this transition as smooth as possible.  It’s been a great privilege to work with this extended team!

To reach me in the future, you can use my new email address, davidw AT deltawisdom DOT com.  My mobile phone number will remain the same as before.

19 June 2008

Seven principles of agile architecture

Filed under: Agile, Symbian — David Wood @ 9:37 pm

Agile software methodologies (associated with names like “Scrum” and “eXtreme Programming”) have historically been primarily adopted within small-team projects. They’ve tended to fare less well on larger projects.

Dean Leffingwell’s book “Scaling Software Agility: Best practices for large enterprises” is the most useful one that I’ve found, on the important topic of how best to apply the deep insights of Agile methodologies in the context of larger development projects. I like the book because it’s clear (easy to read) as well as being profound (well worth reading). I liked the book so much that I invited Dean to come to speak at various training seminars inside Symbian. We’ve learned a great deal from what he’s had to say.

As an active practitioner who carries out regular retrospectives, Dean keeps up a steady stream of new blog articles that capture the evolution of his thinking. Recently, he’s been publishing articles on “Agile architecture”, including a summary article that lists “Seven principles of agile architecture”:

  1. The teams that code the system design the system
  2. Build the simplest architecture that can possibly work
  3. When in doubt, code it out
  4. They build it, they test it
  5. The bigger the system, the longer the runway
  6. System architecture is a role collaboration
  7. There is no monopoly on innovation.

Dean says he’s working on an article that pulls all these ideas together. I’m looking forward to it!

18 June 2008

The dangers of fragmentation

Filed under: fragmentation, Linux, Olswang, Open Source, Symbian — David Wood @ 8:46 am

My comments on mobile Linux fragmentation at the Handsets World event in Berlin were picked up by David Meyer (“Doubts raised over Android fragmentation”) and prompted a response by Andy Rubin, co-founder of Google’s Android team. According to the reports,

On a recent comment by Symbian’s research chief, David Wood, that Android would eventually end up fragmented, Rubin said it’s all part of the open source game.

Raising the example of a carrier traditionally having to wait for a closed platform developer to release the next version of its software to “enable” the carrier to offer new services, Rubin said carriers could just hire a developer internally to speed up that process without waiting any longer.

“If that fragmentation is what [Wood] is talking about, that’s great–let’s do it,” said Rubin.

Assuming these reports are accurate, they fall into the pattern of emphasising the short-term benefits of fragmentation, but de-emphasising the growing downstream compatibility problems of a platform being split into different variants. They make fragmentation sound like fun. But believe me, it’s not!

I noticed the same pattern while watching a panel on Open Source in Mobile at one of the Smartphone Summit events that take place the day before CTIA. The panel moderator posed the question, “Is fragmentation a good or bad thing?” The first few panellists were from consultancies and service providers. “Yes”, they said, smiling, “Fragmentation gives more opportunity for doing things differently – and gives us more work to do.” (I paraphrase, but only slightly.) Then came the turn of a VP from one of the phone manufacturers who have struggled perhaps more than most with the variations and incompatibilities between different mobile Linux software stacks. “Let’s be clear”, came the steely response, “fragmentation is a BAD thing, and we have to solve that problem”.

Luigi Licciardi, EVP of Telecom Italia, made similar remarks at the ‘Open source in Mobile’ conference in Madrid in September 2007. He said that one thing his network operator needs from any software platform it would consider using in its mobile phones is ‘backwards compatibility’ – in other words, a certain level of stability. (This sounds simple, but I know from my own experience that backwards compatibility requires deep skill and expertise in the midst of a rapidly changing marketplace.) Moreover, the software platform has to be responsive to the needs of the individual operators: the operator needs to be able to go and talk to a company and say “give us these changes and modifications”. He also said that the platform needs to be open to applications for network connections and end users, but has to be closed to malware. In other words, it has to have a very good security story. (Incidentally, I believe Symbian uniquely has a very strong security story, with platform security built deep into the operating system.) Finally, he emphasised that “a fragmented Linux is of no interest to operators”.

This topic deserves more attention. Let me share some analysis from a transcript of a talk I gave at the Olswang “Open Source Summit” in London last November:

The point is that there is a great tendency in the mobile phone space for mobile Linux variants to fragment and split. This was first drawn to my attention more than two years ago by Avi Greengart who is a US-based analyst. He said that mobile Linux is the new Unix, meaning that despite the best intentions of all involved, it keeps on going its own separate ways.

So why is that happening? It is happening first of all because fragmentation is easy. This means that you can take the code and do whatever you like with it. But will these changes be brought back inside the main platform? Well I claim that, especially in a fast moving market such as smartphones, integration is hard. The changes tend to be incompatible with each other. Therefore it is my prediction that, on average, mobile Linux will fragment faster than it unifies.

It is true that there are many people who say it is very bad that there are all these different mobile Linux implementations. It is very bad because it has caused great problems for developers: they have to test against so many stacks. These people ask themselves, “Can’t we unify things?” And every few months there is a new group that is formed and says, in effect, “Right, we are going to make a better job of unifying mobile Linux than the last lot, they weren’t doing it fast enough, they weren’t doing it seriously enough, so we are going to change that.” But I see the contrary, that there is a greater tendency to fragment in this space than to unify, and here’s why.

It is always easier and quicker to release a device-specific solution than to invest the extra effort to put that functionality into a reusable platform, and then on into a device. In other words, when you are racing to market, when the market leaders are rushing away from you and leaving more and more clear blue water between you and them, it is much more tempting to say, “well I know we are supposed to be thinking in terms of platform, but just for now I am going to serve the needs of my individual product.”

Interestingly we had the same problem in the early days of Symbian. One of the co-founders of Symbian, Bill Batchelor, coined the phrase “the Symbian Paradox”, which is that we found it hard to put functionality into the platform, rather than just serve it out to eager individual customers via consultancy projects. But we gradually learned how to do that, and we gradually put more and more functionality into the platform, suitable for all customers, and therefore more and more customer projects benefited more widely.

So why is mobile Linux fragmenting in a way that perhaps open source in other areas isn’t fragmenting? First, it is an intense, fast moving industry. Symbian as the market leader, together with our customers, is bringing out significant new functionality every three or four months. So there is no time for other people to take things easy and properly invest in their platforms. They are tempted to cut corners – to the detriment of the platform.

Second, if you look at how some of these consortia are meant to work, they are meant to involve various people contributing source code. If you look at some of their architecture diagrams, you might get one company in, say, Asia, which is contributing one chunk of software that is meant to be used by other companies the world over. Well, guess what happens in a real life project? Another company – let’s say a company trying to ship a Linux-based phone in America – picks up that software, and surprise, surprise, it doesn’t work: it fails to get FCC approval, it doesn’t meet the network operators’ needs, or there are bugs that only show up on the networks in America. So what do they say? They say to the first group (the people out in Asia), “Would you mind stopping what you are doing and coming to fix this? We are desperate for this fix to our software.” The group in Asia say, “Well, we are very sorry, but we are struggling hard, and we are behind as well; we would rather prioritise our own projects, if you don’t mind – shipping our own software, debugging it on different networks”.

At this point you may raise the question: isn’t open source meant to be somewhat magical, in that you can all just look at it and fix it anyway; you don’t need to get the original person to come and fix it? But here we reach the crux of the matter. The problem is there is just too much code. These are vast systems – not just a few hundred lines, or even a few thousand lines of code, but hundreds of thousands or even millions of lines of code in these components, all interfacing together. So somebody looks at the code and they think, “Oh gosh, it is very complicated”, and they look and they look and they look, and eventually they think, “Well, if I change this line of code, it will probably work” – but then without realising it they have broken something else. And the project takes ages and ages to progress.

Compare this to the following scenario: some swimmers are so good they can actually swim across the English Channel – they go all the way from England to France. Suppose they now say, “Yes, I have cracked swimming, so what will I do next? I will swim all the way across the Atlantic – after all, it is just water, and I have already swum the Channel, so what is different about an ocean?” That is the kind of difference between the places where open source has been doing well already and the broader oceans with all the complications of full telephony in smartphones.

So what happens next in this situation? Eventually one company or another may come up with a fix to the defects they faced. But then they try and put it back in the platform, and the first company very typically disagrees, saying “I don’t like what you have done with our code, you have changed it in a very funny way, it isn’t the way we would have done it”. And so the code fragments – one version with the original company, and the other in the new company. That is how it ends up that mobile Linux fragments more than it unifies.

I say this firstly because I have contacts in the industry who lead me to believe that this is what happens. Secondly, we have the same pressures inside Symbian, but we have learned how to cope with them. We often get enhancements coming back from a customer engagement project that at first don’t quite fit into the main OS platform, but we have built up the highly specialist skills needed to do this integration.

As I mentioned, integration is hard. You need a company that is clearly responsible for it, and capable of doing it. This company needs to be independent and trustworthy, motivated not by any kind of ulterior motive but by occupying just one place in the value chain and doing one job only: creating large-scale customer satisfaction through volume sales of the platform.

16 June 2008

Anticipating the next ten years of smartphone innovation

Filed under: Essay contest, Symbian — David Wood @ 5:17 pm

This June, Symbian is celebrating its tenth anniversary. As someone who has been a core member of Symbian’s executive management team throughout these ten roller-coaster years, I’d like to share some of my personal reflections on the remarkable smartphone innovations that have taken place over that time – and, in that light, to consider what the next ten years may bring.

It was on 24 June 1998 that the formation of Symbian was announced to the world. The industry’s leading phone manufacturers were to cooperate to fund further development of the operating system known at the time as EPOC32 (this name dates from the inception of the OS, four years earlier, inside the UK-based PDA manufacturer Psion). The funding would enable the operating system to power numerous diverse models of advanced mobile phones – known, in virtue of their rich programmability, as “smartphones”. The news echoed far and wide. In time, the funding repaid investors handsomely: more than 200 million Symbian-based smartphones have already been sold, earning our customers substantial profits. It’s not just our direct customers that have benefited: a fertile ecosystem of partner companies is sharing in an ongoing technological and market success.

But there have been many road bumps along the way – and many surprises. Perhaps the biggest surprise was the degree of difficulty in actually bringing smartphones to market. We time and again under-estimated the complexity of the entire evolving smartphone software system – mistakenly thinking that it would take only around 12 months for significant new products to mature, whereas in reality the time and effort required were often considerably greater. To our dismay, numerous potential world-beating products were cancelled on account of lengthy gestation periods. Or, when they did reach the market, their window of opportunity had passed, so their sales were disappointing. For each breakthrough Symbian-based phone that set the market alight, there were almost as many others that were shelved, or failed to live up to expectations. For this reason, incidentally, when I see commentators becoming highly excited about the prospects of possible new smartphone operating systems, I prefer to reserve my judgement. I know that, just because an industry giant is behind a new smartphone solution, it does not follow that early expectations will be translated into tangible unit sales. With ever-increasing feature requirements, operator specifications, and usability demands, smartphone software keeps on growing in complexity. It requires tremendous skill to integrate an entire software stack to meet a rapidly evolving target. If you pick a sub-optimal smartphone OS as your starting point, you’ll be storing up more trouble for yourself.

Another surprise was in some of the key characteristics of successful smartphones. In 1998, we failed to anticipate that most mobile phones would eventually contain a high-quality digital camera. It was only after several years that we realised that the “top secret” (and therefore rarely discussed) features of forthcoming products from different customers were actually the same – namely an embedded camera application. More recently, the prevalence of smartphones with embedded GPS chips has also been a happy surprise. Mapping and location services are in the process of transforming mobile phones, today, in a similar way to their earlier transformation by still and then video cameras. This observation strengthens my faith in the critical importance of openness in a smartphone operating system: the task of the OS provider isn’t to impose a single vision about the future of mobile phones, but to enable different industry players to experiment, as easily as possible, with bringing their different visions for mobile phones into reality.

As a measure of the progress with smartphone technology, let’s briefly compare the specs of two devices: the Ericsson R380, which was the first successful Symbian-powered smartphone (on sale from September 2000 – and a technological marvel in its day), and the recent best-seller, Nokia’s N95 8GB:

  • The R380 had a black and white touch screen, whereas the N95 screen has 16 million colours
  • The R380 ran circuit switched data over GSM (2G), whereas the N95 runs over HSDPA (3.5G)
  • The R380 supported WAP browsing, whereas the N95 has full-featured web browsing
  • The R380 had only a small number of built-in applications: PIM, and some utilities and games
  • The N95 includes GPS, Bluetooth, wireless LAN, FM radio, a 5 mega-pixel camera, and a set of built-in applications that’s far too long to list here!

Another telling difference between these two time periods is in the number of Symbian smartphone projects in progress (each with significant resources allocated to them). During the first years of Symbian’s existence, the number of different projects could be counted on the fingers of two hands. In contrast, at the end of March 2008, there were no fewer than 70 distinct smartphone models under development, from all the leading phone manufacturers. That’s a phenomenal pipeline of future market-leading products.

Although smartphones have come a long way in the last ten years, the next ten years are likely to witness even more growth and innovation:

  • Component prices will continue to fall – resulting in smartphones at prices to suit all pockets
  • Quality, performance, and robustness will continue to improve, meaning that the appeal of smartphones extends beyond technology aficionados and early adopters, into the huge mainstream audience of “ordinary users” for whom reliability and usability have pivotal importance
  • Word of mouth will spread the news that phones can have valuable uses other than voice calls and text messages: more and more users are discovering the joys of mobile web interaction, mobile push email, mobile access to personal and business calendars and information, and so on
  • The smartphone ecosystem will continue to devise, develop, and deploy interesting new services for smartphones, addressing all corners of human life and personal need
  • The pipeline of forthcoming new smartphone models will continue to strengthen.

It is no wonder that analysts talk about a time, not so far into the future, when there will be one billion smartphones in use around the world. The software that is at the heart of the majority of these devices will have a good claim to being the most widely used software on the planet. Symbian OS is in pole position to win that race, but of course, nothing can be taken for granted.

Symbian’s understanding of the probable evolution of smartphones over the decade ahead is guided, first and foremost, by the extraordinary insight we gain from the trusted relationships we have built up and nurtured over many years with the visionaries, leaders, gurus, and countless thoughtful foot soldiers in our customer and partner companies. As the history of Symbian has unfolded, these relationships of “customer intimacy” have deepened and flourished: our customers and partners have seen that we treated their insights and ideas with respect and with due confidentiality – and that has prompted them to share even more of their thinking (their hopes and their fears) about the future of smartphones. In turn, this shapes our extensive roadmap of future enhancements to Symbian OS technology.

To provide additional checks on our thinking about future issues and opportunities for smartphones, Symbian is inaugurating an essay contest, which is open to entries from students at universities throughout the world. Up to ten essays will win a prize of £1000 each – essays need to be submitted before the end of September, and winners will be announced at the Symbian Smartphone Show in October. Essays should address the overall theme of “The next wave of smartphone innovation”. For details of how to enter the contest, see http://www.symbian.com/news/essaycontest/.

As a guide for potential entrants, Symbian has announced a set of six research sub-themes, which are also areas that Symbian believes deserve further investigation in universities or other research institutions:

  1. Device evolution / revolution through 2012-2015: The smartphones of the future are likely to be significantly different from those of today. Although today’s smartphones have tremendous capability, those reaching the market in 2012-2015 are likely to put today’s devices into the shade. What clues are there, about the precise characteristics of these devices?
  2. Improved development and delivery methodologies: The dramatically increasing scale and complexity of smartphone development projects mean that these projects tend to become lengthy and difficult – posing significant commercial challenges.
  3. Success factors for mobile applications and mobile operating systems: What are the factors that significantly impact adoption of mobile software? What can be done to address the factors responsible for low adoption?
  4. Possible breakthrough applications and markets: The search for “killer apps” for smartphones continues. Are there substantial new smartphone application markets waiting to be unlocked by new features at the operating system level?
  5. Possible breakthrough technology improvements: Smartphone applications and services depend on underlying technology, which will come under mounting stress due to increased demands from data, processing, throughput, graphics, and so on.
  6. Improved university collaboration methods: What are the most effective and efficient ways for universities and Symbian to work together?

For lists of questions for each of these sub-themes, see www.symbian.com/news/essaycontest/topics/.

The evolution of the “smartphone” concept itself is particularly important. Whereas successful smartphones have mainly been portrayed so far as “phones first” and as “communications-centric devices”, they are nowadays increasingly being appreciated and celebrated for their computer capabilities. Some of our customers have already been emphasising to end users that their latest devices are “multimedia computers” or even instances of “computer 2.0”. Personally I prefer the name “iPC” (short for “inter-personal computers”) as a likely replacement for “smartphone”. Whereas Symbian’s main technology challenges in the last ten years tended to involve telephony protocols, our main technology challenges of the next ten years will tend to involve concepts from contemporary mainstream computing.

The scale of the future opportunity for iPCs dwarfs that for smartphones, just as the scale of the opportunity for smartphones dwarfed that of the original PDAs. But there’s nothing automatic or easy about this. We’ll have to work just as hard and just as smart in the next ten years, to solve some astonishingly difficult problems, as we’ve ever worked in the past. We’ll need all our wisdom and ingenuity to navigate some radical transitions in both market and technology. Here are just some of the ways in which devices of 2018 will differ from those of 2008.

  • From the WWW to the WWC: Nicholas Carr has written one of the great technology books of 2008. It’s called “The big switch: rewiring the world, from Edison to Google”. With good justification, Carr advances the phrase “world wide computer” to describe what the WWW (world wide web) is becoming: a hugely connected source of massive computing power. Terminals – both PCs and iPCs – are increasingly becoming like sockets, which connect into a grid that provides intelligent services as well as rich data. The consequences of this are hard to foretell, but there will be losers as well as winners. The local intelligence on the iPC will act as a smart portal into a much mightier intelligence that lives on the Internet.
  • Harvesting power from the environment: Efficient usage of limited battery power has been a constant hallmark of Symbian software. With ever greater bandwidth and faster processing speeds, the demands on batteries will become even more pressing. Future iPCs might be able to sidestep this challenge by drawing power from their environment. For example, the BBC recently reported how a contraption connected to a person’s knee can generate enough electricity, reasonably unobtrusively, from just one minute of walking, to power a present-day mobile phone for 30 minutes. Ultra-thin nano-materials that convert ambient light into electricity are another possibility.
  • New paradigms of usability: Given ever larger numbers of applications and greater functionality, no system of hierarchical menus is going to be able to provide users with an “intuitive” or “obvious” guide to using the device. It’s like the way the original listing “Jerry’s Guide to the World Wide Web” – which formed a hierarchically organised set of links, known as “Yahoo” – was replaced by search engines as the generally preferred entry point to the ever richer variety of web pages. For this reason, UIs on iPCs look likely to become driven by intelligent front-end search engines, which respond to user queries by offering seamless choices between both offline and online functionality on their devices. Smart search will be supported by smart recommendations.
  • Short-cutting screen and keyboard: Another drawback of present day smartphones is the relatively fiddly nature of screen and keyboard. How much more convenient if the information in the display could somehow be conveyed directly to the biological brain of the user – and likewise if detectors of brain activity could convert thought patterns into instructions transmitted to the iPC. It sounds mind-boggling, and perhaps that’s what it is, in a literal sense. Nano-technology could make this a reality sooner than we imagine.

If some of these thoughts sparked your interest, I suggest that you mark the dates 21-22 October in your diary. That’s when Symbian will bring a host of ecosystem experts together, at the free-to-attend Symbian Smartphone Show, in London. It will be your chance to hear 10 keynote presentations from major industry figures and over 60 seminars led by marketplace experts. You’ll be able to network with over 4000 representatives from major handset vendors, content providers, network operators, and developers. To register, visit smartphoneshow.com. Much of the discussion will focus on the theme, “The next wave of smartphone innovation”. Your contributions will be welcome!

13 June 2008

It was twenty years ago, today

Filed under: Psion, Symbian — David Wood @ 7:34 am

13th June 1988 – twenty years ago today – was the day I started work at Psion. I arrived at the building at 17 Harcourt Street, with its unimpressive facade that led most visitors to wonder whether they had come to the wrong place. When the photo on the left was taken, the premises were used by Symbian, and a “Symbian” sign had been affixed outside. But on my first visits, I noticed no signage at all – although I later discovered the letters of the word “Psion” barely visible in faded yellow paint.

Unimpressive from the outside, the building looked completely different on the inside. Everyone joked about the “tardis effect” – since it seemed impossible for such a small exterior to front a considerably larger interior. In fact, Psion had constructed a set of offices running behind several of the houses in the street – but planning regulations had prevented any change in the house fronts themselves. Apparently the houses were Grade II listed, so their original exteriors could not be altered. Or so the story went.

I worked under the guidance of Richard Harrison and Charles Davies on software to be included in a word processor application on Psion’s forthcoming “Mobile Computer” laptop device. My very first programming task was an optimised Find routine. After two weeks, I caught myself thinking, “Don’t these people realise I’m capable of working harder?” But I soon had more than enough tough software tasks to handle, and I’ve spent the twenty years since very far from a state of boredom. On the contrary, it’s been a roller-coaster adventure.
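(The original Psion routine isn’t public, so purely as an illustration of the kind of “optimised Find” a word processor might use, here is a minimal Horspool-style substring search in C. On a mismatch it consults a precomputed skip table and jumps the search window forward several characters at once, instead of advancing one character at a time. All names and details here are my own assumptions, not Psion’s code.)

```c
#include <stddef.h>
#include <string.h>

/* Horspool substring search: precompute, for each possible byte, how far
   the search window can safely skip when its last byte mismatches. */
static const char *find(const char *text, size_t n,
                        const char *pat, size_t m)
{
    size_t skip[256];

    if (m == 0) return text;     /* empty pattern matches at the start */
    if (m > n)  return NULL;     /* pattern longer than the text */

    /* Default skip: the full pattern length. */
    for (size_t i = 0; i < 256; i++)
        skip[i] = m;

    /* Bytes occurring in the pattern (except its last position) allow
       only a shorter, alignment-preserving skip. */
    for (size_t i = 0; i + 1 < m; i++)
        skip[(unsigned char)pat[i]] = m - 1 - i;

    for (size_t pos = 0; pos + m <= n; ) {
        if (memcmp(text + pos, pat, m) == 0)
            return text + pos;                          /* match found */
        pos += skip[(unsigned char)text[pos + m - 1]];  /* jump ahead */
    }
    return NULL;
}
```

The pay-off over a naive scan is that long stretches of text whose characters don’t appear in the pattern at all are skipped a whole pattern-length at a time.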

Back in 1988, the software development team in Harcourt Street had fewer than 20 people in it. Eight years later, when Psion Software was formed as a separate business unit, there were 88 in the team – which, by that time, also occupied floors in the nearby Sentinel House. Two more years saw the headcount grow to 155 by the time Psion Software turned into Symbian (24 June 1998). Today, our headcount is around 1600. It’s a growth I could not have imagined during my first few years of work. Nor could I have imagined that descendants of the software from the venerable “Mobile Computer” (MC400) would be powering hundreds of millions of smartphones worldwide.

(You can read more about the long and interesting evolution of Psion’s software team, in my book “Symbian for software leaders: principles of successful smartphone development projects“.)

10 June 2008

Symbian Insight

Filed under: insight, Symbian — David Wood @ 5:07 pm

From Sept 2005 to Nov 2006 I wrote 13 articles under the heading “Symbian Insight” which were published on http://www.symbian.com/.

These are still available at http://www.symbian.com/symbianos/insight/index.html.
