dw2

10 May 2015

When the future of smartphones was in doubt

It’s hard to believe it now. But ten years ago, the future of smartphones was in doubt.

At that time, I wrote these words:

Smartphones in 2005 are roughly where the Internet was in 1995. In 1995, there were, worldwide, around 20-40 million users of the Internet. That’s broadly the same number of users of smartphones there are in the world today. In 1995, people were debating the real value of Internet usage. Was it simply an indulgent plaything for highly technical users, or would it have lasting wider attraction? In 2005, there’s a similar debate about smartphones. Will smartphones remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

That was the opening paragraph in an essay which the Internet site Archive.org has preserved. The original location for the essay, the Symbian corporate website, has long since been retired, having been absorbed inside Nokia infrastructure in 2009 (and, perhaps, absorbed in turn into Microsoft in 2014).

Symbian Way Back

The entire essay can be found here, warts and all. That essay was the first in a monthly series known as “David Wood Insight” which extended from September 2005 to September 2006. (The entire set still exists on Archive.org – and, for convenience, I’ve made a copy here.)

Ten years later, it seems to me that wearable computers in 2015 are roughly where smartphones were in 2005 (and where the Internet was in 1995). There’s considerable scepticism about their future. Will they remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

Some commentators look at today’s wearable devices, such as Google Glass and Apple Watch, and express disappointment. There are many ways these devices can be criticised. They lack style. They lack “must have” functionality. Their usability leaves a lot to be desired. Battery life is too short. And so on.

But, like smartphones before them – and like the world-wide web ten years earlier – they’re going to get much, much better as time passes. Positive feedback cycles will ensure that happens.

I share the view of Augmented Reality analyst Ori Inbar, who wrote the following a few months ago in an updated version of his “Smart Glasses Market Report”:

When contemplating the evolution of technology in the context of the evolution of humanity, augmented reality (AR) is inevitable.

Consider the innovation cycles of computing from mainframes, to personal computers, to mobile computing, to wearables: It was driven by our need for computers to get smaller, better, and cheaper. Wearables are exactly that – mini computers on track to shrink and disappear on our bodies. In addition, there is a fundamental human desire for larger and sharper displays – we want to see and feel the world at a deeper level. These two trends will be resolved with Augmented Reality; AR extends our natural senses and will become humans’ primary interface for interaction with the world.

If the adoption curve of mobile phones is to repeat itself with glasses – within 10 years, over 1 billion humans will be “wearing.”

The report is packed with insight – I fully recommend it. For example, here’s Ori’s depiction of four waves of adoption of smart glasses:

Smart Glasses Adoption

(For more info about Augmented Reality and smart glasses, readers may be interested in the forthcoming Augmented World Expo, held 8-10 June at the Santa Clara Convention Center in Silicon Valley.)

What about ten more years into the future?

All being well, here’s what I might be writing some time around 2025, foreseeing the growing adoption of yet another wave of computers.

If 1995-2005 saw the growth of desktop and laptop computers and the world wide web, 2005-2015 saw the growing ubiquity of smartphones, and 2015-2025 will see the triumph of wearable computers and augmented reality, then 2025-2035 is likely to see the increasingly widespread usage of nanobots (nano-computers) that operate inside our bodies.

The focus of computer innovation and usage will move from portables to mobiles to wearables to insideables.

And the killer app of these embedded nanobots will be internal human enhancement:

  • Biological rejuvenation
  • Body and brain repair
  • Body and brain augmentation.

By 2025, these applications will likely be in an early, rudimentary state. They’ll be buggy, irritating, and probably expensive. With some justification, critics will be asking: Will nanobots remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

7 September 2014

Beyond ‘Smartphones and beyond’

You techno-optimists don’t understand how messy real-life projects are. You over-estimate the power of technology, and under-estimate factors such as sociology, psychology, economics, and biology – not to mention the cussed awkwardness of Murphy’s Law.

That’s an example of the kind of retort that has frequently come to my ears in the last few years. I have a lot of sympathy for that retort.

I don’t deny being an optimist about what technology can accomplish. As I see things:

  • Human progress has taken place by the discovery and adoption of engineering solutions – such as fire, the wheel, irrigation, sailing ships, writing, printing, the steam engine, electricity, domestic kitchen appliances, railways and automobiles, computers and the Internet, plastics, vaccinations, anaesthetic, contraception, and better hygiene
  • Forthcoming technological improvements can propel human experience onto an even higher plane – with our minds and bodies both being dramatically enhanced
  • As well as making us stronger and smarter, new technology can help us become kinder, more collaborative, more patient, more empathetic, less parochial, and more aware of our cognitive biases and blindspots.

But equally, I see lots of examples of technology failing to live up to the expectations of techno-optimists. It’s not just that technology is a two-edged sword, and can scar as well as salve. And it’s not just that technology is often mis-employed in search of a “techno-solution” when a piece of good old-fashioned common sense could result in a better approach. It’s that new technologies – whether ideas for new medical cures, new sustainable energy sources, improved AI algorithms, and so on – often take considerably longer than expected to create useful products. Moreover, these products often have weaker features or poorer quality than anticipated.

Here’s an example of technology slowdown. A 2012 article in Nature coined the clever term “Eroom’s Law” to describe a steady decline in productivity of R&D research in new drug discovery:

Diagnosing the decline in pharmaceutical R&D efficiency

Jack W. Scannell, Alex Blanckley, Helen Boldon & Brian Warrington

The past 60 years have seen huge advances in many of the scientific, technological and managerial factors that should tend to raise the efficiency of commercial drug research and development (R&D). Yet the number of new drugs approved per billion US dollars spent on R&D has halved roughly every 9 years since 1950, falling around 80-fold in inflation-adjusted terms.

In other words, although the better-known Moore’s Law describes a relatively steady increase in computational power, Eroom’s Law describes a relatively steady decrease in the effectiveness of research and development within the pharmaceutical industry. By the way, Eroom isn’t a person: it’s Moore spelt backwards.
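The arithmetic behind Eroom’s Law is simple to sketch. The following snippet (an illustrative model of the halving pattern quoted above, not a reproduction of the Nature paper’s data) shows how a 9-year halving period compounds into roughly the 80-fold decline the authors report:

```python
# Illustrative model of Eroom's Law: R&D efficiency (new drugs approved
# per billion dollars) halving roughly every 9 years.
def erooms_law(years_elapsed, halving_period=9.0):
    """Relative R&D efficiency after `years_elapsed` years, starting from 1.0."""
    return 0.5 ** (years_elapsed / halving_period)

# One halving period cuts efficiency exactly in half.
assert erooms_law(9) == 0.5

# Compounded over most of the 60-year period the paper covers,
# the decline reaches roughly 80-fold.
decline = 1.0 / erooms_law(57)
print(round(decline))  # roughly 80
```

The point of the sketch is that a modest-sounding halving period, compounded over decades, produces an enormous cumulative decline.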

The statistics are bleak, as can be seen in a graph from Derek Lowe’s In the pipeline blog:

R&D trend

But despite this dismal trend, I still see plenty of reason for measured optimism about the future of technology. That’s despite the messiness of real-world projects, out-dated regulatory and testing systems, perverse incentive schemes, institutional lethargy, and inadequate legacy platforms.

This measured optimism comes to the surface in the later stages of the book I have just e-published, at the end of a two-year period of writing it. The book is entitled Smartphones and beyond: lessons from the remarkable rise and fall of Symbian.

As I write in the opening chapter of that book (an excerpt is available online):

The story of the evolution of smartphones is fascinating in its own right – for its rich set of characters, and for its colourful set of triumphs and disasters. But the story has wider implications. Many important lessons can be drawn from careful review of the successes and, yes, the failures of the smartphone industry.

When it comes to the development of modern technology, things are rarely as simple as they first appear. Some companies bring great products to the market, true. These companies are widely lauded. But the surface story of winners and losers can conceal many twists and turns of fortune. Behind an apparent sudden spurt of widespread popularity, there frequently lies a long gestation period. The eventual blaze of success was preceded by the faltering efforts of many pioneers who carved new paths into uncertain terrain. The steps and missteps of these near-forgotten pioneers laid the foundation for what was to follow.

So it was for smartphones. It is likely to be the same with many of the other breakthrough technologies that have the potential to radically transform human experience in the decades ahead. They are experiencing their missteps too.

I write this book as an ardent fan of the latent power of modern technology. I’ve seen smartphone technology playing vital roles in the positive transformation of human experience, all over the world. I expect other technologies to play even more radical roles in the near future – technologies such as wearable computing, 3D printing, synthetic biology, nanotechnology, neuro-enhancement, rejuvenation biotech, artificial intelligence, and next generation robotics. But, as with smartphones, there are likely to be many disappointments en route to eventual success. Indeed, even the “eventual success” cannot be taken for granted.

General principles about the progress of complex technology emerge from reflecting on the details of actual examples. These details – the “warts and all”, to use the phrase attributed to Oliver Cromwell – can confound naive notions as to how complex technology should be developed and applied. As I’ll show from specific examples in the chapters ahead, the details show that failure and success often co-exist closely within the same project. A single project often contains multiple layers, belonging to numerous different chains of cause and effect.

It is my sincere hope that an appreciation of real-world examples of these multiple layers of smartphone development projects will enable a better understanding of how to guide the future evolution of other forms of smart technology. I’ll describe what I call “the core smartphone skillset”, comprising excellence in the three dimensions of “platforms”, “marketing”, and “execution”. To my mind, these are the key enablers of complex technological progress. These enablers have a critical role to play for smartphones, and beyond. Put together well, these enablers can climb mountains.

I see the core smartphone skillset as having strong applicability in wider technological areas. That skillset provides the basis for overcoming the various forms of inertia which are holding back the creation of important new solutions from emerging technologies. The existence of that skillset underlies my measured optimism in the future.

But there’s nothing inevitable about how things will turn out. The future holds many potential scenarios, with varying degrees of upside and downside. The question of which scenarios will become actual, depends on inspired human vision, choice, action, and follow-through. Fortune sometimes hinges on the smallest of root causes. Effects can then cascade.

Hits and misses

As well as the description of the “core smartphone skillset” – which I see as having strong applicability in wider technological areas – the book contains my thoughts on the things that Symbian did particularly well over the years, resulting in it becoming the leading smartphone operating system for many years in the first decade of this century:

  1. Investors and supporters who were prepared to take a long-term view of their investments
  2. Regular deliveries against an incremental roadmap
  3. Regularly taking the time to improve the architecture of the software and the processes by which it was delivered
  4. High calibre software development personnel
  5. Cleanly executed acquisitions to boost the company’s talent pool
  6. Early and sustained identification of the profound importance of smartphones
  7. Good links with the technology foresight groups and other roadmap planning groups within a range of customers
  8. A product that was open to influence, modification, and customisation by customers
  9. Careful attention given to enabling an ecosystem of partners
  10. An independent commercial basis for the company, rather than it being set up as a toothless “customers’ cooperative”
  11. Enabling competition.

Over the course of that time, Symbian:

  • Opened minds as to what smartphones could accomplish. In particular, people realised that there was much more they could do with mobile phones, beyond making phone calls. This glimpse encouraged other companies to enter this space, with alternative smartphone platforms that achieved, in the end, considerably greater success
  • Developed a highly capable touch UI platform (UIQ), years before Android/iPhone
  • Supported a rich range of different kinds of mobile devices, all running versions of the same underlying software engine; in particular, Symbian supported the S60 family of devices with its ergonomically satisfying one-handed operating mode
  • Achieved early demonstrations of breakthrough capabilities for mobile phones, including streaming multimedia, smooth switching between wifi and cellular networks, maps with GPS updates, the use of a built-in compass and accelerometer, and augmented reality – such as in the 2003 “Mozzies” (“Mosquitos”) game for the Siemens SX1
  • Powered many ground-breaking multimedia smartphones, imaging smartphones, business smartphones, and fashion smartphones
  • Achieved sales of some 500 million units – with the majority being shipped by Nokia, but with 40 million being shipped inside Japan from 2003 onwards, by Fujitsu, Sharp, Mitsubishi, and Sony Ericsson
  • Held together an alliance of competitors, among the set of licensees and partners of Symbian, with the various companies each having the opportunity to benefit from solutions initially developed with some of their competitors in mind
  • Demonstrated that mobile phones could contain many useful third party applications, without at the same time becoming hotbeds of viruses
  • Featured in some of the best-selling mobile phones of all time, up till then, such as the Nokia 5230, which sold 150 million units.

Alongside the list of “greatest hits”, the book also contains a (considerably longer) list of “greatest misses”, “might-have-beens”, and alternative histories. The two lists are distilled from wide-ranging “warts and all” discussions in earlier chapters of the book, featuring many excerpts from my email and other personal archives.

LFS cover v2

To my past and present colleagues from the Symbian journey, I offer my deep thanks for all their contributions to the creation of modern smartphones. I also offer my apologies for cases when my book brings back memories of episodes that some participants might prefer to forget. But Symbian’s story is too important to forget. And although there is much to regret in individual actions, there is much to savour in the overall outcome. We can walk tall.

The bigger picture now is that other emerging technology sectors risk repeating the stumbles of the smartphone industry. Whereas the smartphone industry recovered from its early stumbles, these other industries might not be so fortunate. They may die before they get off the ground. Their potential benefits might remain forever out of grasp, or be sorely delayed.

If the unflattering episodes covered in Smartphones and beyond can help increase the chance of these new technology sectors addressing real human need quickly, safely, and fully, then I believe it will be worth all the embarrassment and discomfort these episodes may cause to Symbian personnel – me included. We should be prepared to learn from one of the mantras of Silicon Valley: “embrace failure”. Reflecting on failure can provide the launchpad for greater future success, whether in smartphones, or beyond.

Early reviewers of the book have commented that it is laden with lessons, from the pioneer phase of the smartphone industry, for the nascent technology sectors where they are working – such as wearable computing, 3D printing, social robots, and rejuvenation biotechnology. The strength of these lessons is that they are presented, in this book, in their multi-dimensional messiness, with overlapping, conflicting chains of cause and effect, rather than as cut-and-dried abstracted principles.

In the pages of Smartphones and beyond, I do choose to highlight some specific learnings from particular episodes of smartphone success or smartphone failure. Some lessons deserve to be shouted out. For other episodes, I leave it to readers to reach their own conclusions. In yet other cases, frankly, it’s still not clear to me what lessons should be drawn. Writers who follow in my tracks will no doubt offer their own suggestions.

My task in all these cases is to catalyse a discussion, by bringing stories to the table that have previously lurked unseen or under-appreciated. My fervent hope is that the discussion will make us all collectively wiser, so that emerging new technology sectors will proceed more quickly to deliver the profound benefits of which they are capable.

Some links

For an extended series of extracts from the different chapters in Smartphones and beyond, see the official website for the book.

The book is available for Kindle download from Amazon: UK site and International (US) site.

  • Note that readers without Kindle devices can read the book on a convenient app on their PC or tablet (or smartphone!) – these apps are freely available.

I haven’t created a hard-copy print version. The book would need to be split into three parts to make it physically convenient. Far better, in my view, to be able to carry the book on a light electronic device, with “search” and “bookmark” facilities that very usefully augment the reading experience.

Opportunities to improve

Smartphones and beyond no doubt still contains a host of factual mistakes, errors in judgement, misattributions, personal biases, blind spots, and other shortcomings. All these faults are the responsibility of the author. To suggest changes, either in an updated edition of this book or in some other follow-up project, please get in touch.

Where the book includes copies of excerpts from Internet postings, I have indicated the online location where the original article could be found at the time of writing. In case an article has moved or been deleted since then, it can probably be found again via search engines or the Internet archive, https://archive.org/. If I have inadvertently failed to give due credit to an original writer, or if I have included more text than the owner of the original material wishes, I will make amends in a later edition, upon reasonable request. Quoted information where no source is explicitly indicated is generally taken from copies of my emails, memos in my electronic diary, or other personal archives.

One of the chapters of this book is entitled “Too much openness”. Some readers may feel I have, indeed, been too open with some of the material I have shared. However, this material is generally at least 3-5 years old. Commercial lines of business no longer depend on it remaining secret. So I have applied a historian’s licence. We can all become collectively wiser by discussing it now.

Footnote

Finally, one other apology is due. As I’ve given my attention over the last few months to completing Smartphones and beyond, I’ve deprioritised many other tasks, and have kept colleagues from various important projects waiting for longer than they expected. I can’t promise that I’ll be able to pick up all these other pieces quickly again – that kind of overcommitment is one of the failure modes discussed throughout Smartphones and beyond. But I feel like I’m emerging into a new phase of activity – “Beyond ‘Smartphones and Beyond'”.

To help transition to that new phase, I’ve moved my corporate Delta Wisdom website to a new format (WordPress), and rejigged what had become rather stale content. It’s time for profound change.

6 September 2014

Smartphones and the mass market: the view from 2005

Filed under: insight, openness, smartphones, Symbian — David Wood @ 7:07 am

The following article was originally published in the “David Wood Insight” series on Symbian’s corporate website, on 11 Sept 2005 (the first article in that series). I’m re-posting it here now since:

  • It’s one of a number of pages in an old website of mine that I am about to retire – so the article needs a new home
  • The message is aligned with many that are included in my book “Smartphones and beyond” that was published earlier this week.

Smartphones and the mass market

Smartphones in 2005 are roughly where the Internet was in 1995. In 1995, there were, worldwide, around 20-40 million users of the Internet. That’s broadly the same number of users of smartphones there are in the world today. In 1995, people were debating the real value of Internet usage. Was it simply an indulgent plaything for highly technical users, or would it have lasting wider attraction? In 2005, there’s a similar debate about smartphones. Will smartphones remain the preserve of a minority of users, or will they demonstrate mass-market appeal?

Personally, I have no doubt as to the answer. Smartphones are for all. Smartphones – the rapidly emerging new category of advanced computer-based programmable mobile phones – will appeal to all users of mobile phones worldwide. Smartphones are built from highly advanced technology, but they won’t require a highly advanced understanding of technology in order to use them. You won’t need to be a computer whiz kid or the neighbourhood geek to get real value from a smartphone. Nor will you need a huge income to afford one. Smartphones will help us all to keep in better touch with the friends and colleagues and information and discussions and buzz that are important to us, and they are opening up new avenues for entertainment, education, and enterprise alike. Smartphones will help us all to work hard and play hard. And in line with their name, smartphones will also help us to work smart and play smart.

Smartphones differ from ordinary mobile phones in two fundamental ways: how they are built, and what they can do. The way they’re built – using open systems to take advantage of the skills, energy, and innovation of numerous companies from a vast range of industries – means that smartphones extend the phenomenal track record of mobile phones by improving constantly and rapidly, year by year. As for what they can do – in line with the “phone” part of their name, smartphones provide all the capabilities of ordinary mobile phones, in a particularly user-friendly style – but that’s only the start. In addition, smartphones increasingly use their computer-brains and network-connectivity to:

  • Excel at all sorts of communication – instant messaging, email, video conferencing, and more
  • Help us to organise our to-do lists, ideas, calendars, contacts, expenses, and finances
  • Boost our effectiveness in our business life – connecting us smoothly into corporate data systems
  • Entertain us with huge libraries of first-rate music, mobile TV, social networking, and games
  • Guide us around the real world, with maps and location-based services, so we never get lost again
  • Subsume our keys, ID cards, tickets, and wallets – so we can leave these old-world items at home
  • Connect us into online information banks covering every topic under the sun.

In short, smartphones are rapidly becoming our preferred mobile gateway into the ever growing, ever more important digital universe.

In 1995, some people wondered if the Internet would ever really be “useful” (as opposed to a passing fad). Today, you may wonder if mobile access to the Internet will ever really be useful. But if you look at what smartphone users are already able to do, you’ll soon see the benefit. If it’s valuable to you to be able to access bbcnews.com or amazon.com or ebay.com or betfair.com or imdb.com or google.com or wikipedia.org (etc) from your desktop PC, you’ll often find it equally valuable to check these sites when you’re away from your desktop. Because you’ll be carrying your smartphone with you, almost everywhere you go, you’ll have the option to keep in touch with your digital universe, whenever it suits you.

Crucial to this increase in value is the steady set of remarkable improvements that have taken place for both output and input mechanisms on smartphones. Screens have become clearer, larger, sharper, and more colourful. Intelligent handwriting recognition systems, word-completion systems, multi-way jog-dials, Bluetooth keyboards, and ingenious folding and twisting mechanisms, mean that it’s easier than ever before to enter data into smartphones. And faster networks, more powerful on-board processors, and more sophisticated software, mean that “www” on a smartphone no longer means “world wide wait” but rather “world wide wow”.

In parallel, costs are dipping, further and further. In part, this is due to Moore’s Law, which summarises the steady technological improvements in the design and manufacture of integrated circuits and memory chips. But in large part, it’s also due to the dramatic “learning effects” which can take place when world-class companies go through several rounds of finding better and better ways to manufacture their smartphone products. In turn, the magnitude of these “learning effects” depends on the open nature of the smartphone industry. Here, the word “open” has the following meanings:

  • Programmable: the intelligence and power that is in a smartphone can be adapted, extended, and enhanced by add-on applications and services, which tap into the underlying richness of the phone to produce powerful new functionality
  • Interchangeable: services that are designed for use in one smartphone can be deployed on other smartphones as well, from different manufacturers (despite the differences between these smartphones), with minimal (often zero) changes; very importantly, this provides a better incentive to companies to invest the effort to create these new services
  • Collaborative: the process of creating and evolving smartphone products benefits from the input and ideas of numerous companies and individuals; for example, the manufacturers of the second generation of a given smartphone can build in some of the unexpectedly successful applications that were designed by previously unknown companies as add-ons to the first generation of that smartphone
  • Open-minded: the companies who create smartphones have their own clear ideas about how smartphones should operate and what they should contain, but newcomers have ample means and encouragement to introduce different concepts – the industry is ready to accept new ideas
  • Free-flowing: the success of a company in the smartphone industry is substantially determined by its skills with innovation, technology, marketing, and operations, rather than any restrictive contractual lock-ins or accidents of location or history.

In all these cases, the opposite to “open” is “closed”. More specifically, the opposite of the successful smartphone industry would see:

  • Fixed functionality, that changes only slowly and/or superficially
  • Non-standard add-ons, that are each restricted to a small subset of phones
  • Overly competitive companies, whose fierce squabbles would destroy the emerging market before it has time to take root
  • Closed-minded companies who are misguidedly convinced that they have some kind of divine right to act as “benign dictators” for the sake of the industry
  • Bottlenecks and chokes that strangle or restrict innovation.

Foreseeing the risks of a closed approach to smartphone development, the mobile phone industry came together to create Symbian, seven years ago. The name “Symbian” is derived from the biological term “symbiosis”, emphasising the positive aspects of collaboration. Symbian’s motto is “cooperate before competing”. It’s no surprise that the vast majority of today’s smartphones utilise Symbian OS.

The volumes of smartphones in circulation are already large enough to trigger a tipping point – more and more industry players, across diverse fields, are choosing Symbian OS to deploy their new solutions. And at the same time as manufacturers are learning how to provide smartphone solutions ever more affordably, users are learning (and then sharing) surprising new ways they can take advantage of the inner capability and richness of their smartphones. It’s a powerful virtuous cycle. That’s the reason why each new generation of smartphone product has a wider appeal.

Footnote (2014): The site http://www.symbian.com has long since been decommissioned, but some of its content can be retrieved from archive.org. After some sleuthing, I tracked down a copy of the above article here.

27 September 2013

Technology for improved collaborative intelligence

Filed under: collaboration, Hangout On Air, intelligence, Symbian — David Wood @ 1:02 pm

Interested in experiences in using Google Hangout On Air, as a tool to improve collaborative intelligence? Read on.

Google’s Page Rank algorithm. The Wikipedia editing process. Ranking of reviewers on Amazon.com. These are all examples of technology helping to elevate useful information above the cacophony of background noise.

To be clear, in such examples, insight doesn’t just come from technology. It comes from a combination of good tools plus good human judgement – aided by processes that typically evolve over several iterations.
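As one illustration of the first of those examples, the core idea behind PageRank can be sketched in a few lines of power iteration. This is a toy model for illustration only (the graph, damping factor usage, and dangling-page handling here are simplified assumptions, not Google’s production algorithm):

```python
# Toy sketch of the PageRank idea: a page is important if important
# pages link to it. Rank flows around the link graph until it settles.
def pagerank(links, damping=0.85, iterations=50):
    """`links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Each page keeps a small "teleport" share, plus shares
        # received from the pages that link to it.
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if outlinks:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:
                # Dangling page: spread its rank evenly across all pages.
                for target in pages:
                    new_rank[target] += damping * rank[page] / n
        rank = new_rank
    return rank

# Tiny three-page web: A and B both link to C, so C ends up ranked highest.
ranks = pagerank({"A": ["C"], "B": ["C"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # C
```

The same general pattern – a simple mechanical rule, iterated, amplifying the judgements embedded in many individual links or edits – underlies all three examples above.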

For London Futurists, I’m keen to take advantage of technology to accelerate the analysis of radical scenarios for the next 3-40 years. One issue is that the general field of futurism has its own fair share of background noise:

  • Articles that are full of hype or sensationalism
  • Articles motivated by commercial concerns, with questionable factual accuracy
  • Articles intended for entertainment purposes, but which end up overly influencing what people think.

Lots of people like to ramp up the gas while talking about the future, but that doesn’t mean they know what they’re talking about.

I’ve generally been pleased with the quality of discussion in London Futurists real-life meetings, held (for example) in Birkbeck College, Central London. The speaker contributions in these meetings are important, but the audience members collectively raise a lot of good points too. I do my best to ‘referee’ the discussions, so that a range of opinions has a chance to be aired. But there have been three main limitations with these meetups:

  1. Meetings often come to an end well before we’ve got to the bottom of some of the key lines of discussion
  2. The insights from individual meetings can sometimes fail to be taken forward into subsequent meetings – where the audience members are different
  3. Attendance is limited to people who live near to London, and who have no other commitments when the meetup is taking place.

These limitations won’t disappear overnight, but I have plans to address them in stages.

I’ve explained some of my plans in the following video, which is also available at http://londonfuturists.com/2013/08/30/introducing-london-futurists-academy/.

As the video says, I want to be able to take advantage of the same kind of positive feedback cycles that have accelerated the progress of technology, in order to accelerate in a similar way the generation of reliable insight about the future.

As a practical step, I’m increasingly experimenting with Google Hangouts, as a way to:

  • Involve a wider audience in our discussions
  • Preserve an online record of the discussions
  • Find out, in real-time, which questions the audience collectively believes should be injected into a conversation.

In case it helps others who are also considering the usage of Google Hangouts, here’s what I’ve found out so far.

A Hangout is a multi-person video conference call. Participants have to log in via one of their Google accounts. They also have to download an app, inside Google Plus, before they can take part in the Hangout. Google Plus will prompt them to download the app.

The Hangout system comes with its own set of plug-in apps. For example, participants can share their screens, which is a handy way of showing some PowerPoint slides that back up a point you are making.

By default, the maximum number of attendees is 10. However, if the person who starts the Hangout has a corporate account with Google (as I have, for my company Delta Wisdom), that number can increase to 15.

For London Futurists meetings, instead of a standard “Hangout”, I’m using “Hangouts On Air” (sometimes abbreviated as ‘HOA’). These are started from within their own section of the Google Plus page:

  • The person starting the call (the “moderator”) creates the session in a “pre-broadcast” state, in which he/she can invite a number of participants
  • At this stage, the YouTube URL where the broadcast will be viewable is generated; this vital piece of information can be published on social networking sites
  • The moderator can also take some other pre-broadcast steps, such as enabling the “Questions” app (further mentioned below)
  • When everyone is ready, the moderator presses the big red “Start broadcast” button
  • A wide audience is now able to watch the panellists’ discussion via the YouTube URL, or on the Google Plus page of the moderator.

For example, there will be a London Futurists HOA this Sunday, starting 7pm UK time. There will be four panellists, plus me. The subject is “Projects to accelerate radical healthy longevity”. The details are here. The event will be visible on my own Google Plus page, https://plus.google.com/104281987519632639471/posts. Note that viewers don’t need to be included in any of the Circles of the moderator.

As the HOA proceeds, viewers typically see the current speaker at the top of the screen, along with the other panellists in smaller windows below. The moderator has the option to temporarily “lock” one of the participants into the top area, so that their screen has prominence at that time, even though other panellists might be speaking.

It’s good practice for panellists to mute their microphones when they’re not speaking. That kind of thing is useful for the panellists to rehearse with the moderator before the call itself (perhaps in a brief preview call several days earlier), in order to debug connectivity issues, the installation of apps, camera positioning, lighting, and so forth. Incidentally, it’s best if there’s a source of lighting in front of the speaker, rather than behind.

How does the audience get to interact with the panellists in real-time? Here’s where things become interesting.

First, anyone watching via YouTube can place text comments under the YouTube window. These comments are visible to the panellists:

  • Either by keeping an eye on the same YouTube window
  • Or, more simply, within the “Comment Tracker” tab of the “Hangout Toolbox” app that is available inside the Hangout window.

However, people viewing the HOA via Google Plus have a different option. Provided the moderator has enabled this feature before the start of the broadcast, viewers will see a big button inviting them to ask a question, in a text box. They will also be able to view the questions that other viewers have submitted, and to give a ‘+1’ thumbs up endorsement.

In real-time, the panellists can see this list of questions appear on their screens, inside the Hangout window, along with an indication of how many ‘+1’ votes each has received. Ideally, this helps the moderator to pick the best question for the panel to address next. It’s a small step in the direction of greater collaborative intelligence.
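The ranking behaviour just described amounts to a simple “drop spam, sort by votes” rule, which can be sketched as follows. The field names and data here are invented for illustration; this is not Google’s actual API:

```python
# A sketch of the kind of ranking the Questions feature performs:
# surface the most-endorsed question so the moderator can pick it next.
# Field names ("text", "votes", "spam") are illustrative, not Google's.

questions = [
    {"text": "When will wearables go mainstream?", "votes": 7},
    {"text": "What about battery life?", "votes": 12},
    {"text": "Is Google Glass a fad?", "votes": 12, "spam": True},
]

def top_questions(questions):
    """Drop anything flagged as spam, then rank by vote count."""
    valid = [q for q in questions if not q.get("spam")]
    return sorted(valid, key=lambda q: q["votes"], reverse=True)

print(top_questions(questions)[0]["text"])  # "What about battery life?"
```

Trivial as it looks, this is the same elevate-the-signal pattern as PageRank and Amazon reviewer rankings: the audience’s collective judgement, not the moderator alone, decides what surfaces.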

At the time of writing, I don’t think there’s an option for viewers to downvote each other’s questions. However, there is an option to declare that a question is spam. I expect the Google team behind HOA will be making further enhancements before long.

This Questions app is itself an example of how the Google HOA technology is improving. The last time I ran a HOA for London Futurists, the Questions app wasn’t available, so we just used the YouTube comments mechanism. One of the panellists for that call, David Orban, suggested I should look into another tool, called Google Moderator, for use on a subsequent occasion. I took a look, and liked what I saw, and my initial announcement of my next HOA (the one happening on Sunday) mentioned that I would be using Google Moderator. However, as I said, technology moves on quickly. Giulio Prisco drew my attention to the recently announced Questions feature of the HOA itself – a feature that had previously been in restricted test usage, but which is now available to all users of HOA. So we’ll be using that instead of Google Moderator (which is a rather old tool, without any direct connection into the Hangout app).

The overall HOA system is still new, and it’s not without its issues. For example, panellists have a lot of different places they might need to look, as the call progresses:

  • The “YouTube comment tracker” screen and the “Questions” screen are mutually exclusive: panellists can have only one of them visible at a time
  • Both of these screens are in turn mutually exclusive with a text chat window which the panellists can use to chat amongst themselves (for example, to coordinate who will speak next) while another panellist is speaking.

A second issue – and the one that currently makes me most apprehensive – is that the system seems to put a lot of load on my laptop whenever I am the moderator of a HOA. I’ve actually seen something similar whenever my laptop is generating video for any long call. The laptop gets hotter and hotter as time progresses, and might even cut out altogether – as happened one hour into the last London Futurists HOA (see the end of this video).

Unfortunately, when the moderator’s PC loses connection to the HOA, the HOA itself seems to shut down (after a short delay, to allow quick reconnections). If this happens again on Sunday, we’ll restart the HOA as soon as possible. The “part two” will be visible on the same Google Plus page, but the corresponding YouTube video will have its own, brand new URL.

Since the last occurrence of my laptop overheating during a video call, I’ve had a new motherboard installed, plus a new hard disk (as the old one was giving some diagnostic errors), and had all the dust cleaned out of my system. I’m keeping my fingers crossed for this Sunday. Technology brings its challenges as well as many opportunities…

Footnote: This threat of over-heating reminds me of a talk I gave on several occasions as long ago as 2006, while at Symbian, about “Horsemen of the apocalypse”, including fire. Here’s a brief extract:

Standing in opposition to the potential for swift continuing increase in mobile technology, however, we face a series of major challenges. I call them “horsemen of the apocalypse”. They include fire, flood, plague, and warfare.

“Fire” is the challenge of coping with the heat generated by batteries running ever faster. Alas, batteries don’t follow Moore’s Law. As users demand more work from their smartphones, their battery lifetimes will tend to plummet. The solution involves close inter-working of new hardware technology (including multi-core processors) and highly sophisticated low-level software. Together, this can reduce the voltage required by the hardware, and the device can avoid catching fire as it performs its incredible calculations…

22 December 2012

Symbian retrospective: hits and misses

Filed under: More Than Smartphones, Nokia, Psion, retrospection, Symbian, Symbian Story — David Wood @ 12:19 pm

As another calendar year draws to a close, it’s timely to reflect on recent “hits” and “misses” – what went well, and what went less well.

In my case, I’m in the midst of a much longer reflection process, surveying not just the past calendar year, but the entire history (and pre-history) of Symbian – the company that played a significant role in kick-starting the smartphone phenomenon, well before anyone had ever heard of “iPhone” or “Android”. I’m channeling my thoughts into a new book that I’m in the midst of writing, “More than smartphones”. The working subtitle is “Learning from Symbian…”

I’ve got no shortage of source material to draw on – including notes in my electronic diary that go all the way back to January 1992. As I note in my current draft of the introductory chapter,

My analysis draws on an extensive set of notes I’ve taken throughout two decades of leadership positions in and around Symbian – including many notes written in the various Psion PDA organisers that have been my constant electronic companions over these years. These Psion devices have been close to my heart, in more than one sense.

Indeed, the story of Symbian is deeply linked with that of Psion, its original parent. Psion and Symbian were both headquartered in London and shared many of the same personnel…

The PDAs that Psion brought to market in the 1980s and 1990s were the mobile game-changers of their day, generating (albeit on a smaller scale) the same kind of industry buzz as would later manifest around new smartphone releases. Psion PDAs were also the precursors for much of the functionality that subsequently re-emerged in smartphones, satellite navigation products, and other smart mobile devices.

My own Psion electronic diary possibly ranks among the longest continuously maintained personal electronic agendas in the world. The oldest entry in it is at 2.30pm on Friday 31st January, 1992. That entry reads “Swedes+Danes Frampton St”. Therein lies a tale.

At that time, Psion’s commercial departments were located in a building in Frampton Street, in central London, roughly midway between the Edgware Road and Maida Vale tube stations. Psion’s technical teams were located in premises in Harcourt Street, about 15 minutes away on foot. In 1992, the Psion Series 3a PDA was in an early stage of development, and I was trialling its new Agenda application – an application whose UI and rich set of views were being built by a team under my direction. In parallel, discussions were proceeding with representatives from several overseas distributors and partners, about the process to create versions of Psion PDAs for different languages: German, French, Italian, Spanish… and Swedish and Danish…

As the person who assembled and integrated all the files for different software versions, I met the leads of the teams doing the various translations. That day, 31st January 1992, more than 20 years ago, was among my first meetings with work professionals from the Nordic countries.

I recall that we discussed features such as keyboards that would cater for the additional characters of the Danish and Swedish alphabets, like ‘å’ and ‘ø’. I had no inkling in 1992 that professionals from Denmark, Sweden, and Finland (including employees of mobile phone juggernauts Ericsson and Nokia) would come to have such a far-reaching influence on the evolution of the software which was at that time being designed for the Series 3a. Nor could I foresee the subsequent 20 year evolution of my electronic agenda file:

  • Through numerous pieces of Series 3a hardware
  • Via the Series 3c successor to the Series 3a, with its incrementally improved hardware and software systems
  • Via a one-time migration process to a new data format, for the 32-bit Series 5, which could cope with much larger applications, and with much larger data files (the Series 3 family used a 16-bit architecture)
  • Into the Series 5mx successor of the Series 5
  • Through numerous pieces of Series 5mx hardware – all of which give (in their “About” screen) 1999 as the year of their creation; when one piece of hardware ceases to work, because, say, of problems with the screen display or the hinge mechanism, I transfer the data onto another in my possession…

Why 1999 is the end of this particular run of changes is a fascinating tale in its own right. It’s just one of many fascinating tales that surround the changing fortunes of the players in the Symbian story…

Step forward from chapter one to the penultimate chapter, “Symbian retrospective”. This is where I’d welcome some extra input from readers of this blog, to complement and refine my own thinking.

This is the first of two retrospective chapters that draw conclusions from the episodes explored in preceding pages. In this chapter, I look at the following questions:

  • Out of all the choices over the years made by the players at the heart of the Symbian world, which ones were the most significant?
  • Of these choices, which were the greatest hits, and which the greatest misses?
  • With the advantage of hindsight, what are the different options that could credibly have been pursued which would have had the greatest impact on Symbian success or failure?

So far, my preliminary outline for that chapter lists a total of twenty hits and misses. Some examples of the hits:

  • Create Symbian with a commercial basis (not a “customers’ cooperative”)
  • Support from multiple key long-term investors (especially Nokia)
  • Enable significant differentiation (including network operator customisation)
  • Focus on performance and stability

And some examples of the misses:

  • Failure to appreciate the importance of the mobile web browser
  • Tolerating Symbian platform fragmentation
  • Failure to provide a CDMA solution
  • Failure to merge Nokia S60 and Symbian

My question for readers of this blogpost is: What would be in your list (say, 1-3 items) of the top hits and misses of decisions made by Symbian?

Footnote: Please excuse any delays in your comments appearing. WordPress may hold them in a queue awaiting my review and approval. But I’m in a part of the world with great natural beauty and solitude, where the tour guides request that we all leave our wireless communication devices behind on the ship when we land for the daily excursions. Normally I would have balked at that very idea, but there are times and places when multi-tasking has to stop!

24 August 2012

Duplication stuplication

Filed under: Accenture, Android, brain simulation, Connectivity, cryonics, death, futurist, Symbian — David Wood @ 12:04 am

I had a mixture of feelings when I looked at the display of the Agenda application on my Samsung Note smartphone:

On the face of things, I was going to be very busy at 09:00 that morning – I had five simultaneous meetings to attend!

But they were all the same meeting. And in fact I had already cancelled that meeting. Or, at least, I had tried to cancel that meeting. I had tried to cancel it several times.

The meeting in question – “TPR” – the Technology Planning Review that I chair from time to time inside Accenture Mobility – is a meeting I had organised, on a regularly repeating basis. This particular entry was set to repeat every four weeks. Some time earlier, I had decided that this meeting no longer needed to happen. From my Outlook Calendar on my laptop, I had pressed the button that, ordinarily, would have sent cancellation messages to all attendees. At first, things seemed to go well – the meeting disappeared from sight in my Outlook calendar.

However, a couple of hours later, I noticed it was still there, or had re-appeared. Without giving the matter much thought, I imagined I must have experienced some finger problem, and I repeated the cancellation process.

Some time later, I glanced at my engagements for that day on my smartphone – and my heart sank. The entry was shown no less than nine times, stacked on top of each other. One, two, three, four, five, six, seven, eight, nine. Woops.

(The screenshot above only shows the entry appearing five times. That’s because I deleted four of the occurrences before I had the presence of mind to record the image for posterity.)

To tell the truth, I also had a wry, knowing smile. It was a kind of “aha, this confirms that synchronising agendas can be hard” smile. “Thank goodness there are duplicate entry bugs on Android phones too!”

That uncharitable thought had its roots in many moments of personal embarrassment over the years, whenever I saw examples of duplicated entries on phones running Symbian OS. The software that synchronised agenda information across more than one device – for example, between a PC and a connected Symbian smartphone – got into a confused state on too many occasions. Symbian software had many strengths, but laser accuracy of agenda entry synchronisation was not one of them.

But in this case, there was no Symbian software involved. The bug – whatever it was – could not be blamed on any software (such as Symbian OS) for which I personally had any responsibility.

Nevertheless, I still felt bad. The meeting entry that I had created, and had broadcast to a wide number of my Accenture Mobility colleagues, was evidently misbehaving on their calendars. I had to answer several emails and instant messaging queries: Is this meeting happening or not?

Worse, the same problem applied to every one of the repeating entries in the series. Entries keep appearing in the calendars of lots of my Accenture colleagues, once every four weeks, encouraging them to show up for a meeting that is no longer taking place.

Whether I tried to cancel all the entries in the series, or just an individual entry, the result was the same. Whether I cancelled them from my smartphone calendar or from Outlook on my laptop, the result was the same. Namely, the entry disappeared for a while, but re-appeared a few hours later.

Today I tried again. Looking ahead to the meeting slot planned for 30th August, I thought I would be smart, and deleted the entry both from my smartphone calendar and from Outlook on my laptop, within a few seconds of each other, just in case a defective synchronisation between the two devices was to blame. You guessed it: the result was the same. (Though this time it was about three hours before the entry re-appeared, and I was starting to think I had cracked it after all.)

So what’s going on? I found a clue in an unexpected place – the email folder of Deleted Items in Microsoft Outlook. This showed an email that was unread, but which had somehow moved directly into the Deleted Items folder, without me seeing it.

The entry read as follows:

Microsoft Outlook on behalf of <one of the meeting participants>

One or more problems with this meeting were detected by Exchange 2010.

This meeting is missing from your calendar. You’re the meeting organizer and some attendees still have the meeting on their calendar.

And just as Outlook had silently moved this email into the Deleted Items folder, without drawing my attention to it, Outlook had also silently reinstated the meeting, in my calendar and (it seems) in everyone else’s calendar, without asking me whether or not that was a good idea. Too darned clever.
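The behaviour has the flavour of a classic problem in replicated data systems: if a deletion is recorded merely as the absence of an entry, any replica that still holds the entry can “win” a merge and resurrect it. The standard remedy is an explicit deletion marker, or “tombstone”. Here’s a minimal sketch of the idea in Python – an illustration of the principle, not a description of how Exchange actually behaves:

```python
# Sketch of tombstone-based deletion in a replicated calendar.
# Without a tombstone, a replica that still holds the entry would
# "win" the merge and resurrect the cancelled meeting.

def merge(replica_a, replica_b):
    """Merge two replicas; a tombstone beats a live entry."""
    merged = {}
    for key in set(replica_a) | set(replica_b):
        entries = [r[key] for r in (replica_a, replica_b) if key in r]
        # If any replica recorded the deletion, the entry stays deleted
        if any(e.get("deleted") for e in entries):
            merged[key] = {"deleted": True}
        else:
            merged[key] = entries[0]
    return merged

laptop = {"TPR": {"deleted": True}}   # meeting cancelled on this replica
server = {"TPR": {"time": "09:00"}}   # this replica still holds the entry

print(merge(laptop, server))  # the cancellation survives the merge
```

With a scheme like this, a cancellation propagates instead of being overwritten by whichever replica last saw the live entry.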

I still don’t know how to fix this problem. I half-suspect there’s been some kind of database corruption problem – perhaps caused by Microsoft Exchange being caught out by:

  • Very heavy usage from large numbers of employees (100s of 1000s) within one company
  • Changes in policy for how online meetings are defined and operated, in between when the meeting was first created, and when it was due to take place
  • The weird weather we’ve experienced in London this summer
  • Some other who-knows-what strange environmental race conditions.

However, I hear tales of other colleagues experiencing similar issues with repeating entries they’ve created, which suggests a concrete software defect rather than a random act of the universe.

Other synchronisation problems

As I said, when I reflected on what was happening, I had a wry smile. Synchronisation of complex data across different replicas is hard, when the data can be altered in more than one place at the same time.

Indeed, it’s sometimes a really hard problem for software to know when to merge apparent duplicates together, and when to leave them separated. I’m reminded of that fact whenever I do a search in the Contacts application on my Android phone. It often lists multiple entries corresponding to a single person. Some of these entries show pictures, but others don’t. At first, I wasn’t sure why there were multiple entries. But closer inspection showed that some details came from my Google mail archives, some from my collection of LinkedIn connections, some from my set of Facebook Friends, and so on. Should the smartphone simply automatically merge all these instances together? Not necessarily. It’s sometimes not clear whether the entries refer to the same person, or to two people with similar names.
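To see why automatic merging is risky, here’s a toy matcher in Python. The rule it encodes – merge on a shared email address, never on a name alone – is my own illustration of the dilemma, not what Android actually does, and the field names are invented:

```python
# A sketch of the duplicate-detection dilemma for contacts drawn from
# several sources (mail, LinkedIn, Facebook). Matching on a shared
# email address is strong evidence; a name match alone is not, because
# two different people can share a name. Field names are illustrative.

def should_merge(a, b):
    """Merge only on strong evidence: a shared email address."""
    shared_email = set(a.get("emails", [])) & set(b.get("emails", []))
    if shared_email:
        return True
    # Same name alone is NOT enough - could be two different people
    return False

gmail = {"name": "John Smith", "emails": ["js@example.com"]}
linkedin = {"name": "John Smith",
            "emails": ["js@example.com", "john@corp.example"]}
facebook = {"name": "John Smith", "emails": ["other.john@example.org"]}

print(should_merge(gmail, linkedin))   # True  - shared address
print(should_merge(gmail, facebook))   # False - name match only
```

A real contacts application juggles many more signals than this, which is precisely why it sometimes shows you several unmerged entries rather than guessing wrongly.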

If that’s a comparatively simple example, let me finish with an example that takes things further afield. It’s not about the duplication and potential re-integration of agenda entries. Nor is it about the duplication and potential re-integration of pieces of contacts info. It’s about the duplication and potential re-integration of human minds.

Yes: the duplication and potential re-integration of human minds.

That’s a topic that came up in a presentation in the World Future 2012 conference I attended in Toronto at the end of July.

The talk was given by John M. Smart, founder and president of the Acceleration Studies Foundation. The conference brochure described the talk as follows:

Chemical Brain Preservation: How to Live “Forever”

About 57 million unique and precious human beings die every year, or 155,000 people every day. The memories and identities in their brains are permanently lost at present, but may not be in the near future.

Chemical brain preservation is a technique that many scientists believe may inexpensively preserve our memories and identity when we die, eventually for less than $10,000 per person in the developed world, and less than $3,000 per person in the developing world. Preserved brains can be stored at room temperature in cemeteries, in contract storage, or even in private homes. Our organization, the Brain Preservation Foundation (brainpreservation.org), is offering a $100,000 prize to the first scientific team to demonstrate that the entire synaptic connectivity of mammalian brains, where neuroscientists believe our memories and identities reside, can be perfectly preserved using these low-cost chemical techniques.

There is growing evidence that chemically preserved brains can be “read” in the future, like a computer hard drive, so that memories, and even the complete identities of the preserved individuals can be restored, using low-cost automated techniques. Amazingly, given the accelerating rate of technological advance, a person whose brain is preserved in 2020 might “return” to the world, most likely in a computer form, as early as 2060, while their loved ones and some of their friends are still alive…

Note: this idea is different from cryonics. Cryonics also involves attempted brain preservation, at an ultra-low temperature, but with a view to re-animating the brain some time in the future, once medical science has advanced enough to repair whatever damage brought the person to the point of death. (Anyone serious about finding out more about cryonics might be interested in attending the forthcoming Alcor-40 conference, in October; this conference marks the 40th anniversary of the founding of the most famous cryonics organisation.)

In contrast, the Brain Preservation Foundation talks about reading the contents of a brain (in the future), and copying that information into a computer, where the person can be re-started. The process of reading the info from the brain is very likely to destroy the brain itself.

There are several very large questions here:

  • Could the data of a brain be read with sufficient level of detail, and recreated in another substrate?
  • Once recreated, could that copy of the brain be coaxed into consciousness?
  • Even if that brain would appear to remember all my experiences, and assert that it is me, would it be any less of a preservation of me than in the case of cryonics itself (assuming that cryonics re-animation could work)?
  • Given a choice between the two means of potential resurrection, which should people choose?

The first two of these questions are scientific, whereas the latter two appear to veer into metaphysics. But for what it’s worth, I would choose the cryonics option.

My concern about the whole program of “brain copying” is triggered when I wonder:

  • What happens if multiple copies of a mind are created? After all, once one copy exists in software, it’s probably just as easy to create many copies.
  • If these copies all get re-animated, are they all the same person?
  • Imagine how one of these copies would feel if told “We’re going to switch you off now, since you are only a redundant back-up; don’t worry, the other copies will be you too”

During the discussion in the meeting in Toronto, John Smart talked about the option to re-integrate different copies of a single mind, resulting in a whole that is somehow better than each individual copy. It sounds an attractive idea in principle. But when I consider the practical difficulties in re-integrating duplicated agenda entries, a wry, uneasy smile comes to my lips. Re-integrating complex minds will be a zillion times more complicated. That project could be the most interesting software development project ever.

9 April 2012

Six weeks without Symbian

Filed under: Accenture, Android, Apple, applications, Psion, Samsung, smartphones, Symbian, UIQ — David Wood @ 10:58 am

It’s only six weeks, but in some ways, it feels like six months. That’s how much time has passed since I’ve used a Symbian phone.

These six weeks separate me from nearly thirteen years of reliance on a long series of different Symbian phones. It was mid-1999 when prototype Ericsson R380 smartphones became stable enough for me to start using as my regular mobile phone. Since then, I’ve been carrying Symbian-powered smartphones with me at all times. That’s thirteen years of close interaction with various Symbian-powered devices from Nokia, Ericsson (subsequently Sony Ericsson), and Samsung – interspersed with shorter periods of using Symbian-powered devices from Panasonic, Siemens, Fujitsu, Sendo, Motorola, and LG.

On occasion over these years, I experimented with devices running other operating systems, but my current Symbian device was never far away, and remained my primary personal communication device. These non-Symbian devices always left me feeling underwhelmed – too much functionality was missing, or was served up in what seemed sub-optimal ways, compared to what I had learned to expect.

But ahead of this year’s Mobile World Congress in Barcelona, held 27th Feb to 1st Mar, I found three reasons to gain a greater degree of first-hand experience with Android:

  1. I would be meeting representatives of various companies who were conducting significant development projects using Android, and I wished to speak from “practical knowledge” rather than simply from “book knowledge”
  2. Some of my colleagues from Accenture had developed apps for Android devices that I wanted to be able to demonstrate with confidence, based on my own recurring experience of these apps
  3. One particular Android device – the Samsung Galaxy Note – seemed to me to have the potential to define a disruptive new category of mobile usage, midway between normal smartphones and tablets, with its radically large (5.3″) screen, contained in a device still light enough and small enough to be easily portable in my shirt-top pocket.

I was initially wary about text entry on the Galaxy Note. My previous encounters with Android devices had always left me frustrated when trying to enter data, without the benefits of a QWERTY keyboard (as on my long-favourite Nokia E6 range of devices), or fluid hand-writing recognition (as on the Sony Ericsson P800/P900/P910).

But in the course of a single day, three separate people independently recommended that I look at the SwiftKey text entry add-on for Android. SwiftKey takes advantage of both context and personal history to predict what the user is likely to be typing into a given window on the device. See this BBC News interview and video for a good flavour of what SwiftKey provides. I installed it and have been using it non-stop ever since.

With each passing day, I continue to enjoy using the Galaxy Note, and to benefit from the wide ecosystem of companies who create applications for Android.

Here’s some of what I really like about the device:

  • The huge screen adds to the pleasure of browsing maps (including “street view”), web pages, and other graphic, video, or textual content
  • Time and again, there are Android apps available that tailor the mobile user experience more closely than web-browsing alone can achieve – see some examples on the adjacent screenshot
  • These apps are easy to find, easy to install, and (in general) easy to use
  • Integration with Google services (Mail, Maps, etc) is impressive
  • I’ve grown to appreciate the notification system, the ubiquitous “back” button, and the easy configurability of the device.

On the other hand, I’m still finding lots of niggles, in comparison with devices I’ve used previously:

  • It’s hard to be sure, but it seems likely to me that I get a working network connection on the device less often than on previous (e.g. Nokia) devices. This means for example that, when people try to ring me, it goes through to my voice mail more often than before, even though my phone appears (to my eyes) to be working. I’m finding that I reboot this device more often than previous devices, to re-establish a working network connection
  • I frequently press the “back” button by accident, losing my current context, for example when turning the phone from portrait to landscape; in those moments, I often silently bemoan the lack of a “forward” button
  • The device is not quite capable of one-handed use – that’s probably an inevitable consequence of having such a large screen
  • Although integration with Google services is excellent, integration with Outlook leaves more to be desired – particularly interaction with email notifications of calendar invites. For example, I haven’t found a way of accepting a “this meeting has been cancelled” notification (in a way that removes the entry from my calendar), nor of sending a short note explaining my reason for declining a given meeting invite, along with the decline notification, etc
  • I haven’t gone a single day without needing to recharge the device part-way through. This no doubt reflects my heavy use of the device. It may also reflect my continuing use of the standard Android web browser, whereas on Symbian devices I always quickly switched to using the Opera browser, with its much reduced data transfer protocols (and swifter screen refreshes)
  • Downloaded apps don’t always work as expected – perhaps reflecting the diversity of Android devices, which developers often cite as a cause of extra difficulty in their work.

Perhaps what’s most interesting to me is that I keep on enjoying the device despite all these niggles. I reason to myself that no device is perfect, and that several of the issues I’ve experienced are problems of success rather than problems of failure. And I continue to take pleasure in interacting with the device.

This form factor will surely become more and more significant. Up till now, Android has made little market headway with larger tablets, as reported recently by PC World:

Corporations planning tablet purchases next quarter overwhelmingly voted for Apple’s iPad, a research firm said Tuesday [13th March]

Of the 1,000 business IT buyers surveyed last month by ChangeWave Research who said they would purchase tablets for their firms in the coming quarter, 84% named the iPad as an intended selection.

That number was more than ten times the nearest competitor and was a record for Apple.

However, Samsung’s success with the “phablet” form factor (5 million units sold in less than two months) has the potential to redraw the market landscape again. Just as the iPad has impacted people’s use of laptops (something I see every day in my own household), the Galaxy Note and other phablets have the potential to impact people’s use of iPads – and perhaps lots more besides.

Footnote 1: The Galaxy Note is designed for use with an “S Pen” stylus, as well as by finger. I’ve still to explore the benefits of this stylus.

Footnote 2: Although I no longer carry a Symbian smartphone with me, I’m still utterly reliant on my Psion Series 5mx PDA, which runs the EPOC Release 5 precursor to Symbian OS. I use it all the time as my primary Agenda, To-do list, and repository of numerous personal word documents and spreadsheets. It also wakens me up every morning.

Footnote 3: If I put on some rose-tinted glasses, I can see the Samsung Galaxy Note as the fulfilment of the design vision behind the original “UIQ” device family reference design (DFRD) from the early days at Symbian. UIQ was initially targeted (1997-1999, when it was still called “Quartz”) at devices of broadly the same size as today’s Galaxy Note. The idea received lots of ridicule – “who’s going to buy a device as big as that?” – so UIQ morphed into “slim UIQ” that instead targeted devices like the Sony Ericsson P800 mentioned above. Like many a great design vision, UIQ can perhaps be described as “years ahead of its time”.

10 October 2010

The 10 10 10 vision

Filed under: BHAG, leadership, Symbian, vision — David Wood @ 10:19 am

The phrase “10 10 10” first entered my life at a Symbian Leadership Team offsite, held in Tylney Hall in Hampshire, in early January 2007.  We were looking for a memorable new target for Symbian.

A few months earlier, in November 2006, cumulative sales of Symbian-powered phones had passed the milestone of 100 million units, and quarterly sales were continuing to grow steadily.  It was therefore a reasonable (but still bold) extrapolation for Nigel Clifford, Symbian’s CEO, to predict:

The first 100 million took 8 years [from Symbian’s founding, in June 1998],  the next 100 million will take under 80 weeks

That forecast was shared with all Symbian employees later in the month, as we gathered in London’s Old Billingsgate Hall for the annual Kick Off event.  Nigel’s kick off speech also outlined the broader vision adopted by the Leadership Team at the offsite:

By 2010 we want to be shipping 10 million Symbian devices per month

If we do that we will be in 1 in 10 mobile phones shipping across the planet

So … 10 10 10

Fast forward nearly four years to the 10th of October, 2010 – to 10/10/10.  As I write these words at around 10 minutes past 10 o’clock, how did that vision turn out?

According to Canalys figures reported by the BBC, just over 27 million Symbian-powered devices were sold during Q2 2010:

Worldwide smartphone market

| OS        | Q2 2010 shipments | % share | Q2 2009 shipments | % share | Growth (%) |
|-----------|-------------------|---------|-------------------|---------|------------|
| Symbian   | 27,129,340        | 43.5    | 19,178,910        | 50.3    | 41.5       |
| RIM       | 11,248,830        | 18.0    | 7,975,950         | 20.9    | 41.0       |
| Android   | 10,689,290        | 17.1    | 1,084,240         | 2.8     | 885.9      |
| Apple     | 8,411,910         | 13.5    | 5,211,560         | 13.7    | 61.4       |
| Microsoft | 3,083,060         | 4.9     | 3,431,380         | 9.0     | -10.2      |
| Others    | 1,851,830         | 3.0     | 1,244,620         | 3.3     | 48.8       |
| Total     | 62,414,260        | 100     | 38,126,660        | 100     | 63.3       |

Dividing by three, that makes just over 9 million units per month in Q2, which is marginally short of this part of the target.
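That arithmetic can be sanity-checked in a few lines (the figures are taken from the Canalys table above; the script itself is only an illustration):

```python
# Check the "10 10 10" arithmetic against the Canalys Q2 2010 figures.
symbian_q2_2010 = 27_129_340   # Symbian shipments in Q2 2010
months_per_quarter = 3
target_per_month = 10_000_000  # the "10 million devices per month" goal

per_month = symbian_q2_2010 / months_per_quarter
print(f"Monthly run rate: {per_month:,.0f}")
print(f"Shortfall vs target: {target_per_month - per_month:,.0f}")
```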

But more significantly, Symbian failed by some way to have the mindshare, in 2010, that the 2007 Leadership Team aspired to.  As the BBC report goes on to say:

Although Symbian is consistently the most popular smart phone operating system, it is often overshadowed by Apple’s iPhone and Google Android operating system.

I’m a big fan of audacious goals – sometimes called BHAGs (“Big Hairy Audacious Goals”).  The vision that Symbian would become the most widely used and most widely liked software platform on the planet motivated me and many of my colleagues to prodigious amounts of hard work over many years.

In retrospect, were these BHAGs misguided?  It’s too early to tell, but I don’t think so. Did we make mistakes along the way?  Absolutely. Should Symbian employees, nevertheless, take great pride in what Symbian has accomplished?  Definitely. Has the final chapter been written on smartphones?  No way!

But as for myself, my vision has evolved.  I’m no longer a “Symbian smartphone enthusiast”.  Instead, I’m putting my energies into being a “smartphone technology enthusiast”.

I don’t yet have a new BHAG in mind that’s as snappy as either “10 10 10” or “become the most widely used and most widely liked software platform on the planet”, but I’m working on it.

The closest I’ve reached so far is “smartphone technology everywhere”, but that needs a lot of tightening.

Footnote: As far as I can remember, the grainy photo below is another remnant of the Symbian Leadership Team Jan 2007 Tylney Hall offsite.  (The helmets and harnesses were part of a death-defying highwire team-building exercise.  We all lived to tell the tale.)

(From left to right: Standing: Andy Brannan, Charles Davies, Nigel Clifford, David Wood, Kent Eriksson, Kathryn Hodnett, Thomas Chambers, Jorgen Behrens; Squatting: Richard Lowther, Stephen Williams.)

27 August 2010

Reconsidering recruitment

Filed under: Accenture, Psion, recruitment, Symbian — David Wood @ 5:12 am

The team at ITjoblog (‘the blog for IT professionals’) recently asked me to write a guest column for them.  It has just appeared: “Reconsidering recruitment“.

With a few slight edits, here’s what I had to say…

Earlier in my career, I was involved in lots of recruitment.  The software team inside Psion followed a steep headcount trajectory through the process of transforming into Symbian, and continued to grow sharply in subsequent years as many new technology areas were added to the scope of Symbian OS.  As one of the senior software managers in the company throughout this period, I found myself time and again in interviewing and recruitment situations.  I was happy to give significant amounts of my time to these tasks, since I knew what a big impact good (or bad) recruitment can make to organisational dynamics.

In recent weeks, I’ve once again found myself in a situation where considerable headcount growth is expected.  I’m working on a project at Accenture, assisting their Embedded Mobility Services group.  Mobile is increasingly a hot topic, and there’s strong demand for people providing expert consultancy in a variety of mobile development project settings. This experience has led me to review my beliefs about the best way to carry out recruitment in such situations.  Permit me to think aloud…

To start with, I remain a huge fan of graduate recruitment programs.  The best graduates bring fire in their bellies: a “we can transform the world” attitude that doesn’t know what’s meant to be impossible – and often carries it out!  Of course, graduates typically take some time before they can be deployed in the frontline of commercial software development.  But if you plan ahead, and have effective “bootcamp” courses, you’ll have new life in your teams soon enough.  There will be up-and-coming stars ready to step into the shoes left by any unexpected staff departures or transfers.  If you can hire a group of graduates at the same time, so much the better.  They can club together and help each other, sharing and magnifying what they each individually learn from their assigned managers and mentors.  That’s the beauty of the network effect.

That’s just one example of the importance of networks in hiring.  I place a big value on having prior knowledge of someone who is joining your team.  Rather than having to trust your judgement during a brief interviewing process, and whatever you can distill from references, you can rely on actual experience of what someone is like to work with.  This effect becomes more powerful when several of your current workforce can attest to the qualities of a would-be recruit, based on all having worked together at a previous company.  I saw Symbian benefit from this effect via networks of former Nortel employees who all knew each other and who could vouch for each other’s capabilities during the recruitment process.  Symbian also had internal networks of former high-calibre people from SCO, and from Ericsson, among other companies.  The benefit here isn’t just that you know that someone is a great professional.  It’s that you already know what their particular special strengths are.  (“I recommend that you give this task to Mike.  At our last company, he did a fantastic job of a similar task.”)

Next, I recommend hiring for flexibility, rather than simply trying to fit a current task description.  I like to see evidence of people coping with ambiguity, and delivering good results in more than one kind of setting.  That’s because projects almost always change; likewise for organisational structures.  So while interviewing, I’m not trying to assess if the person I’m interviewing is the world expert in, say, C++ templates.  Instead, I’m looking for evidence that they could turn their hand to mastering whole new skill areas – including areas that we haven’t yet realised will be important to future projects.

Similarly, rather than just looking for rational intelligence skills, I want to see evidence that someone can fit well into teams.  “Soft skills”, such as inter-personal communication and grounded optimism, aren’t just an optional extra, even for roles with intense analytic content.  The best learning and the best performance come from … networks (to use that word again) – but you can’t build high-functioning networks if your employees lack soft skills.

Finally, high-performing teams that address challenging problems benefit from internal variation.  So don’t just look for near-clones of people who already work for you.  When scanning CVs, keep an eye open for markers of uniqueness and individuality.  At interview, these markers provide good topics to explore – where you can find out something of the underlying character of the candidate.

Inevitably, you’ll sometimes make mistakes with recruitment, despite taking lots of care in the process.  To my mind, that’s OK.  In fact, it’s better to take a few risks, since you can find some excellent new employees that way.  But you need to have in place a probation period, during which you pay close attention to how your hires are working out.  If a risky candidate turns out to be disappointing, even after some coaching and support, then you should act fast – for the sake of everyone concerned.

In summary, I see recruitment and induction as a task that deserves high focus from some of the most skilled and perceptive members of your existing workforce.  Skimp on these tasks and your organisation will suffer – sooner or later.  Invest well in these tasks, and you should see the calibre of your workforce steadily grow.

For further discussion, let me admit that rules tend to have limits and exceptions.  You might find it useful to identify limits and counter-examples to the rules of thumb I’ve outlined above!

19 May 2010

Chapter finished: A journey with technology

Five more days have passed, and I’ve completed another chapter draft (see snapshot below) of my proposed new book.

This takes me up to 30% of what I hope to write:

  • I’ve drafted three out of ten planned chapters.
  • The wordcount has reached 15,000, out of a planned total of 50,000.

After this, I plan to dig more deeply into specific technology areas.  I’ll be moving further out of my comfort zone.  First will be “Health”.  Fortuitously, I spent today at an openMIC meeting in Bath, entitled “i-Med: Serious apps for mobile healthcare”.  That provided me with some useful revision!

========

3. A journey with technology

<Snapshot of material whose master copy is kept here>

<< Previous chapter <<

Here’s the key question I want to start answering in this chapter: how quickly can technology progress in the next few decades?

This is far from being an academic question. At heart, I want to know whether it’s feasible for that progress to be quick enough to provide technological solutions to the calamitous issues and huge opportunities described in the first chapter of this book. The progress must be quick enough, not only for core technological research, but also for productisation of that technology into the hands of billions of consumers worldwide.

For most of this book, I’ll be writing about technologies from an external perspective. I have limited direct experience with, for example, the healthcare industry and the energy industry. What I have to say about these topics will be as, I hope, an intelligent outside observer. But in this chapter, I’m able to adopt an internal perspective, since the primary subject matter is the industry where I worked for more than twenty years: the smartphone industry.

In June 1988, I started work in London at Psion PLC, the UK-based manufacturer of electronic organisers. I joined a small team working on the software for a new generation of mobile computers. In the years that followed, I spent countless long days, long nights and (often) long weekends architecting, planning, writing, integrating, debugging and testing Psion’s software platforms. In due course, Psion’s software would power more than a million PDAs in the “Series 3” family of devices. However, the term “PDA” was unknown in 1988; likewise for phrases like “smartphone”, “palmtop computer”, and “mobile communicator”. The acronym “PDA”, meaning “personal digital assistant”, was coined by Apple in 1992 in connection with their ambitious but flawed “Newton” project – long before anyone conceived of the name “iPhone”.

I first became familiar with the term “smartphone” in 1996, during early discussions with companies interested in using Psion’s “EPOC32” software system in non-PDA devices. After a faltering start, these discussions gathered pace. In June 1998, ten years after I had joined Psion, a group of Psion senior managers took part in the announcement of the formation of a new entity, Symbian Ltd, which had financial backing from the three main mobile phone manufacturers of the era – Ericsson, Motorola, and Nokia. Symbian would focus on the software needs of smartphones. The initial software, along with 150 employees led by a 5 man executive team, was contributed by Psion. In the years that followed, I held Symbian executive responsibility, at different times, for Technical Consulting, Partnering, and Research. In due course, sales of devices based on Symbian OS exceeded 250 million devices.

In June 2008 – ten more years later, to the day – another sweeping announcement was made. The source code of Symbian OS, along with that of the S60 UI framework and applications from Nokia, would become open source, and would be overseen by a new independent entity, the Symbian Foundation.

My views on the possibilities for radical improvements in technology as a whole are inevitably coloured by my helter-skelter experiences with Psion and Symbian. During these 20+ years of intense projects following close on each other’s heels, I saw at first hand, not only many issues with developing and productising technology, but also many issues in forecasting the development and productisation of technology.

For example, the initial June 1998 business plans for Symbian are noteworthy both for what we got right, and for what we got wrong.

3.1 Successes and shortcomings in predicting the future of smartphones

In June 1998, along with my colleagues on the founding team at Symbian, I strove to foresee how the market for smartphones would unfold in the years ahead. This forecast was important, as it would:

  • Guide our own investment decisions
  • Influence the investment decisions of our partner companies
  • Set the context for decisions by potential employees whether or not to join Symbian (and whether or not to remain with Symbian, once they had joined).

Many parts of our vision turned out correct:

  • There was strong growth of interest in computers with increased mobility, and in mobile phones with increased computing capability.
  • Sales of Symbian-powered mobile devices would, by the end of the first decade of the next century, be measured in 100s of millions.
  • Our phrase, “Smartphones for all”, which initially struck many observers as ridiculous, became commonplace: interest in smartphones stopped being the preserve of a technologically sophisticated minority, and became a mainstream phenomenon.
  • Companies in numerous industries realised that they needed strong mobile offerings, to retain their relevance.
  • Rather than every company developing its own smartphone platform, there were big advantages for companies to collaborate in creating shared standard platforms.
  • The attraction of smartphones grew with the availability of add-on applications delivering functionality tailored to the needs of individual users.

Over the next decade, a range of new features became increasingly widespread on mobile phones, despite early scepticism:

  • Colour screens
  • Cameras – and video recorders
  • Messaging: SMS, simple email, rich email…
  • Web browsing: Google, Wikipedia, News…
  • Social networking: Facebook, Twitter, blogs…
  • Games – including multiplayer games
  • Maps and location-based services
  • Buying and selling (tickets, vouchers, cash).

By 2010, extraordinarily powerful mobile devices are in widespread use in almost every corner of the planet. An average bystander transported from 1998 to 2010 might well be astonished at the apparently near-magical capabilities of these ubiquitous devices.

On the other hand, many parts of our 1998 vision proved wrong.

First, we failed to foresee many of the companies that would be the most prominent in the smartphone industry by the end of the next decade. In 1998:

  • Apple seemed to be on a declining trajectory.
  • Google consisted of just a few people working in a garage. (Like Symbian, Google was founded in 1998.)
  • Samsung and LG were known to the Symbian team, but we decided not to include them on our initial list of priority sales targets, in view of their lowly sales figures.

Second, although our predictions of eventual sales figures for Symbian devices were broadly correct – namely 100s of millions – this was the result of two separate mistakes cancelling each other out:

  • We expected to have a higher share of the overall mobile phone market (over 50% – perhaps even approaching 100%).
  • We expected that overall phone market to remain at the level of 100s of millions per annum – we did not imagine it would become as large as a billion per year.

(A smaller-than-expected proportion of a larger-than-expected market worked out at around the same volume of sales.)
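To see how two such errors can offset each other, here is a toy calculation; the specific share percentages and market sizes are hypothetical, chosen purely for illustration:

```python
# Hypothetical figures showing how two forecasting errors can cancel out.
expected_share_pct = 60        # assumed: "over 50%" of the market
expected_market = 400_000_000  # assumed: a market of "100s of millions"

actual_share_pct = 20          # assumed: a much smaller share...
actual_market = 1_200_000_000  # ...of a much larger, billion-plus market

forecast = expected_market * expected_share_pct // 100
actual = actual_market * actual_share_pct // 100
print(forecast, actual)  # both work out at 240,000,000 units
```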

Third – and probably most significant for drawing wider lessons – we got the timescales significantly wrong. It took considerably longer than we expected for:

  • The first successful smartphones to become available
  • Next generation networks (supporting high-speed mobile data) to be widely deployed
  • Mobile applications to become widespread.

Associated with this, many pre-existing systems remained in place much longer than anticipated, despite our predictions that they would be unable to adapt to changing market demands:

  • RIM sold more and more BlackBerries, despite repeated concerns that their in-house software system would become antiquated.
  • The in-house software systems of major phone manufacturers, such as Nokia’s Series 40, likewise survived long past predicted “expiry” dates.

To examine what’s going on, it’s useful to look in more detail at three groups of factors:

  1. Factors accelerating growth in the smartphone market
  2. Factors restricting growth in the smartphone market
  3. Factors that can overcome the restrictions and enable faster growth.

Having reviewed these factors in the case of smartphone technology, I’ll then revisit the three groups of factors, with an eye to general technology.

3.2 Factors accelerating growth in the smartphone market

The first smartphone sales accelerator is decreasing price. As the devices become less expensive, more and more people can afford them. Other things being equal, a desirable piece of consumer electronics will sell more at a lower price.

The underlying cost of smartphones has been coming down for several reasons. Improvements in silicon technology mean that manufacturers can pack more transistors into the same space for the same cost, providing more memory and more processing power. There are also various industry scale effects. Companies who work with a mobile platform over a period of time gain the benefit of “practice makes perfect”, learning how to manage the supply chain, select lower-priced components, and assemble and manufacture their devices at ever lower cost.
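The cumulative effect of such cost curves is easy to underestimate. As a rough sketch (both the starting cost and the 30% annual decline are assumed figures, not data from the text):

```python
# Sketch: how a steady annual cost decline compounds over a decade.
# Both the $600 starting cost and the 30% yearly decline are assumptions.
cost = 600.0
for year in range(1, 11):
    cost *= 0.70  # a 30% cost reduction each year
    print(f"Year {year:2d}: ${cost:8.2f}")
```

After ten years the cost has fallen to under 3% of its starting value – the kind of decline that turns a luxury gadget into a mass-market product.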

A second sales accelerator is increasing reliability. With some exceptions (that have tended to fall by the wayside), smartphones have become more and more reliable. They start faster, have longer battery life, and need fewer resets. As such, they appeal to ordinary people in terms of speed, performance, and robustness.

A third sales accelerator is increasing stylishness. In the early days of smartphones, people would often say, “These smartphones look quite interesting, but they are a bit too big and bulky for my liking: frankly, they look and feel like a brick.” Over time, smartphones became smaller, lighter, and more stylish. In both their hardware and their software, they became more attractive and more desirable.

A fourth sales accelerator is increasing word of mouth recommendations. The following sets of people have all learned, from their own experience, good reasons why consumers should buy smartphones:

  • Industry analysts – who write reports that end up influencing a much wider network of people
  • Marketing professionals – who create compelling advertisements that appear on film, print, and web
  • Retail assistants – who are able to highlight attractive functionality in devices, at point of sale
  • Friends and acquaintances – who can be seen using various mobile services and applications, and who frequently sing the praises of specific devices.

This extra word of mouth exists, of course, because of a fifth sales accelerator – the increasing number of useful and/or entertaining mobile services that are available. This includes built-in services as well as downloadable add-on services. More and more individuals learn that mobile services exist which address specific problems they experience. This includes convenient mobile access to banking services, navigation, social networking, TV broadcasts, niche areas of news, corporate databases, Internet knowledgebases, tailored educational material, health diagnostics, and much, much more.

A sixth sales accelerator is increasing ecosystem maturity. The ecosystem is the interconnected network of companies, organisations, and individuals who create and improve the various mobile services and enabling technology. It takes time for this ecosystem to form and to learn how to operate effectively. However, in due course, it forms a pool of resources that is much larger than exists just within the first few companies who developed and used the underlying mobile platform. These additional resources provide, not just a greater numerical quantity of mobile software, but a greater variety of different innovative ideas. Some ecosystem members focus on providing lower cost components, others on providing components with higher quality and improved reliability, and yet others on revolutionary new functionality. Others again provide training, documentation, tools, testing, and so on.

In summary, smartphones are at the heart of a powerful virtuous cycle. Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle. Applications and services which prove their value as add-ons for one generation of smartphones become bundled into the next generation. With this extra built-in functionality, the next generation is intrinsically more attractive, and typically is cheaper too. Developers see an even larger market and increase their efforts to supply software for this market.
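The shape of that virtuous cycle can be sketched with a toy feedback model; the growth coefficients below are invented, serving only to show how two mutually reinforcing quantities accelerate together:

```python
# Toy positive-feedback model: apps attract users, and users attract apps.
users, apps = 1.0, 1.0  # arbitrary starting units
for year in range(1, 6):
    new_users = 0.3 * apps   # available apps draw in new users
    new_apps = 0.4 * users   # a bigger user base draws in developers
    users += new_users
    apps += new_apps
    print(f"Year {year}: users={users:.2f}, apps={apps:.2f}")
```

Each side’s growth feeds the other, so both curves rise faster each year – until, of course, one of the restricting factors discussed next comes into play.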

3.3 Factors restricting growth in the smartphone market

Decreasing price. Increasing reliability. Increasing stylishness. Increasing word of mouth recommendations. Increasingly useful mobile services. Increasing ecosystem maturity. What could stand in the way of these powerful accelerators?

Plenty.

First, there are technical problems with unexpected difficulty. Some problems turn out to be much harder than initially imagined. For example, consider speech recognition, in which a computer can understand spoken input. When Psion planned the Series 5 family of PDAs in the mid 1990s (as successors to the Series 3 family), we had a strong desire to include speech recognition capabilities in the device. Three “dictaphone style” buttons were positioned in a small unit on the outside of the case, so that the device could be used even when the case (a clamshell) was shut. Over-optimistically, we saw speech recognition as a potential great counter to the pen input mechanisms that were receiving lots of press attention at the time, on competing devices like the Apple Newton and the Palm Pilot. We spoke to a number of potential suppliers of voice recognition software, who assured us that suitably high-performing recognition was “just around the corner”. The next versions of their software, expected imminently, would impress us with its accuracy, they said. Alas, we eventually reached the conclusion that the performance was far too unreliable and would remain so for the foreseeable future – even if we went the extra mile on cost, and included the kind of expensive internal microphone that the suppliers recommended. We feared that “normal users” – the target audience for Psion PDAs – would be perplexed by the all-too-frequent inaccuracies in voice recognition. So we took the decision to remove that functionality. In retrospect, it was a good decision. Even ten years later, voice recognition functionality on smartphones generally fell short of user expectations.

Speech recognition is just one example of a deeply hard technical problem on which real progress took much longer than expected. Others include:

  • Avoiding smartphone batteries being drained too quickly, from all the processing that takes place on the smartphone
  • Enabling rapid search of all the content on a device, regardless of the application used to create that content
  • Devising a set of application programming interfaces which have the right balance between power-of-use and ease-of-use, and between openness and security.

Second, there are “chicken-and-egg” coordination problems – sometimes likened to “the prisoner’s dilemma”. New applications and services in a networked marketplace often depend on related changes being coordinated at several different points in the value chain. Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make less sense when viewed individually. For example, successful mobile phones required both networks and handsets. Successful smartphones required new data-enabled networks, new handsets, and new applications. And so on.

Above, I wrote about the potential for “a powerful virtuous cycle”:

Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle.

However, this only works once the various factors are all in place. A new ecosystem needs to be formed. This involves a considerable coordination problem: several different entities need to un-learn old customs, and adopt new ways of operating, appropriate to the new value chain. That can take a lot of time.

Worse – and this brings me to a third problem – many of the key players in a potential new ecosystem have conflicting business models. Perhaps the new ecosystem, once established, will operate with greater overall efficiency, delivering services to customers more reliably than before. However, wherever there are prospects of cost savings, there are companies who potentially lose out – companies who are benefiting from the present high prices. For example, network operators making healthy profits from standard voice services were (understandably) apprehensive about distractions or interference from low-profit data services running over their networks. They were also apprehensive about risks that applications running on their networks would:

  • Enable revenue bypass, with new services such as VoIP and email displacing, respectively, standard voice calls and text messaging
  • Saturate the network with spam
  • Cause unexpected usability problems on handsets, which the user would attribute to the network operator, entailing extra support costs for the operator.

The outcome of these risks of loss of revenue is that ecosystems might fail to form – or, having formed with a certain level of cooperation, might fail to attain deeper levels of cooperation. Vested interests get in the way of overall progress.

A fourth problem is platform fragmentation. The efforts of would-be innovators are spread across numerous different mobile platforms. Instead of a larger ecosystem all pulling in the same direction, the efforts are diffused, with the risk of confusing and misleading participants. Participants think they can re-apply skills and solutions from one mobile product in the context of another, but subtle and unexpected differences cause incompatibilities which can take a lot of time to identify and debug. Instead of collaboration effectively turning 1+1 into 3, confusion turns 1+1 into 0.5.

A fifth problem is poor usability design. Even though a product is powerful, ordinary end users can’t work out how to operate it, or get the best experience from it. They feel alienated by it, and struggle to find their favourite functionality in amongst bewildering masses of layered menu options. A small minority of potential users, known as “technology enthusiasts”, are happy to use the product, despite these usability issues; but they are rare exceptions. As such, the product fails to “cross the chasm” (to use the language of Geoffrey Moore) to the mainstream majority of users.

The sixth problem underlies many of the previous ones: it’s the problem of accelerating complexity. Each individual chunk of new software adds value, but when they coalesce in large quantities, chaos can ensue:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window.
  • Smartphone application development may grow in difficulty, as developers need to juggle different programming interfaces and optimisation methods.
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.

3.4 Factors that can overcome the restrictions and enable faster growth

Technical problems with unexpected difficulty. Chicken-and-egg coordination problems. Conflicting business models. Platform fragmentation. Poor usability design. Accelerating complexity. These are all factors that restrict smartphone progress. Without solving these problems, the latent potential of smartphone technology goes unfulfilled. What can be done about them?

At one level, the answer is: look at the companies who are achieving success with smartphones, despite these problems, and copy what they’re doing right. That’s a good starting point, although it risks being led astray by instances where companies have had a good portion of luck on their side, in addition to progress that they merited through their own deliberate actions. (You can’t jump from the observation that company C1 took action A and subsequently achieved market success, to the conclusion that company C2 should also take action A.) It also risks being led astray by instances where companies are temporarily experiencing significant media adulation, but only as a prelude to an unravelling of their market position. (You can’t jump from the observation that company C3 is currently a media darling, to the conclusion that a continuation of what it is currently doing will achieve ongoing product success.) With these caveats in mind, here is the advice that I offer.

The most important factor to overcome these growth restrictions is expertise – expertise in both design and implementation:

  • Expertise in envisioning and designing products that capture end-user attention and which are enjoyable to use again and again
  • Expertise in implementing an entire end-to-end product solution.

The necessary expertise (both design and implementation) spans eight broad areas:

  1. technology – such as blazing fast performance, network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security
  2. ecosystem design – to solve the “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised
  3. business models – identifying new ways in which groups of companies can profit from adopting new technology
  4. community management – encouraging diverse practitioners to see themselves as part of a larger whole, so that they are keen to contribute
  5. user experience – to ensure that the resulting products will be willingly accepted and embraced by “normal people” (as opposed to just early-adopter technology enthusiasts)
  6. agile project management – to avoid excess wasted investment in cases where project goals change part way through (as they inevitably do, due to the uncertain territory being navigated)
  7. lean thinking – including a bias towards practical simplicity, a profound distrust of unnecessary complexity, and a constant desire to identify and deal with bottleneck constraints
  8. system integration – the ability to pull everything together, in a way that honours the core product proposition, and which enables subsequent further evolution.

To be clear, I see these eight areas of expertise as important for all sectors of complex technology development – not just in the smartphone industry.

Expertise isn’t something that just exists in books. It manifests itself:

  • In individual people, whose knowledge spans different domains
  • In teams – where people can help and support each other, playing to everyone’s strengths
  • In tools and processes – which are the smart embodiment of previous generations of expertise, providing a good environment to work out the next generation of expertise.

In all three cases, the expertise needs to be actively nurtured and enhanced. Companies who under-estimate the extent of the expertise they need, or who try to get that expertise on the cheap – or who stifle that expertise under the constraints of mediocre management – are likely to miss out on the key opportunities provided by smartphone technology. (Just because it might appear that a company finds it easy to do various tasks, it does not follow that these tasks are intrinsically easy to carry out. True experts often make hard tasks look simple.)

But even with substantial expertise available and active, it remains essentially impossible to be sure about the timescales for major new product releases:

  • Novel technology problems can take an indeterminate amount of time to solve
  • Even if the underlying technology progresses quickly, the other factors required to create an end-to-end solution can fall foul of numerous unforeseen delays.

In case that sounds like a depressing conclusion, I’ll end this section with three brighter thoughts:

First, if predictability is particularly important for a project, you can increase the chances of your project hitting its schedule by sticking to incremental evolutions of pre-existing solutions. That can take you a long way, even though you’ll reduce the chance of more dramatic breakthroughs.

Second, if you can afford it, you should consider running two projects in parallel – one that sticks to incremental evolution, and another that experiments with more disruptive technology. Then see how they both turn out.

Third, the relationship between “speed of technology progress” and “speed of product progress” is more complex than I’ve suggested. I’ve pointed out that the latter can lag the former, especially where there’s a shortage of expertise in fields such as ecosystem management and the creation of business models. However, sometimes the latter can move faster than the former. That occurs once the virtuous cycle is working well. In that case, the underlying technological progress might be exponential, whilst the productisation progress could become super-exponential.

3.5 Successes and shortcomings in predicting the future of technology

We all know that it’s a perilous task to predict the future of technology. The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars. These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary. Indeed, history is littered with curious and amusing examples of flawed predictions of the future. Popular science fiction fares no better:

  • The TV series “Lost in space”, which debuted in 1965, featured a manned spacecraft leaving Earth en route for a distant star, Alpha Centauri, on 16 October 1997.
  • Arthur C Clarke’s “2001: a space odyssey”, made in 1968, featured a manned spacecraft flight to Jupiter.
  • Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”, coincidentally also first published in 1968, described a world set in 1992 in which androids (robots) are extremely hard to distinguish from humans. (Later editions of the novel changed the date to 2021; the film Blade Runner, which was based on the novel, is set in 2019.)

Forecasts often go wrong when they spot a trend, and then extrapolate it. Projecting trends into the future is a dangerous game:

  • Skyscrapers rapidly increased in height in the early decades of the 20th century. But after the Empire State Building was completed in 1931, the rapid increases stopped.
  • Passenger aircraft rapidly increased in speed in the middle decades of the 20th century. But after Concorde, which flew its maiden flight in 1969, there have been no more increases.
  • Manned space exploration went at what might be called “rocket pace” from the jolt of Sputnik in 1957 up to the sets of footprints on the Moon in 1969-1972, but then came to an abrupt halt. At the time of writing, there are still no confirmed plans for a manned trip to Mars.

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects. Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on. Just because a technology improved in the past, it does not mean there will be sufficient societal motivation to keep on improving it in the future. Technology is not enough. Especially for changes that are complex and demanding, up to six additional criteria need to be satisfied as well:

  1. The technological development has to satisfy a strong human need.
  2. The development has to be possible at a sufficiently attractive price to individual end users.
  3. The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle.
  4. There must be a clear implementation path whereby the eventual version of the technology can be attained through a series of steps that are, individually, easier to achieve.
  5. When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be open (to accept new ideas).
  6. Likewise, the development process must be commercially attractive, or provide some other strong incentive, to encourage the generation of new ideas, and, even more important, to encourage people to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation.

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications. For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son. The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century. The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world! As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As mentioned earlier, an important factor that can allow accelerating technological progress is the establishment of an operational virtuous cycle that provides positive feedback. Here are four more examples:

  1. The first computers were designed on paper and built by hand. Later computers benefited from computer-aided design and computer-aided manufacture. Even later computers benefit from even better computer-aided design and manufacture…
  2. Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allows more complex software to be created more quickly – including more powerful tools…
  3. More powerful hardware enables new software which enables new use cases which demand more innovation in improving the hardware further…
  4. Technology reduces prices which allows better technology to be used more widely, resulting in more people improving the technology…

A well-functioning virtuous cycle makes it more likely that technological progress can continue. But the biggest factor determining whether a difficult piece of progress occurs is often the degree of society’s motivation towards that progress. Investment in ever-faster passenger airlines ceased, because people stopped perceiving that ever-faster airlines were that important. Manned flight to Mars was likewise deemed to be insufficiently important: that’s why it didn’t take place. The kinds of radical technological progress that I discuss in this book are, I believe, all feasible, provided sufficient public motivation is generated and displayed in support of that progress. This includes major enhancements in health, education, clean energy, artificial general intelligence, human autonomy, and human fulfilment. The powerful public motivation will cause society to prioritise developing and supporting the types of rich expertise that are needed to make this technological progress a reality.

3.6 Moore’s Law: A recap

When I started work at Psion, I was given a “green-screen” console terminal, connected to a DEC VAX minicomputer running VMS. That’s how I wrote my first pieces of software for Psion. A short while afterwards, we started using PCs. I remember that the first PC I used had a 20MB hard disk. I also remember being astonished to find that a colleague had a hard disk that was twice as large. What on earth does he do with all that disk space, I wondered. But before long, I had a new PC with a larger hard disk. And then, later, another new one. And so on, throughout my 20+ year career in Psion and Symbian. Each time a new PC arrived, I felt somewhat embarrassed at the apparent excess of computing power it provided – larger disk space, more RAM memory, faster CPU clock speed, etc. On leaving Symbian in October 2009, I bought a new laptop for myself, along with an external USB disk drive. That disk drive was two terabytes in size. For roughly the same amount of money (in real terms) that had purchased 20MB of disk memory in 1989, I could now buy a disk that was 100,000 times larger. That’s broadly equivalent to hard disks doubling in size every 15 months over that 20-year period.
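As a quick sanity check (not part of the original essay), the doubling-period arithmetic above can be reproduced with a few lines of code, using the 20MB and 2TB figures from the text:

```python
import math

def doubling_period_months(growth_factor: float, years: float) -> float:
    """Months per doubling, given a total growth factor over a span of years."""
    doublings = math.log2(growth_factor)
    return years * 12 / doublings

# 20MB (1989) grows to 2TB (2009): a 100,000-fold increase over 20 years
growth = (2 * 10**12) / (20 * 10**6)
print(round(growth))                                  # 100000
print(round(doubling_period_months(growth, 20), 1))   # 14.4
```

A 100,000-fold increase is about 16.6 doublings, so the period works out at roughly 14.4 months – close to the 15 months quoted above.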

This repeated doubling of performance, on a fairly regular schedule, is a hallmark of what is often called “Moore’s Law”, following a paper published in 1965 by Gordon Moore (subsequently one of the founders of Intel). It’s easy to find other examples of this exponential trend within the computing industry. University of London researcher Shane Legg has published a chart of the increasing power of the world’s fastest supercomputers, from 1960 to the present day, along with a plausible extension to 2020. This chart measures the “FLOPS” capability of each supercomputer – the number of floating point (maths) operations it can execute in a second. The values move all the way from kiloFLOPS through megaFLOPS, gigaFLOPS, teraFLOPS, and petaFLOPS, and point towards exaFLOPS by 2020. Over sixty years, the performance improves through twelve and a half orders of magnitude, which is more than 40 doublings. This time, the doubling period works out at around 17 months.

Radical futurist Ray Kurzweil often uses the following example:

When I was an MIT undergraduate in 1965, we all shared a computer that took up half a building and cost tens of millions of dollars. The computer in my pocket today [a smartphone] is a million times cheaper and a thousand times more powerful. That’s a billion-fold increase in the amount of computation per dollar since I was a student.

A billion-fold increase consists of 30 doublings – which, spread out over 44 years from 1965 to 2009, gives a doubling period of around 18 months. And to get the full picture of the progress, we should include one more observation alongside the million-fold price improvement and thousand-fold processing power improvement: the 2009 smartphone is about one hundred thousand times smaller than the 1965 mainframe.

These steady improvements in computer hardware, spread out over six decades so far, are remarkable, but they’re not the only example of this kind of long-term prodigious increase. Martin Cooper, who has a good claim to be considered the inventor of the mobile phone, has pointed out that the amount of information that can be transmitted over useful radio spectrum has roughly doubled every 30 months since 1897, when Guglielmo Marconi first patented the wireless telegraph:

The rate of improvement in use of the radio spectrum for personal communications has been essentially uniform for 104 years. Further, the cumulative improvement in the effectiveness of personal communications total spectrum utilization has been over a trillion times in the last 90 years, and a million times in the last 45 years

Smartphones have benefited mightily from both Moore’s Law and Cooper’s Law. Other industries can benefit in a similar way too, to the extent that their progress can be driven by semiconductor-powered information technology, rather than by older branches of technology. As I’ll review in later chapters, there are good reasons to believe that both medicine and energy are on the point of dramatic improvements along these lines. For example, the so-called Carlson curves (named after biologist Rob Carlson) track exponential decreases in the costs of both sequencing (reading) and synthesising (writing) base pairs of DNA. It cost about $10 to sequence a single base pair in 1990, but this had reduced to just 2 cents by 2003 (the date of the completion of the human genome project). That’s a 500-fold improvement in cost-effectiveness – around 9 doublings – in just 13 years, making a doubling period of around 17 months.
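The Carlson-curve figures just quoted follow the same arithmetic – a minimal sketch (not from the original essay), treating each halving of cost as one doubling of cost-effectiveness:

```python
import math

cost_1990, cost_2003 = 10.0, 0.02          # dollars per base pair, as quoted
reduction = cost_1990 / cost_2003           # ~500-fold cost reduction
halvings = math.log2(reduction)             # each halving = one doubling of value per dollar
period_months = 13 * 12 / halvings          # 13 years between the two data points
print(round(reduction), round(halvings, 2), round(period_months, 1))  # 500 8.97 17.4
```

That is, roughly 9 doublings in 13 years, for a doubling period of about 17 months – strikingly similar to the hard-disk and supercomputer figures above.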

Moore’s Law and Cooper’s Law are far from being mathematically exact. They should not be mistaken for laws of physics, akin to Newton’s Laws or Maxwell’s Laws. Instead, they are empirical observations, with lots of local deviations when progress temporarily goes either faster or slower than the overall average. Furthermore, scientists and researchers need to keep on investing lots of skill, across changing disciplines, to keep the progress occurring. The explanation given on the website of Martin Cooper’s company, ArrayComm, provides useful insight:

How was this improvement in the effectiveness of personal communication achieved? The technological approaches can be loosely categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increase in magnitude of the usable radio frequency spectrum.

How much of the improvement can be attributed to each of these categories? Of the million times improvement in the last 45 years, roughly 25 times were the result of being able to use more spectrum, 5 times can be attributed to the ability to divide the radio spectrum into narrower slices — frequency division. Modulation techniques like FM, SSB, time division multiplexing, and various approaches to spread spectrum can take credit for another 5 times or so. The remaining sixteen hundred times improvement was the result of confining the area used for individual conversations to smaller and smaller areas — what we call spectrum re-use…
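The multiplicative breakdown in the quoted passage can be checked directly – the four factors, as given by ArrayComm, should multiply out to the million-fold total:

```python
# Factors behind the million-fold spectrum-efficiency gain (ArrayComm figures)
more_spectrum      = 25    # wider usable radio frequency spectrum
frequency_division = 5     # narrower frequency slices
modulation         = 5     # FM, SSB, time division, spread spectrum, etc.
spatial_reuse      = 1600  # smaller and smaller cells re-using the same spectrum

total = more_spectrum * frequency_division * modulation * spatial_reuse
print(total)  # 1000000
```

Notably, spectrum re-use alone accounts for 1,600 of the 1,000,000 – far outweighing the other three factors combined.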

Cooper suggests that his law can continue to hold until around 2050. Experts at Intel say they can foresee techniques to maintain Moore’s Law for at least another ten years – potentially longer. In assessing the wider implications of these laws, we need to consider three questions:

  1. How much technical runway is left in these laws?
  2. Can the benefits of these laws in principle be applied to transform other industries?
  3. Will wider system effects – as discussed earlier in this chapter – frustrate overall progress in these industries (despite the technical possibilities), or will they in due course even accelerate the underlying technical progress?

My answers to these questions:

  1. Plenty
  2. Definitely
  3. It depends on whether we can educate, motivate, and organise a sufficient critical mass of concerned citizens. The race is on!
