dw2

25 June 2008

A tale of two meetings

Filed under: climate change, collaboration, Nuclear energy, SitP, solar energy, Spiked — David Wood @ 10:31 pm

In the past, I’ve enjoyed several meetings of the London Skeptics in the Pub (“SitP”). More than 100 people cram into the basement meeting space of Penderel’s Oak in Holborn, and listen to a speaker cover a contentious topic – such as alternative medicine, investigating the paranormal, or the “moon landings hoax”. What’s typically really enjoyable is the extended Q&A session in the second half of the meeting, when the audience often dissects the speaker’s viewpoint. Attendee numbers have crept up steadily over the nine years the group has existed. It’s little surprise that the group was voted into the Top Ten London Communities 2008 by Time Out.

Last night, the billed speaker was the renowned (many would say “infamous”) climate change denier, Fred Singer. The talk was advertised as follows:

Global Warming: Science, Economics, and some Moral Issues: What Al Gore Never Told You.

The science is settled: Evidence clearly demonstrates that carbon dioxide contributes insignificantly to Global Warming and is therefore not a ‘pollutant.’ This fact has not yet been widely recognized, and irrational Global Warming fears continue to distort energy policies and foreign policy. All efforts to curtail CO2 emissions, whether global, federal, or at the state level, are pointless — and in any case, ineffective and very costly. On the whole, a warmer climate is beneficial. Fred will comment on the vast number of implications.

Since this viewpoint is so far removed from consensus scientific thinking, I was hoping for a cracking debate. And indeed, the evening started well. Singer turned out to be a better speaker than I expected. Even though he’s well into his 80s, he spoke with confidence, courtesy, and good humour. And he had some interesting material:

  • A graph that seemed to show that global temperature has not been rising over the last ten years (even though atmospheric CO2 has incontrovertibly been rising over that time period)
  • A claim that all scientific models of atmospheric warming are significantly at variance with observed data (and therefore, we shouldn’t give these models much credence)
  • Suggestions that global warming is more strongly influenced by cosmic rays than by atmospheric CO2.

(The contents of the talk were similar to what’s in this online article.)

So I eagerly anticipated the Q&A. But oh, what a disappointment. I found myself more and more frustrated:

  • Quite a few of the audience members seemed incapable of asking a clear, relevant, concise question. Instead, they tended to go off on tangents, or went round and round in circles. (To my mind, the ability to identify and ask the key question, without distraction, is an absolutely vital skill for the modern age.)
  • Alas, the speaker could not hear the questions (being, I guess, slightly deaf on account of his advanced age); so they had to be repeated by the meeting moderator, who was standing at the front next to the speaker
  • The moderator often struggled to capture the question from what the audience member had said, so there were several iterations here
  • Then the speaker frequently took a LONG time to answer the question. (He was patient and polite, but he was also painstakingly SLOW.)

Result: lots of time wasted, in my view. No one landed anything like a decisive refutation of the speaker’s claims. There were lots of good questions that should have been asked, but time didn’t allow it. I also blamed myself, for not having done any research prior to the meeting (I had been pretty busy on other matters for the last few days), and for not being able to do my usual trick of looking up information on my smartphone during a meeting (via Google, Wikipedia, etc), because network reception was very poor in the part of the basement where I was standing. In conclusion, although the discussion was fun, it fell well short of the debate that the speaker’s presentation deserved.

I mention all this, not just because I’m deeply concerned about the fearsome prospects of runaway global warming, but also because I’m interested in the general question of how to organise constructive debates that manage to reach to the heart of the matter (whatever the matter is).

As an example of a meeting that did have a much better debate, let me mention the one I attended this evening. It was hosted by Spiked, and was advertised as follows:

Nuclear power: what’s the alternative? The future of energy in Britain

As we seek to overcome our reliance on fossil fuels, what are the alternatives? Offshore turbines and wind farms are often cited as options but can they really meet more than a fraction of the UK’s energy needs? If not, is nuclear power a viable alternative? Public anxieties about nuclear plants’ safety, their susceptibility to terrorist attacks, and the problem of safely disposing of radioactive waste persist. But to what extent are these concerns justified? Is the real issue the public’s perception of both the risks and potential of nuclear energy? Ultimately, does nuclear energy, be it the promise of fusion or the reality of fission, finally mean we can stop guilt-tripping about energy consumption?

Instead of just one speaker, there were five, who had a range of well-argued but differing viewpoints. And the chairperson, Timandra Harkness (Director of Cheltenham Science Festival’s Fame Lab) was first class:

  • She made it clear that each speaker was restricted to 7 minutes for their opening speech (and they all kept to this limit, with good outcomes: focus can have wonderful results)
  • Then there were around half a dozen questions from the floor, asked one after the other, before the speaker panel were invited to reply
  • There were several more rounds of batched-up questions followed by responses
  • Because of the format, the speakers had the option of ignoring the (few) irrelevant questions, and could concentrate on the really interesting ones.

For the record, I thought that all the speakers made good points, but Keith Barnham, co-founder of the solar cell manufacturing company Quantasol, was particularly interesting, with his claims for the potential of new generation photovoltaic concentrator solar cells. (This topic also featured in an engrossing recent Time article.) He recommended that we put our collective hope for near-future power generation “in the [silicon] industry that gave us the laptop and the mobile phone, rather than the industry that gave us Chernobyl and Sellafield”. (Ouch!) Advances in silicon have time and again driven down the prices of mobile phones; these benefits will also come quickly (Barnham claimed) to the new generation solar cells.

But the conclusion I want to draw is that the best way to ensure a great debate is to have a selection of speakers with complementary views, to insist on focus, and to chair the meeting particularly well. Yes, collaboration is hard – but when it works, it’s really worth it!

Footnote: the comparison between the Skeptics in the Pub meeting and the Spiked one is of course grossly unfair, since the former is run on a shoestring (there’s a £2 charge to attend) whereas the latter has a larger apparatus behind it (the entry charge was £10, payable in advance; and there’s corporate sponsorship from Clarke Mulder Purdie). But hey, I still think there are valid lessons in this tale of two different meetings – each interesting and a good use of time, but one ultimately proving much more satisfactory than the other.

24 June 2008

Symbian 2-0

Filed under: Nokia, Open Source, Symbian Foundation — David Wood @ 6:13 am

Months of planning culminated this morning with the announcement of an intended dramatic evolution for Symbian – an evolution that should decisively advance the Symbian platform toward its long-anticipated status of being the most widely used software platform on the planet.

The announcement of the Symbian Foundation comes on the very first day of the second decade of Symbian’s existence. It also sets the scene for a much wider participation by companies and individuals in the development and deployment of novel services and applications for all sorts of new and improved Symbian-powered mobile devices. Because this second decade of Symbian’s history should witness radically greater collaboration than before, the designation “Symbian 2.0” seems doubly apt.

Subject to the deal receiving regulatory approval, I envision a whole series of far-reaching changes to take place in the months and years ahead:

  • It will become possible for the best practices of Open Source Software to be applied in and around the software platform that is the most suited to smartphones
  • Closer working relations between personnel from Symbian and S60 teams will result in more efficient development, accelerating the rate at which the overall platform improves
  • The lower barriers to usage of the Symbian platform should mean that the number of customers and partners will rocket
  • The unification of the formerly separate UI systems will further increase the attractiveness of the new platform
  • The platform will be royalty free – which will be another factor to boost usage
  • Because of increased adoption of the platform, the ecosystem will also grow, through the OS-ES volume-value virtuous cycle mechanism
  • For all these reasons, smartphone innovation should jump forward in pace, to the potential benefit of all participants in the ever expanding, ever richer, converged mobile industry
  • Customers and partners alike – both newcomers and old-timers – will be on the lookout for fresh options for differentiation and support
  • In short, there will be lots of new opportunities for people with knowledge of the Symbian platform.

Great credit is due to Symbian’s shareholders, and especially to Nokia, for enabling and driving this bold and powerful initiative.

Of course, with such a large change, there’s considerable uncertainty about how everything will work out. Many people will be unsure exactly where they, personally, will end up in this new world. Lots of details remain to be decided. But the basic direction is clear: participants in the Symbian 2.0 ecosystem will be part of a much bigger game than before. It’s going to be exciting – and no doubt somewhat scary too. As Symbian’s first CEO, Colly Myers, used to say, “Let’s rock and roll!”

Postscript: For a clear rationale of some key aspects of the Symbian Foundation plan, take a look at what my Symbian colleague John Forsyth has to say, here.

23 June 2008

Fragmentation is easy, integration is hard

Filed under: Android, fragmentation, integration — David Wood @ 2:13 pm

The Wall Street Journal reports today that “Google’s Mobile-Handset Plans Are Slowed”. The Inquirer picks up the story and adds a few choice morsels of its own: “Depressing news as Google’s Android delayed”:

However, life’s little crises just kept getting the Android down and now apparently some mobile network operators like Sprint Nextel, have abandoned any attempt to get an Android on the market until 2009. This is purportedly because the majority of Google’s attention and resources have been going to Sprint’s competitor T-Mobile USA, who still hope to have an Android mobile out by the end of Q4. We have it on good authority (from un-named sources of course) that Sprint actually asked Google “Do you want me to sit in the corner and rust, or just fall apart where I’m standing?”…

Director of mobile platforms at Google, Andy Rubin, gloomily noted that trying to develop software while the company’s irritating partners kept pushing for new features, was a time-consuming task. “This is where the pain happens”, he sighed.

I recognise this pain. It’s something that has occurred many times during Symbian’s history. That’s why I’ve emphasised a dilemma facing Android: Fragmentation is easy, but integration is hard. Coping with multiple forceful customers at the same time, while your codebase is still immature, is a really tough challenge. Glitzy demos of v2 features don’t help matters: they drive up interest that needs to be deflated, as you have to explain to customers that, no, these features aren’t ready to include in the software package for their next phones, despite looking brilliant on YouTube. Instead, the focus needs to go on the hard, hard task of integration.

22 June 2008

Reasons why humans will be smarter by 2020

Filed under: Flynn effect, IA, smartphones — David Wood @ 9:51 pm

Alvis Brigis has published a provocative article on FutureBlogger, “How smart will humans be by 2020?” The article looks at technology and social trends which can provide so-called IA – “Intelligence Amplification“. (IA is sometimes expanded, instead, to “Intelligence Augmentation”.)

Alvis produces a compelling list of intelligence-amplifying trends:

  • Widening bandwidth (Faster internet connections, pervasive WiFi…)
  • Growing global information
  • Evolving social media (including Wikipedia…)
  • Video-to-video chat
  • Evolving 3D and immersive media (including Second Life, Google Earth, and GTA4)
  • Better search
  • New interface products (including touchscreens, mini-projectors, haptic feedback…)
  • Improved portable battery power
  • Time-savers (such as robots and more efficient machines)
  • Translators (akin to the Babelfish of HHGG)
  • Rising value of attention (including more relevant targeted ads)
  • Direct brain-computer interfaces
  • Health benefits (from advances in nanotech, biology, pharma, etc).

One reason I’m personally very positive about smartphones is that I believe in their potential to “make their users smarter”. I’ve often said, only half-joking, that I view my Psion Series 5mx PDA as my “second brain”, and my current smartphone as my “third brain”. Convenient and swift access to information from any location, whenever the need arises, is only part of the benefit. The organiser capabilities can play a big role too – as does the connectivity to people and communities (rather than just to information stores). So in my mind, the potential of smartphones includes people who increasingly:

  • Know what’s important
  • Know what they want to achieve in life
  • Know how to get it.

PS For wider thoughts about the reasons for improved intelligence, see this recent interview by Alvis Brigis of James Flynn (the discoverer of what has come to be known as the “Flynn effect“)

PPS I’d like to include the FutureBlogger posts in my blogroll (to the right), but every time I feed Blogger the URL http://www.memebox.com/futureblogger to include in the blogroll, it gets changed into a link to a different blog. Does anyone know how to fix this?

21 June 2008

Open minds about open source

Filed under: Open Source — David Wood @ 3:55 pm

There’s been a surprising amount of heat (not to mention vitriol) in the responses to recent blog postings from Ari Jaaksi of Nokia on the topic of the potential mutual benefits of a constructive encounter between Open Source developers and the companies who make money from mobile telephony.

Ari’s message (in “Some learning to do?“, and again in “Good comments from Bruce“) is that there’s a need for two-way learning, and for open minds. To me, that seems eminently sensible. This topic has so many angles (and is changing so quickly) that we shouldn’t expect anyone to have a complete set of answers in place. But quite a few online responses take a different stance, basically saying that there’s nothing for Open Source developers to learn – they know it all already – and that any movement must be on the side of the mobile phone business companies. The mountain will have to come to Mohammed.

At the same time as I’ve been watching that debate (with growing disbelief), I’ve been thumbing my way through the 500+ page book “Perspectives on Free and Open Source Software”. This book contains 24 chapters (all written by different authors), one introduction (by the joint editors of the book: Joseph Feller, Brian Fitzgerald, Scott Hissam, and Karim Lakhani), one foreword (by Michael Cusumano), and one epilogue (by Clay Shirky). The writers range in their attitudes toward Open Source, all the way from strong enthusiasm to considerable scepticism. They’ve all got interesting things to say. But they have several things in common (which sets them apart from the zealotry in the online blog responses):

  • An interest in finding and then examining data and facts
  • A willingness to engage in dialog and debate
  • A belief that Open Source is now well established, and won’t be disappearing – but also a belief that this observation is only the beginning of the discussion, rather than the end.

Another thing I like about the book is the way the Introduction sets out a handy list of questions, which readers are asked to keep in their minds as they review the various chapters. This makes it clear, again, that there’s still a lot to be worked out, regarding the circumstances in which Open Source is a good solution to particular technical challenges.

It’s a bit unfair to try to summarise 500+ pages in just a few paragraphs, but the following short extracts give a good flavour in my view. From Michael Cusumano’s foreword:

Most of the evidence in this book suggests that Open Source methods and tools resemble what we see in the commercial sector and do not themselves result in higher quality. There is good, bad, and average software code in all software products. Not all Open Source programmers write neat, elegant software modules, and then carefully test as well as document their code. Moreover, how many “eyeballs” actually view an average piece of Open Source code? Not as many as Eric Raymond would have us believe.

After reading the diverse chapters in this book, I remain fascinated but still skeptical about how important Open Source will be in the long run and whether, as a movement, it is raising unwarranted excitement among users as well as entrepreneurs and investors…

The conclusion I reach … is that the software world is diverse as well as fascinating in its contrasts. Most likely, software users will continue to see a co-mingling of free, Open Source, and proprietary software products for as far as the eye can see. Open Source will force some software products companies to drop their prices or drop out of commercial viability, but other products and companies will appear. The business of selling software products will live on, along with free and Open Source programs.

And from Clay Shirky’s epilogue:

Open Source methods can create tremendous value, but those methods are not pixie dust to be sprinkled on random processes. Instead of assuming that Open Source methods are broadly applicable to the rest of the world, we can instead assume that they are narrowly applicable, but so valuable that it is worth transforming other kinds of work, in order to take advantage of the tools and techniques pioneered here.

If I have one complaint about the book, it is that it is already somewhat dated, despite having 2005 as its year of publication. Most of the articles appear to have been written a couple of years earlier than the publication date, and sometimes refer in turn to research done even before that. Five or six years is a long time in the fast-moving world of Open Source.

19 June 2008

Seven principles of agile architecture

Filed under: Agile, Symbian — David Wood @ 9:37 pm

Agile software methodologies (associated with names like “Scrum” and “eXtreme Programming”) have historically been primarily adopted within small-team projects. They’ve tended to fare less well on larger projects.

Dean Leffingwell’s book “Scaling Software Agility: Best practices for large enterprises” is the most useful one that I’ve found, on the important topic of how best to apply the deep insights of Agile methodologies in the context of larger development projects. I like the book because it’s clear (easy to read) as well as being profound (well worth reading). I liked the book so much that I invited Dean to come to speak at various training seminars inside Symbian. We’ve learned a great deal from what he’s had to say.

As an active practitioner who carries out regular retrospectives, Dean keeps up a steady stream of new blog articles that capture the evolution of his thinking. Recently, he’s been publishing articles on “Agile architecture”, including a summary article that lists “Seven principles of agile architecture“:

  1. The teams that code the system design the system
  2. Build the simplest architecture that can possibly work
  3. When in doubt, code it out
  4. They build it, they test it
  5. The bigger the system, the longer the runway
  6. System architecture is a role collaboration
  7. There is no monopoly on innovation.

Dean says he’s working on an article that pulls all these ideas together. I’m looking forward to it!

18 June 2008

The dangers of fragmentation

Filed under: fragmentation, Linux, Olswang, Open Source, Symbian — David Wood @ 8:46 am

My comments on mobile Linux fragmentation at the Handsets World event in Berlin were picked up by David Meyer (“Doubts raised over Android fragmentation“) and prompted a response by Andy Rubin, co-founder of Google’s Android team. According to the reports,

On a recent comment by Symbian’s research chief, David Wood, that Android would eventually end up fragmented, Rubin said it’s all part of the open source game.

Raising the example of a carrier traditionally having to wait for a closed platform developer to release the next version of its software to “enable” the carrier to offer new services, Rubin said carriers could just hire a developer internally to speed up that process without waiting any longer.

“If that fragmentation is what [Wood] is talking about, that’s great–let’s do it,” said Rubin.

Assuming these reports are accurate, they fall into the pattern of emphasising the short-term benefits of fragmentation, but de-emphasising the growing downstream compatibility problems of a platform being split into different variants. They make fragmentation sound like fun. But believe me, it’s not!

I noticed the same pattern while watching a panel on Open Source in Mobile at one of the Smartphone Summit events that take place the day before CTIA. The panel moderator posed the question, “Is fragmentation a good or bad thing?” The first few panellists were from consultancies and service providers. “Yes”, they said, smiling, “Fragmentation gives more opportunity for doing things differently – and gives us more work to do.” (I paraphrase, but only slightly.) Then came the turn of a VP from one of the phone manufacturers who have struggled perhaps more than most with the variations and incompatibilities between different mobile Linux software stacks. “Let’s be clear”, came the steely response, “fragmentation is a BAD thing, and we have to solve that problem”.

Luigi Licciardi, EVP of Telecom Italia, made similar remarks at the ‘Open source in Mobile’ conference in Madrid in September 2007. He said that one thing his network operator requires of any software platform it would consider using in its mobile phones is ‘backwards compatibility’ – in other words, a certain level of stability. (This sounds simple, but I know from my own experience that backwards compatibility requires deep skill and expertise in the midst of a rapidly changing marketplace.) Moreover, the software platform has to be responsive to the needs of individual operators: an operator needs to be able to go to a company and say “give us these changes and modifications”. He also said that the platform needs to be open to applications for network connections and end users, but closed to malware. In other words, it has to have a very good security story. (Incidentally, I believe Symbian is unique in having a very strong security story, with platform security built deep into the operating system.) Finally, he emphasised that “a fragmented Linux is of no interest to operators”.

This topic deserves more attention. Let me share some analysis from a transcript of a talk I gave at the Olswang “Open Source Summit” in London last November:

The point is that there is a great tendency in the mobile phone space for mobile Linux variants to fragment and split. This was first drawn to my attention more than two years ago by Avi Greengart who is a US-based analyst. He said that mobile Linux is the new Unix, meaning that despite the best intentions of all involved, it keeps on going its own separate ways.

So why is that happening? It is happening first of all because fragmentation is easy. This means that you can take the code and do whatever you like with it. But will these changes be brought back inside the main platform? Well I claim that, especially in a fast moving market such as smartphones, integration is hard. The changes tend to be incompatible with each other. Therefore it is my prediction that, on average, mobile Linux will fragment faster than it unifies.

It is true that there are many people who say it is very bad that there are all these different mobile Linux implementations. It is very bad because it has caused great problems for developers: they have to test against so many stacks. These people ask themselves, “Can’t we unify things?” And every few months there is a new group that is formed and says, in effect, “Right, we are going to make a better job of unifying mobile Linux than the last lot, they weren’t doing it fast enough, they weren’t doing it seriously enough, so we are going to change that.” But I see the contrary, that there is a greater tendency to fragment in this space than to unify, and here’s why.

It is always easier and quicker to release a device-specific solution than to invest the extra effort to put that functionality into a reusable platform, and then on into a device. In other words, when you are racing to market, when the market leaders are rushing away from you and leaving more and more clear blue water between you and them, it is much more tempting to say, “well I know we are supposed to be thinking in terms of platform, but just for now I am going to serve the needs of my individual product.”

Interestingly we had the same problem in the early days of Symbian. One of the co-founders of Symbian, Bill Batchelor, coined the phrase “the Symbian Paradox”, which is that we found it hard to put functionality into the platform, rather than just serve it out to eager individual customers via consultancy projects. But we gradually learned how to do that, and we gradually put more and more functionality into the platform, suitable for all customers, and therefore more and more customer projects benefited more widely.

So why is mobile Linux fragmenting in a way that perhaps open source in other areas isn’t fragmenting? First, it is an intense, fast moving industry. Symbian as the market leader, together with our customers, is bringing out significant new functionality every three or four months. So there is no time for other people to take things easy and properly invest in their platforms. They are tempted to cut corners – to the detriment of the platform.

Second, look at how some of these consortia are meant to work: various parties contributing source code. In some of their architecture diagrams, you might have one company in, say, Asia, contributing one chunk of software which is meant to be used by other companies the world over. Well, guess what happens in a real-life project? Another company – let’s say one trying to ship a Linux-based phone in America – picks up that software, and surprise, surprise, it doesn’t work: it fails to get FCC approval, it doesn’t meet the network operators’ needs, or there are bugs that only show up on the American networks. So what do they say? They say to the first group (the people out in Asia), “would you mind stopping what you are doing and coming to fix this; we are desperate for this fix for our software”. The group in Asia say, “well, we are very sorry, but we are struggling hard and we are behind as well; we would rather prioritise our own projects, if you don’t mind – shipping our own software, debugging it on different networks”.

At this point you may raise the question: isn’t open source meant to be somewhat magical, in that you can all just look at the code and fix it anyway, without needing the original author to come and fix it? But here we reach the crux of the matter. The problem is there is just too much code. These are vast systems: not just a few hundred lines, or even a few thousand lines of code, but hundreds of thousands or even millions of lines in these components, all interfacing together. So somebody looks at the code and thinks, “Oh gosh, it is very complicated”, and they look and they look and they look, and eventually they think, “Well, if I change this line of code, it will probably work” – but then, without realising it, they have broken something else. And the project takes ages and ages to progress.

Compare this to the following scenario: some swimmers are so good they can actually swim across the English Channel, all the way from England to France. Suppose they now say, “Yes, I have cracked swimming. What will I do next? I will swim all the way across the Atlantic – after all, it is just water, and I have already swum the Channel, so what is different about an ocean?” Well, that is the kind of difference between the places where open source has been doing well already and the broader oceans, with all the complications of full telephony in smartphones.

So what happens next in this situation? Eventually one company or another may come up with a fix to the defects they faced. But then they try and put it back in the platform, and the first company very typically disagrees, saying “I don’t like what you have done with our code, you have changed it in a very funny way, it isn’t the way we would have done it”. And so the code fragments – one version with the original company, and the other in the new company. That is how it ends up that mobile Linux fragments more than it unifies.

I say this firstly because I have contacts in the industry who lead me to believe that this is what happens. Secondly, we have the same pressures inside Symbian, but we have learned how to cope with them. We often get enhancements coming back from a customer engagement project which at first don’t quite fit into the main OS platform, but we have built up the highly specialised skills needed to do this integration.

As I mentioned, integration is hard. You need a company that is clearly responsible for it, and capable of doing it. This company needs to be independent and trustworthy, motivated not by any kind of ulterior motive but by occupying only one place in the value chain, doing one job only: creating large-scale customer satisfaction through volume sales of the platform.

16 June 2008

Anticipating the next ten years of smartphone innovation

Filed under: Essay contest, Symbian — David Wood @ 5:17 pm

This June, Symbian is celebrating its tenth anniversary. As someone who has been a core member of Symbian’s executive management team throughout these ten roller-coaster years, I’d like to share some of my personal reflections on the remarkable smartphone innovations that have taken place over that time – and, in that light, to consider what the next ten years may bring.

It was on 24 June 1998 that the formation of Symbian was announced to the world. The industry’s leading phone manufacturers were to cooperate to fund further development of the operating system known at the time as EPOC32 (this name dates from the inception of the OS, four years earlier, inside the UK-based PDA manufacturer Psion). The funding would enable the operating system to power numerous diverse models of advanced mobile phones – known, by virtue of their rich programmability, as “smartphones”. The news echoed far and wide. In time, the funding repaid investors handsomely: more than 200 million Symbian-based smartphones have already been sold, earning our customers substantial profits. It’s not just our direct customers that have benefited: a fertile ecosystem of partner companies is sharing in an ongoing technological and market success.

But there have been many road bumps along the way – and many surprises. Perhaps the biggest surprise was the degree of difficulty in actually bringing smartphones to market. We time and again under-estimated the complexity of the entire evolving smartphone software system – mistakenly thinking that it would take only around 12 months for significant new products to mature, whereas in reality the effort required was often considerably higher. To our dismay, numerous potential world-beating products were cancelled, on account of lengthy gestation periods. Or, when they did reach the market, their window of opportunity had passed, so their sales were disappointing. For each breakthrough Symbian-based phone that set the market alight, there were almost as many others that were shelved, or failed to live up to expectations. For this reason, incidentally, when I see commentators becoming highly excited about the prospects of possible new smartphone operating systems, I prefer to reserve my judgement. I know that, just because an industry giant is behind a new smartphone solution, it does not follow that early expectations will be translated into tangible unit sales. With ever-increasing feature requirements, operator specifications, and usability demands, smartphone software keeps on growing in complexity. It requires tremendous skill to integrate an entire software stack to meet a rapidly evolving target. If you pick a sub-optimal smartphone OS as your starting point, you’ll be storing up more trouble for yourself.

Another surprise was in some of the key characteristics of successful smartphones. In 1998, we failed to anticipate that most mobile phones would eventually contain a high quality digital camera. It was only after several years that we realised that the “top secret” (and therefore rarely discussed) features of forthcoming products from different customers were actually the same – namely an embedded camera application. More recently, the prevalence of smartphones with embedded GPS chips has also been a happy surprise. Mapping and location services are in the process of transforming mobile phones today, in a similar way to their earlier transformation by still and then video cameras. This observation strengthens my faith in the critical importance of openness in a smartphone operating system: the task of the OS provider isn’t to impose a single vision about the future of mobile phones, but to enable different industry players to experiment, as easily as possible, with bringing their different visions for mobile phones into reality.

As a measure of the progress with smartphone technology, let’s briefly compare the specs of two devices: the Ericsson R380, which was the first successful Symbian-powered smartphone (on sale from September 2000 – and a technological marvel in its day), and the recent best-seller, Nokia’s N95 8GB:

  • The R380 had a black and white touch screen, whereas the N95 screen has 16 million colours
  • The R380 ran circuit switched data over GSM (2G), whereas the N95 runs over HSDPA (3.5G)
  • The R380 supported WAP browsing, whereas the N95 has full-featured web browsing
  • The R380 had only a small number of built-in applications (PIM, plus some utilities and games), whereas the N95 includes GPS, Bluetooth, wireless LAN, FM radio, a 5 mega-pixel camera, and a set of built-in applications that’s far too long to list here!

Another telling difference between these two time periods is in the number of Symbian smartphone projects in progress (each with significant resources allocated to them). During the first years of Symbian’s existence, the number of different projects could be counted on the fingers of two hands. In contrast, at the end of March 2008, there were no fewer than 70 distinct smartphone models under development, from all the leading phone manufacturers. That’s a phenomenal pipeline of future market-leading products.

Although smartphones have come a long way in the last ten years, the next ten years are likely to witness even more growth and innovation:

  • Component prices will continue to fall – resulting in smartphones at prices to suit all pockets
  • Quality, performance, and robustness will continue to improve, meaning that the appeal of smartphones extends beyond technology aficionados and early adopters, into the huge mainstream audience of “ordinary users” for whom reliability and usability have pivotal importance
  • Word of mouth will spread the news that phones can have valuable uses other than voice calls and text messages: more and more users are discovering the joys of mobile web interaction, mobile push email, mobile access to personal and business calendars and information, and so on
  • The smartphone ecosystem will continue to devise, develop, and deploy interesting new services for smartphones, addressing all corners of human life and personal need
  • The pipeline of forthcoming new smartphone models will continue to strengthen.

It is no wonder that analysts talk about a time, not so far into the future, when there will be one billion smartphones in use around the world. The software that is at the heart of the majority of these devices will have a good claim to being the most widely used software on the planet. Symbian OS is in pole position to win that race, but, of course, nothing can be taken for granted.

Symbian’s understanding of the probable evolution of smartphones over the decade ahead is guided, first and foremost, by the extraordinary insight we gain from the trusted relationships we have built up and nurtured over many years with the visionaries, leaders, gurus, and countless thoughtful foot soldiers in our customer and partner companies. As the history of Symbian has unfolded, these relationships of “customer intimacy” have deepened and flourished: our customers and partners have seen that we treated their insights and ideas with respect and with due confidentiality – and that has prompted them to share even more of their thinking (their hopes and their fears) about the future of smartphones. In turn, this shapes our extensive roadmap of future enhancements to Symbian OS technology.

To provide additional checks on our thinking about future issues and opportunities for smartphones, Symbian is inaugurating an essay contest, which is open to entries from students at universities throughout the world. Up to ten essays will win a prize of £1000 each – essays need to be submitted before the end of September, and winners will be announced at the Symbian Smartphone Show in October. Essays should address the overall theme of “The next wave of smartphone innovation”. For details of how to enter the contest, see http://www.symbian.com/news/essaycontest/.

As a guide for potential entrants, Symbian has announced a set of six research sub-themes, which are also areas that Symbian believes deserve further investigation in universities or other research institutions:

  1. Device evolution / revolution through 2012-2015: The smartphones of the future are likely to be significantly different from those of today. Although today’s smartphones have tremendous capability, those reaching the market in 2012-2015 are likely to put today’s devices into the shade. What clues are there about the precise characteristics of these devices?
  2. Improved development and delivery methodologies: The dramatically increasing scale and complexity of smartphone development projects mean that these projects tend to become lengthy and difficult – posing significant commercial challenges.
  3. Success factors for mobile applications and mobile operating systems: What are the factors that significantly impact adoption of mobile software? What can be done to address the factors responsible for low adoption?
  4. Possible breakthrough applications and markets: The search for “killer apps” for smartphones continues. Are there substantial new smartphone application markets waiting to be unlocked by new features at the operating system level?
  5. Possible breakthrough technology improvements: Smartphone applications and services depend on underlying technology, which will come under mounting stress due to increased demands from data, processing, throughput, graphics, and so on.
  6. Improved university collaboration methods: What are the most effective and efficient ways for universities and Symbian to work together?

For lists of questions for each of these sub-themes, see www.symbian.com/news/essaycontest/topics/.

The evolution of the “smartphone” concept itself is particularly important. Whereas successful smartphones have mainly been portrayed so far as “phones first” and as “communications-centric devices”, they are nowadays increasingly being appreciated and celebrated for their computer capabilities. Some of our customers have already been emphasising to end users that their latest devices are “multimedia computers” or even instances of “computer 2.0”. Personally I prefer the name “iPC” (short for “inter-personal computers”) as a likely replacement for “smartphone”. Whereas Symbian’s main technology challenges in the last ten years tended to involve telephony protocols, our main technology challenges of the next ten years will tend to involve concepts from contemporary mainstream computing.

The scale of the future opportunity for iPCs dwarfs that for smartphones, just as the scale of the opportunity for smartphones dwarfed that of the original PDAs. But there’s nothing automatic or easy about this. We’ll have to work just as hard and just as smart in the next ten years, to solve some astonishingly difficult problems, as we’ve ever worked in the past. We’ll need all our wisdom and ingenuity to navigate some radical transitions in both market and technology. Here are just some of the ways in which devices of 2018 will differ from those of 2008.

  • From the WWW to the WWC: Nicholas Carr has written one of the great technology books of 2008. It’s called “The big switch: rewiring the world, from Edison to Google”. With good justification, Carr advances the phrase “world wide computer” to describe what the WWW (world wide web) is becoming: a hugely connected source of massive computing power. Terminals – both PCs and iPCs – are increasingly becoming like sockets, which connect into a grid that provides intelligent services as well as rich data. The consequences of this are hard to foretell, but there will be losers as well as winners. The local intelligence on the iPC will act as a smart portal into a much mightier intelligence that lives on the Internet.
  • Harvesting power from the environment: Efficient usage of limited battery power has been a constant hallmark of Symbian software. With ever greater bandwidth and faster processing speeds, the demands on batteries will become even more pressing. Future iPCs might be able to sidestep this challenge by drawing power from their environment. For example, the BBC recently reported how a contraption connected to a person’s knee can generate enough electricity, reasonably unobtrusively, from just one minute of walking, to power a present-day mobile phone for 30 minutes. Ultra-thin nano-materials that convert ambient light into electricity are another possibility.
  • New paradigms of usability: Given ever larger numbers of applications and greater functionality, no system of hierarchical menus is going to be able to provide users with an “intuitive” or “obvious” guide to using the device. It’s like the way the original listing “Jerry’s Guide to the World Wide Web” – a hierarchically organised set of links, which became known as “Yahoo” – was replaced by search engines as the generally preferred entry point to the ever richer variety of web pages. For this reason, UIs on iPCs look likely to become driven by intelligent front-end search engines, which respond to user queries by offering seamless choices between both offline and online functionality on their devices. Smart search will be supported by smart recommendations.
  • Short-cutting screen and keyboard: Another drawback of present day smartphones is the relatively fiddly nature of screen and keyboard. How much more convenient if the information in the display could somehow be conveyed directly to the biological brain of the user – and likewise if detectors of brain activity could convert thought patterns into instructions transmitted to the iPC. It sounds mind-boggling, and perhaps that’s what it is, in a literal sense. Nano-technology could make this a reality sooner than we imagine.

If some of these thoughts sparked your interest, I suggest that you mark the dates 21-22 October in your diary. That’s when Symbian will bring a host of ecosystem experts together, at the free-to-attend Symbian Smartphone Show, in London. It will be your chance to hear 10 keynote presentations from major industry figures and over 60 seminars led by marketplace experts. You’ll be able to network with over 4000 representatives from major handset vendors, content providers, network operators, and developers. To register, visit smartphoneshow.com. Much of the discussion will focus on the theme, “The next wave of smartphone innovation”. Your contributions will be welcome!

Accelerating Future

Filed under: Anissimov — David Wood @ 4:46 pm

Michael Anissimov writes a blog called “Accelerating Future”. I keep finding well-written articles in it. Here are just three examples:

  1. A recent piece giving an upbeat, well-supported argument in favour of the transformative potential of molecular nanotechnology – responding to (and step-by-step refuting) a more skeptical assessment by Richard Jones in the recent IEEE special report on The Singularity;
  2. Another recent piece that gently and thoughtfully chides some of the less careful advocates of Singularity-style thinking;
  3. An older, introductory piece with a fascinating and provocative list of technologies that have enormous potential to significantly enhance human life and human society.

For this reason, I’d recommend Michael’s blog as a great watering hole for anyone who (like me) is interested in the thoughtful development and application of technology to significantly enhance human mental and physical capabilities.

13 June 2008

It was twenty years ago, today

Filed under: Psion, Symbian — David Wood @ 7:34 am


13th June 1988 – twenty years ago today – was the day I started work at Psion. I arrived at the building at 17 Harcourt Street, with its unimpressive facade that led most visitors to wonder whether they had come to the wrong place. When the photo on the left was taken, the premises were used by Symbian, and a “Symbian” sign had been affixed outside. But on my first visits, I noticed no signage at all – although I later discovered the letters of the word “Psion” barely visible in faded yellow paint.

Unimpressive from the outside, the building looked completely different on the inside. Everyone joked about the “tardis effect” – since it seemed impossible for such a small exterior to front a considerably larger interior. In fact, Psion had constructed a set of offices running behind several of the houses in the street – but planning regulations had prevented any change in the house fronts themselves. Apparently, as grade two listed buildings, their original exteriors could not be altered. Or so the story went.

I worked under the guidance of Richard Harrison and Charles Davies on software to be included in a word processor application on Psion’s forthcoming “Mobile Computer” laptop device. My very first programming task was an optimised Find routine. After two weeks, I found myself thinking, “Don’t these people realise I’m capable of working harder?” But I soon had more than enough tough software tasks to handle, and I’ve spent the twenty years since then very far from a state of boredom. On the contrary, it’s been a roller-coaster adventure.
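As an aside for the curious: the point of an “optimised Find” is to avoid re-comparing the text character by character at every position. The actual Psion routine was never published, so the following is purely an illustrative sketch in C, using a Boyer-Moore-Horspool-style skip table (one plausible technique of that era, not a reconstruction of the original):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Illustrative optimised substring search (Boyer-Moore-Horspool).
   Returns a pointer to the first occurrence of pat in text, or NULL. */
static const char *find(const char *text, size_t tlen,
                        const char *pat, size_t plen)
{
    size_t skip[256];
    size_t i;

    if (plen == 0)
        return text;          /* empty pattern matches at the start */
    if (plen > tlen)
        return NULL;

    /* Default shift: a mismatch on an unknown character skips the
       whole pattern length. */
    for (i = 0; i < 256; i++)
        skip[i] = plen;
    /* Characters inside the pattern shift by their distance from the
       final position (the last character keeps the default). */
    for (i = 0; i + 1 < plen; i++)
        skip[(unsigned char)pat[i]] = plen - 1 - i;

    /* Examine the text character aligned with the end of the pattern,
       and jump ahead by its skip value on a mismatch. */
    for (i = 0; i + plen <= tlen;
         i += skip[(unsigned char)text[i + plen - 1]]) {
        if (memcmp(text + i, pat, plen) == 0)
            return text + i;
    }
    return NULL;
}
```

On typical prose the inner comparison runs rarely, so the search advances several characters per step rather than one, which is exactly the kind of win an “optimised” Find was after.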

Back in 1988, the software development team in Harcourt Street had fewer than 20 people in it. Eight years later, when Psion Software was formed as a separate business unit, there were 88 in the team – which, by that time, also occupied floors in the nearby Sentinel House. Two more years saw the headcount grow to 155 by the time Psion Software turned into Symbian (24 June 1998). Today, our headcount is around 1600. It’s a growth I could never have imagined during my first few years of work. Nor could I have imagined that descendants of the software from the venerable “Mobile Computer” (MC400) would be powering hundreds of millions of smartphones worldwide.

(You can read more about the long and interesting evolution of Psion’s software team in my book “Symbian for software leaders: principles of successful smartphone development projects”.)

