dw2

29 June 2008

The enhancement of the dream

Filed under: collaboration, Psion, Symbian Story — David Wood @ 12:49 pm

Did this week’s announcements about the Symbian Foundation herald “The end of the dream”, as Michael Mace suggests?

No matter how it works out in the long run, the purchase of Symbian by Nokia marks the end of a dream — the creation of a new independent OS company to be the mobile equivalent of Microsoft. Put a few beers into former Symbian employees and they’ll get a little wistful about it, but the company they talk about most often is Psion, the PDA company that spawned Symbian. …

What makes the Psion story different is that many of the Psion veterans had to leave the UK, or join non-UK companies, in order to become successful. Some are in other parts of Europe, some are in the US, and some are in London but working for foreign companies. This is a source of intense frustration to the Psion folks I’ve talked with. They feel like not only their company failed, but their country failed to take advantage of the expertise they had built.

I understand the thrust of this argument, but I take a different point of view. Rather than seeing this week’s announcement as “the end of the dream”, I see it as enabling “the enhancement of the dream”.

During the second half of 2007, Symbian’s executive team led a company-wide exercise to find a set of evocative, compelling words that captured what we called “The Symbian Story”. Some of the words we came up with were new, but the sentiment they conveyed was widely recognised as deriving from the deep historic roots of the company. Here are some extracts:

  • The world is seeing a revolution in smarter mobile devices
  • Convergence is real, happening now and coming to everyone, everywhere
  • Our mission is to be the OS chosen for the converged mobile world
  • No one else can seize it like we can
  • Our talented people, building highly complex software, have established a smartphone OS that leads the industry
  • We welcome rapid change as the way to stay ahead
  • We’ll work together to fulfill our potential to be the most widely used software on the planet, at the heart of an inspiring, exciting and rewarding success story.

This story – which we might also call a dream, or a vision – has by no means ended with this week’s announcements. On the contrary, these steps should accelerate the outcome that’s been in our minds for so long. There will be deeper collaboration and swifter innovation – making it even more likely that the Symbian platform will become in due course the most widely used on the planet.

But what about the dream that Symbian (or before it, Psion) could be “the next Microsoft”?

In terms of software influence, and setting de facto standards, this dream still holds. In terms of boosting the productivity and enjoyment of countless people around the world, through the careful deployment of smart software which we write, the dream (again) still holds. In terms of the founders of the company joining the ranks of the very richest people in the world, well, that’s a different story, but that fantasy was never anything like so high in our motivational hierarchy.

What about the demise of “British control” over the software? Does the acquisition of UK-based Symbian by Finland-based Nokia indicate yet another “oh what might have been” for the United Kingdom plc?

Once again, I prefer to take a different viewpoint. In truth, the software team long ago ceased to be dominated by home-grown British talent. The present Symbian Leadership Team has one person from Holland and one from Norway. 50% of the Research department that I myself head were born overseas (in Russia, Greece, and Canada). And during the Q&A with Symbian’s Nigel Clifford and Nokia’s Kai Oistamo that took place in London at all-hands meetings of Symbian employees on the 24th of June, questions were raised in almost every accent under the sun. So rather than Symbian being a British-run company, it’s better to see us as a global company that happens to be headquartered in London, and which benefits mightily from talent born all over the world.

Not only do we benefit from employees born worldwide, we also benefit (arguably even more) from our interactions with customers and partners the world over. As Symbian morphs over the next 6-9 months into a new constellation of organisations (including a part that works inside Nokia, and a part that has an independent existence as the Symbian Foundation), these collaborative trends should intensify. That’s surely a matter for celebration, not for remorse.

The five laws of fragmentation

Filed under: fragmentation, leadership, Open Source, Symbian Foundation — David Wood @ 9:42 am

As discussion of the potential for the Symbian Foundation gradually heats up, the topic of potential fragmentation of codelines keeps being raised. To try to advance that discussion, I offer five laws of fragmentation:

1. Fragmentation can have very bad consequences

Fragmentation means there’s more than one active version of a software system, such that add-on or plug-in software which works fine on one version fails to work well on the others. The bad consequences are the extra delays this causes to development projects.

Symbian saw this with the divergence between our v7.0 and v7.0s releases. (The little ‘s’ was variously said to stand for “special”, “strategic”, or “Series 60”.) UIQ phones at the time were based on our v7.0 release. However, the earliest Series 60 devices (such as the Nokia 7650 “Calypso”) had involved considerable custom modifications to the lower levels of the previous Symbian OS release, v6.1, and these turned out to be incompatible with v7.0. As a pragmatic measure, v7.0s was created: it had all of the new technology features introduced for v7.0, but kept application-level compatibility with v6.1.

On the one hand, v7.0s was a stunning success: it powered the Nokia 6600 “Calimero” which was by far the largest selling Symbian OS phone to that time. On the other hand, the incompatibilities between v7.0 and v7.0s caused no end of difficulties to developers of add-on or plug-in software for the phones based on these two versions:

  • The incompatibilities weren’t just at the level of UI – UIQ vs. Series 60
  • There were also incompatibilities at many lower levels of the software plumbing – including substantial differences in implementation of the “TSY” system for telephony plug-ins
  • There were even differences in the development tools that had to be used.

As a result, integration projects for new phones based on each of these releases ran into many delays and difficulties.

Symbian OS v8 was therefore designed as the “unification release”, seeking as much compatibility as possible with both of the previous branches of codeline. It made things considerably better – but some incompatibilities still remained.

As another example, I could write about the distress caused to the Symbian partner ecosystem by the big change in APIs moving from v8 to v9 (changes due mainly to the new PlatSec system for platform security). More than one very senior manager inside our customer companies subsequently urged us in very blunt language, “Don’t f****** break compatibility like that ever again!”

Looking outside the Symbian world, I note the following similar (but more polite) observation in the recent Wall Street Journal article, “Google’s Mobile-Handset Plans Are Slowed”:

Other developers cite hassles of creating programs while Android is still being completed [that is, while it is undergoing change]. One is Louis Gump, vice president of mobile for Weather Channel Interactive, which has built an Android-based mobile weather application. Overall, he says, he has been impressed by the Google software, which has enabled his company to build features such as the ability to look up the weather in a particular neighborhood.

But he says Weather Channel has had to “rewrite a few things” so far, and Google’s most recent revision of Android “is going to require some significant work,” he says.

2. Open Source makes fragmentation easier

If rule 1 was obvious (even though some open source over-enthusiasts seem to be a bit blind to it), rule 2 should be even clearer. Access to the source code for a system (along with the ability to rebuild it) makes it easier for people to change that system to serve their own development purposes. If the platform doesn’t meet a particular requirement of a product being built from it, hey, you can roll up your sleeves and change the platform. So the trunk stays on v2.0 (say) while your branch effectively defines a new version, v2.0s. That’s one of the beauties of open source. But it can also be the prelude to fragmentation and all the pain that ensues.
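The branching scenario above can be made concrete with a small version-control sketch. This is purely illustrative — the repository, file, version names and use of git are all my invention here, a stand-in for whatever configuration management a real platform might use:

```shell
#!/bin/sh
# Illustrative only: a platform "trunk" at v2.0, and a licensee's
# branch that effectively creates a divergent v2.0s.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "demo"
git config user.email "demo@example.com"
trunk=$(git symbolic-ref --short HEAD)

echo "telephony: generic TSY interface" > plumbing.txt
git add plumbing.txt
git commit -qm "Platform trunk: v2.0"
git tag v2.0

# The licensee branches the open platform and changes its internals...
git checkout -q -b v2.0s
echo "telephony: custom TSY for our hardware" > plumbing.txt
git commit -qam "Product branch: diverge from v2.0"

# ...while the trunk moves on independently.
git checkout -q "$trunk"
echo "telephony: generic TSY, reworked API" > plumbing.txt
git commit -qam "Trunk: rework for v2.1"

# Two live, diverging versions of the same component now exist.
git diff --stat v2.0s "$trunk" -- plumbing.txt
```

Nothing has gone wrong yet in this picture — each side is internally consistent. The fragmentation cost only shows up later, in the add-on software and integration projects that must now cope with both versions.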

The interesting question about open source is to figure out the circumstances in which fragmentation (also known as “forking”) occurs, and when it doesn’t.

3. Fragmentation can’t be avoided simply by picking the right contract

Various license contracts for open source software specify circumstances in which changes made by users of an open source platform need to be supplied back into the platform. Different contracts specify different conditions, and this can provoke lengthy discussions. However, for the moment, I want to sidestep these discussions and point out that contractual obligations, by themselves, cannot cure all fragmentation tendencies:

  • Even when users of a platform are obligated to return their changes to the platform, and do so, it’s no guarantee that the platform maintainers will adopt these changes
  • The platform maintainers may dislike the changes made by a particular user, and reject them
  • Although a set of changes may make good sense for one set of users, they may involve compromises or optimisations that would be unacceptable to other users of the platform
  • Reasons for divergence might include use of different hardware, running on different networks, the need to support specific add-on software, and so on.

4. The best guarantee against platform fragmentation is powerful platform leadership

Platform fragmentation has some similarities with broader examples of fragmentation. What makes some groups of people pull together for productive collaboration, whereas in other groups, people diverge following their own individual agendas? All societies need both cooperation and competition, but when does the balance tilt too far towards competition?

A portion of the answer is the culture of the society – as reflected in part in its legal framework. But another big portion of the answer is in the quality of the leadership shown in a society. Do people in the group believe that the leaders of the group can be relied on, to keep on “doing the right thing”? Or are the leaders seen as potentially misguided or incompetent?

Turning back to software, users of a platform will be likely to stick with the platform (rather than forking it in any significant way) if they have confidence that the people maintaining the trunk of the platform are:

  1. well-motivated, for the sake of the ecosystem as a whole
  2. competent at quickly and regularly making valuable new high quality releases that (again) meet the needs of the ecosystem as a whole.

Both the “character” (point 1) and the “competence” (point 2) are important here. As both Stephen Coveys (father and son) have repeatedly emphasised, you can’t get good trust without having both good character and good competence.

5. The less mature the platform, the more likely it is to fragment, especially if there’s a diverse customer base

If a platform is undergoing significant change, users can reason that it’s unlikely to coalesce any time soon into a viable new release, and they’ll be more inclined to carry on working with their own side version of the platform, rather than waiting for what could be a long time for the evolving trunk of the platform to meet their own particular needs.

This tendency is increased if there are diverse customers, who each have their own differing expectations and demands for the still-immature software platform.

In contrast, if the core of the platform is rock-solid, and changes are being carefully controlled to well-defined areas within the platform, customers will be more likely to want to align their changes with the platform, rather than working independently. Customers will reason that:

  • The platform is likely to issue a series of valuable updates, over the months and years ahead
  • If I diverge from the platform, it will probably be hard, later on, to merge the new platform release material into my own fork
  • That is, if I diverge from the platform, I may gain short-term benefit, but then I’ll likely miss out on all the good innovation that subsequent platform releases will contain
  • So I’d better work closely with the developers of the trunk of the platform, rather than allowing my team to diverge from it.
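That chain of reasoning can again be sketched with an invented git example (the names and scenario are hypothetical, and git is just a convenient illustration): a fork takes a short-term tweak, the trunk then innovates in the same area, and picking up the trunk’s improvement later is no longer a clean operation:

```shell
#!/bin/sh
# Illustrative only: the later cost of diverging from a healthy trunk.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.name  "demo"
git config user.email "demo@example.com"
trunk=$(git symbolic-ref --short HEAD)

echo "core engine v1" > core.txt
git add core.txt
git commit -qm "Platform release v1"

# Short-term gain: the fork tweaks the platform for one product.
git checkout -q -b product-fork
echo "core engine v1 + product tweak" > core.txt
git commit -qam "Fork: quick product fix"

# Meanwhile the trunk ships valuable innovation in the same area.
git checkout -q "$trunk"
echo "core engine v2" > core.txt
git commit -qam "Trunk: v2 innovation"

# Later, the fork tries to pick up the trunk's improvements.
git checkout -q product-fork
if git merge --no-edit "$trunk" >/dev/null 2>&1; then
  result=clean
else
  result=conflict       # both sides changed the same component
  git merge --abort
fi
echo "merge result: $result"   # prints: merge result: conflict
```

The merge conflicts because both sides modified the same component since their common ancestor — a toy version of the real situation, where the conflicting “component” is spread across thousands of files and months of parallel work.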

Footnote: Personally, I see the Symbian Foundation codeline as considerably more mature (tried and tested in numerous successful smartphones) than the codeline in any roughly similar mobile-phone-oriented Linux-based foundation. That’s why I expect the Symbian Foundation codeline to come under less fragmentation pressure. I also believe that Symbian’s well-established software development processes (such as careful roadmap management, compatibility management, system architecture review, modular design, overnight builds, peer reviews, and systematic and extensive regression testing) are set to transfer smoothly into this new and exciting world, maintaining our track record of predictable high-quality releases – further lessening the risks of fragmentation.

27 June 2008

Aubrey de Grey’s preposterous campaign to cure aging

Filed under: Methuselah, UKTA — David Wood @ 6:39 am


At first sight, Aubrey de Grey is clearly preposterous. Not only does he look like a relic of the Middle Ages, with his huge long beard, but his ideas on potentially “curing aging” within the present generation apparently run counter to many well-established principles of science, society, philosophy, and even religion. So it’s no surprise that his ideas arouse some fervent opposition. See for example a selection of the online comments on the article about him, “The Fight to End Aging Gains Legitimacy, Funding”, in today’s Wired:

Guess what, jackasses… we’re supposed to die! Look up the 2nd law of thermodynamics, you might learn something. We’ve even evolved molecular mechanisms to make sure our cells can’t reproduce beyond a certain point… check out “Hayflick limit” on Wikipedia. The stark biological reality is that we are here to pass along our genes to our progeny and the DIE. What the hell, wasn’t this settled back in the 1800s? Why are we debating this stupidity?

and

Aging and death is an evolutionary response to cancer in mammals. You’ll have to resolve the cancer issue (and remember kids – cancer is actually a whole lot of different but related diseases) before you can resolve the aging and death issue.

However, first appearances can be deceptive. I had my own first serious discussions with Aubrey at the “Tomorrow’s People” conference in Oxford in March 2006. Not only did I pose my own questions, I listened and observed with increasing admiration as Aubrey addressed issues posed by other audience members, and during many coffee breaks as the conference progressed. Later that year in August, at Transvision 2006 in Helsinki (by the way, as well as being home to the world’s leading mobile phone manufacturer, Finland hosts a disproportionate number of self-described transhumanists; perhaps both reflect an unusually pragmatic yet rational approach to life), I had the chance to continue these discussions and observations. I saw that Aubrey has good, plausible answers to his critics. You can find many of these answers on his extensive website.

Since that time, I’ve been keen to take the opportunity to watch Aubrey speak whenever it arises. Unfortunately, I’ll miss the conference that’s happening at UCLA this weekend: “AGING: The Disease – The Cure – The Implications” – which has a session this afternoon (4pm West Coast time) that’s open to the general public. However, I’m eagerly looking forward to some good debate at the July 12 meeting of the UKTA, at Birkbeck College in London, where Aubrey will be one of the speakers on the topic, “Living longer and longer yet healthier and healthier: realistic grounds for hope?”. (If you’re interested in attending, and you use Facebook, you can indicate your interest and RSVP here.)

As I’ve come to see it, addressing aging by the smart and imaginative uses of technology fits well with the whole programme of medicine (which constantly intervenes to prevent nature taking its “natural toll” on the human body). It also has some surprising potential cost-saving benefits, as aging-related diseases are responsible for a very significant part of national health expenditure. But that’s only the start of the argument. To help explore many of the technical byways of this argument, I strongly recommend Aubrey’s 2007 book, “Ending Aging: The rejuvenation breakthroughs that could reverse human aging in our lifetime”.

In terms of disruptive technology trends (some of which I study in my day job), this is about as big as it gets.

I’ll end by quoting from today’s Wired article:

“In perhaps seven or eight years, we’ll be able to take mice already in middle age and treble their lifespan just by giving them a whole bunch of therapies that rejuvenate them,” de Grey said. “Gerontologists all over, even my most strident critics, will say yes, Aubrey de Grey is right.”

Even as he imagines completing Gandhi’s fourth step, de Grey always keeps his eye on the ultimate prize — the day when the aging-as-disease meme reaches the tipping point necessary to funnel really big money into the field.

“The following day, Oprah Winfrey will be saying, aging is a disease and let’s fix it right now,” de Grey said.

25 June 2008

A tale of two meetings

Filed under: climate change, collaboration, Nuclear energy, SitP, solar energy, Spiked — David Wood @ 10:31 pm

In the past, I’ve enjoyed several meetings of the London Skeptics in the Pub (“SitP”). More than 100 people cram into the basement meeting space of Penderel’s Oak in Holborn, and listen to a speaker cover a contentious topic – such as alternative medicine, investigating the paranormal, or the “moon landings hoax”. What’s typically really enjoyable is the extended Q&A session in the second half of the meeting, when the audience often dissect the speaker’s viewpoint. Attendee numbers have crept up steadily over the nine years the group has existed. It’s little surprise that the group was voted into the Top Ten London Communities 2008 by Time Out.

Last night, the billed speaker was the renowned (many would say “infamous”) climate change denier, Fred Singer. The talk was advertised as follows:

Global Warming: Science, Economics, and some Moral Issues: What Al Gore Never Told You.

The science is settled: Evidence clearly demonstrates that carbon dioxide contributes insignificantly to Global Warming and is therefore not a ‘pollutant.’ This fact has not yet been widely recognized, and irrational Global Warming fears continue to distort energy policies and foreign policy. All efforts to curtail CO2 emissions, whether global, federal, or at the state level, are pointless — and in any case, ineffective and very costly. On the whole, a warmer climate is beneficial. Fred will comment on the vast number of implications.

Since this viewpoint is so far removed from consensus scientific thinking, I was hoping for a cracking debate. And indeed, the evening started well. Singer turned out to be a better speaker than I expected. Even though he’s well into his 80s, he spoke with confidence, courtesy, and good humour. And he had some interesting material:

  • A graph that seemed to show that global temperature has not been rising over the last ten years (even though atmospheric CO2 has incontrovertibly been rising over that time period)
  • A claim that all scientific models of atmospheric warming are significantly at variance with observed data (and therefore, we shouldn’t give these models much credence)
  • Suggestions that global warming is more strongly influenced by cosmic rays than by atmospheric CO2.

(The contents of the talk were similar to what’s in this online article.)

So I eagerly anticipated the Q&A. But oh, what a disappointment. I found myself more and more frustrated:

  • Quite a few of the audience members seemed incapable of asking a clear, relevant, concise question. Instead, they tended to go off on tangents, or went round and round in circles. (To my mind, the ability to identify and ask the key question, without distraction, is an absolutely vital skill for the modern age.)
  • Alas, the speaker could not hear the questions (being, I guess, slightly deaf from his advanced age); so they had to be repeated by the meeting moderator, who was standing at the front next to the speaker
  • The moderator often struggled to capture the question from what the audience member had said, so there were several iterations here
  • Then the speaker frequently took a LONG time to answer the question. (He was patient and polite, but he was also painstakingly SLOW.)

Result: lots of time wasted, in my view. No one landed anything like a decisive refutation of the speaker’s claims. There were lots of good questions that should have been asked, but time didn’t allow it. I also blamed myself, for not having done any research prior to the meeting (but I had been pretty busy on other matters for the last few days), and for not being able to do my usual trick of looking up information on my smartphone during a meeting (via Google, Wikipedia, etc) because network reception was very poor in the part of the basement where I was standing. In conclusion, although the discussion was fun, I don’t think we got anything like the best possible discussion that the speaker’s presentation deserved.

I mention all this, not just because I’m deeply concerned about the fearsome prospects of runaway global warming, but also because I’m interested in the general question of how to organise constructive debates that manage to reach to the heart of the matter (whatever the matter is).

As an example of a meeting that did have a much better debate, let me mention the one I attended this evening. It was hosted by Spiked, and was advertised as follows:

Nuclear power: what’s the alternative? The future of energy in Britain

As we seek to overcome our reliance on fossil fuels, what are the alternatives? Offshore turbines and wind farms are often cited as options but can they really meet more than a fraction of the UK’s energy needs? If not, is nuclear power a viable alternative? Public anxieties about nuclear plants’ safety, their susceptibility to terrorist attacks, and the problem of safely disposing of radioactive waste persist. But to what extent are these concerns justified? Is the real issue the public’s perception of both the risks and potential of nuclear energy? Ultimately, does nuclear energy, be it the promise of fusion or the reality of fission, finally mean we can stop guilt-tripping about energy consumption?

Instead of just one speaker, there were five, who had a range of well-argued but differing viewpoints. And the chairperson, Timandra Harkness (Director of Cheltenham Science Festival’s Fame Lab) was first class:

  • She made it clear that each speaker was restricted to 7 minutes for their opening speech (and they all kept to this limit, with good outcomes: focus can have wonderful results)
  • Then there were around half a dozen questions from the floor, asked one after the other, before the speaker panel were invited to reply
  • There were several more rounds of batched up questions followed by responses
  • Because of the format, the speakers had the option of ignoring the (few) irrelevant questions, and could concentrate on the really interesting ones.

For the record, I thought that all the speakers made good points, but Keith Barnham, co-founder of the solar cell manufacturing company Quantasol, was particularly interesting, with his claims for the potential of new generation photovoltaic concentrator solar cells. (This topic also featured in an engrossing recent Time article.) He recommended that we put our collective hope for near-future power generation “in the [silicon] industry that gave us the laptop and the mobile phone, rather than the industry that gave us Chernobyl and Sellafield”. (Ouch!) Advances in silicon have time and again driven down the prices of mobile phones; these benefits will also come quickly (Barnham claimed) to the new generation solar cells.

But the conclusion I want to draw is that the best way to ensure a great debate is to have a selection of speakers with complementary views, to insist on focus, and to chair the meeting particularly well. Yes, collaboration is hard – but when it works, it’s really worth it!

Footnote: the comparison between the Skeptics in the Pub meeting and the Spiked one is of course grossly unfair, since the former is run on a shoestring (there’s a £2 charge to attend) whereas the latter has a larger apparatus behind it (the entry charge was £10, payable in advance; and there’s corporate sponsorship from Clarke Mulder Purdie). But hey, I still think there are valid learnings from this tale of two different meetings – each interesting and a good use of time, but one ultimately proving much more satisfactory than the other.

24 June 2008

Symbian 2-0

Filed under: Nokia, Open Source, Symbian Foundation — David Wood @ 6:13 am

Months of planning culminated this morning with the announcement of an intended dramatic evolution for Symbian – an evolution that should decisively advance the Symbian platform toward its long-anticipated status of being the most widely used software platform on the planet.

The announcement of the Symbian Foundation comes on the very first day of the second decade of Symbian’s existence. It also sets the scene for a much wider participation by companies and individuals in the development and deployment of novel services and applications for all sorts of new and improved Symbian-powered mobile devices. Because this second decade of Symbian’s history should witness radically greater collaboration than before, the designation “Symbian 2.0” seems doubly apt.

Subject to the deal receiving regulatory approval, I envision a whole series of far-reaching changes taking place in the months and years ahead:

  • It will become possible for the best practices of Open Source Software to be applied in and around the software platform that is the most suited to smartphones
  • Closer working relations between personnel from Symbian and S60 teams will result in more efficient development, accelerating the rate at which the overall platform improves
  • The lower barriers to usage of the Symbian platform should mean that the number of customers and partners will rocket
  • The unification of the formerly separate UI systems will further increase the attractiveness of the new platform
  • The platform will be royalty free – which will be another factor to boost usage
  • Because of increased adoption of the platform, the ecosystem will also grow, through the OS-ES volume-value virtuous cycle mechanism
  • For all these reasons, smartphone innovation should jump forward in pace, to the potential benefit of all participants in the ever expanding, ever richer, converged mobile industry
  • Customers and partners alike – both newcomers and old-timers – will be on the lookout for fresh options for differentiation and support
  • In short, there will be lots of new opportunities for people with knowledge of the Symbian platform.

Great credit is due to Symbian’s shareholders, and especially to Nokia, for enabling and driving this bold and powerful initiative.

Of course, with such a large change, there’s considerable uncertainty about how everything will work out. Many people will be unsure exactly where they, personally, will end up in this new world. Lots of details remain to be decided. But the basic direction is clear: participants in the Symbian 2.0 ecosystem will be part of a much bigger game than before. It’s going to be exciting – and no doubt somewhat scary too. As Symbian’s first CEO, Colly Myers, used to say, “Let’s rock and roll!”

Postscript: For a clear rationale of some key aspects of the Symbian Foundation plan, take a look at what my Symbian colleague John Forsyth has to say, here.

23 June 2008

Fragmentation is easy, integration is hard

Filed under: Android, fragmentation, integration — David Wood @ 2:13 pm

The Wall Street Journal reports today that “Google’s Mobile-Handset Plans Are Slowed”. The Inquirer picks up the story and adds a few choice morsels of its own: “Depressing news as Google’s Android delayed”:

However, life’s little crises just kept getting the Android down and now apparently some mobile network operators like Sprint Nextel, have abandoned any attempt to get an Android on the market until 2009. This is purportedly because the majority of Google’s attention and resources have been going to Sprint’s competitor T-Mobile USA, who still hope to have an Android mobile out by the end of Q4. We have it on good authority (from un-named sources of course) that Sprint actually asked Google “Do you want me to sit in the corner and rust, or just fall apart where I’m standing?”…

Director of mobile platforms at Google, Andy Rubin, gloomily noted that trying to develop software while the company’s irritating partners kept pushing for new features, was a time-consuming task. “This is where the pain happens”, he sighed.

I recognise this pain. It’s something that has occurred many times during Symbian’s history. That’s why I’ve emphasised a dilemma facing Android: Fragmentation is easy, but integration is hard. Coping with multiple forceful customers at the same time, while your codebase is still immature, is a really tough challenge. Glitzy demos of v2 features don’t help matters: they drive up interest that needs to be deflated, as you have to explain to customers that, no, these features aren’t ready to include in the software package for their next phones, despite looking brilliant on YouTube. Instead, the focus needs to go on the hard, hard task of integration.

22 June 2008

Reasons why humans will be smarter by 2020

Filed under: Flynn effect, IA, smartphones — David Wood @ 9:51 pm

Alvis Brigis has published a provocative article on FutureBlogger, “How smart will humans be by 2020?” The article looks at technology and social trends which can provide so-called IA – “Intelligence Amplification”. (IA is sometimes expanded, instead, to “Intelligence Augmentation”.)

Alvis produces a compelling list of intelligence-amplifying trends:

  • Widening bandwidth (Faster internet connections, pervasive WiFi…)
  • Growing global information
  • Evolving social media (including Wikipedia…)
  • Video-to-video chat
  • Evolving 3D and immersive media (including Second Life, Google Earth, and GTA4)
  • Better search
  • New interface products (including touchscreens, mini-projectors, haptic feedback…)
  • Improved portable battery power
  • Time-savers (such as robots and more efficient machines)
  • Translators (akin to the Babelfish of HHGG)
  • Rising value of attention (including more relevant targeted ads)
  • Direct brain-computer interfaces
  • Health benefits (from advances in nanotech, biology, pharma, etc).

One reason I’m personally very positive about smartphones is that I believe in their potential to “make their users smarter”. I’ve often said, only half-joking, that I view my Psion Series 5mx PDA as my “second brain”, and my current smartphone as my “third brain”. Convenient and swift access to information from any location, whenever the need arises, is only part of the benefit. The organiser capabilities can play a big role too – as does the connectivity to people and communities (rather than just to information stores). So in my mind, the potential of smartphones includes people who increasingly:

  • Know what’s important
  • Know what they want to achieve in life
  • Know how to get it.

PS For wider thoughts about the reasons for improved intelligence, see this recent interview by Alvis Brigis of James Flynn (the discoverer of what has come to be known as the “Flynn effect”).

PPS I’d like to include the FutureBlogger posts in my blogroll (on the right), but every time I feed Blogger the URL http://www.memebox.com/futureblogger to include in the blogroll, it gets changed into a link to a different blog. Does anyone know how to fix this?

21 June 2008

Open minds about open source

Filed under: Open Source — David Wood @ 3:55 pm

There’s been a surprising amount of heat (not to mention vitriol) in the responses to recent blog postings from Ari Jaaksi of Nokia on the topic of the potential mutual benefits of a constructive encounter between Open Source developers and the companies who make money from mobile telephony.

Ari’s message (in “Some learning to do?”, and again in “Good comments from Bruce”) is that there’s a need for two-way learning, and for open minds. To me, that seems eminently sensible. This topic has so many angles (and is changing so quickly) that we shouldn’t expect anyone to have a complete set of answers in place. But quite a few online responses take a different stance, basically saying that there’s nothing for Open Source developers to learn – they know it all already – and that any movement must come from the companies in the mobile phone business. The mountain will have to come to Mohammed.

At the same time as I’ve been watching that debate (with growing disbelief), I’ve been thumbing my way through the 500+ page book “Perspectives on Free and Open Source Software”. This book contains 24 chapters (all written by different authors), one introduction (by the joint editors of the book: Joseph Feller, Brian Fitzgerald, Scott Hissam, and Karim Lakhani), one foreword (by Michael Cusumano), and one epilogue (by Clay Shirky). The writers range in their attitudes toward Open Source, all the way from strong enthusiasm to considerable scepticism. They’ve all got interesting things to say. But they have several things in common (which sets them apart from the zealotry in the online blog responses):

  • An interest in finding and then examining data and facts
  • A willingness to engage in dialog and debate
  • A belief that Open Source is now well established, and won’t be disappearing – but also a belief that this observation is only the beginning of the discussion, rather than the end.

Another thing I like about the book is the way the Introduction sets out a handy list of questions, which readers are asked to keep in their minds as they review the various chapters. This makes it clear, again, that there’s still a lot to be worked out, regarding the circumstances in which Open Source is a good solution to particular technical challenges.

It’s a bit unfair to try to summarise 500+ pages in just a few paragraphs, but the following short extracts give a good flavour, in my view. From Michael Cusumano’s foreword:

Most of the evidence in this book suggests that Open Source methods and tools resemble what we see in the commercial sector and do not themselves result in higher quality. There is good, bad, and average software code in all software products. Not all Open Source programmers write neat, elegant software modules, and then carefully test as well as document their code. Moreover, how many “eyeballs” actually view an average piece of Open Source code? Not as many as Eric Raymond would have us believe.

After reading the diverse chapters in this book, I remain fascinated but still skeptical about how important Open Source will be in the long run and whether, as a movement, it is raising unwarranted excitement among users as well as entrepreneurs and investors…

The conclusion I reach … is that the software world is diverse as well as fascinating in its contrasts. Most likely, software users will continue to see a co-mingling of free, Open Source, and proprietary software products for as far as the eye can see. Open Source will force some software products companies to drop their prices or drop out of commercial viability, but other products and companies will appear. The business of selling software products will live on, along with free and Open Source programs.

And from Clay Shirky’s epilogue:

Open Source methods can create tremendous value, but those methods are not pixie dust to be sprinkled on random processes. Instead of assuming that Open Source methods are broadly applicable to the rest of the world, we can instead assume that they are narrowly applicable, but so valuable that it is worth transforming other kinds of work, in order to take advantage of the tools and techniques pioneered here.

If I have one complaint about the book, it is that it is already somewhat dated, despite having 2005 as its year of publication. Most of the articles appear to have been written a couple of years earlier than the publication date, and sometimes refer in turn to research done even before that. Five or six years is a long time in the fast-moving world of Open Source.

19 June 2008

Seven principles of agile architecture

Filed under: Agile, Symbian — David Wood @ 9:37 pm

Agile software methodologies (associated with names like “Scrum” and “eXtreme Programming”) have historically been primarily adopted within small-team projects. They’ve tended to fare less well on larger projects.

Dean Leffingwell’s book “Scaling Software Agility: Best Practices for Large Enterprises” is the most useful one I’ve found on the important topic of how best to apply the deep insights of Agile methodologies in the context of larger development projects. I like the book because it’s clear (easy to read) as well as profound (well worth reading). I liked it so much that I invited Dean to come to speak at various training seminars inside Symbian. We’ve learned a great deal from what he’s had to say.

As an active practitioner who carries out regular retrospectives, Dean keeps up a steady stream of new blog articles that capture the evolution of his thinking. Recently, he’s been publishing articles on “Agile architecture”, including a summary article that lists “Seven principles of agile architecture“:

  1. The teams that code the system design the system
  2. Build the simplest architecture that can possibly work
  3. When in doubt, code it out
  4. They build it, they test it
  5. The bigger the system, the longer the runway
  6. System architecture is a role collaboration
  7. There is no monopoly on innovation.

Dean says he’s working on an article that pulls all these ideas together. I’m looking forward to it!

18 June 2008

The dangers of fragmentation

Filed under: fragmentation, Linux, Olswang, Open Source, Symbian — David Wood @ 8:46 am

My comments on mobile Linux fragmentation at the Handsets World event in Berlin were picked up by David Meyer (“Doubts raised over Android fragmentation“) and prompted a response by Andy Rubin, co-founder of Google’s Android team. According to the reports,

On a recent comment by Symbian’s research chief, David Wood, that Android would eventually end up fragmented, Rubin said it’s all part of the open source game.

Raising the example of a carrier traditionally having to wait for a closed platform developer to release the next version of its software to “enable” the carrier to offer new services, Rubin said carriers could just hire a developer internally to speed up that process without waiting any longer.

“If that fragmentation is what [Wood] is talking about, that’s great–let’s do it,” said Rubin.

Assuming these reports are accurate, they fall into the pattern of emphasising the short-term benefits of fragmentation, but de-emphasising the growing downstream compatibility problems of a platform being split into different variants. They make fragmentation sound like fun. But believe me, it’s not!

I noticed the same pattern while watching a panel on Open Source in Mobile at one of the Smartphone Summit events that take place the day before CTIA. The panel moderator posed the question, “Is fragmentation a good or bad thing?” The first few panellists were from consultancies and service providers. “Yes”, they said, smiling, “Fragmentation gives more opportunity for doing things differently – and gives us more work to do.” (I paraphrase, but only slightly.) Then came the turn of a VP from one of the phone manufacturers who have struggled perhaps more than most with the variations and incompatibilities between different mobile Linux software stacks. “Let’s be clear”, came the steely response, “fragmentation is a BAD thing, and we have to solve that problem”.

Luigi Licciardi, EVP of Telecom Italia, made similar remarks at the ‘Open source in Mobile’ conference in Madrid in September 2007. He said that one thing his network operator needs from any software platform it would consider using in its mobile phones is ‘backwards compatibility’ – in other words, a certain level of stability. (This sounds simple, but I know from my own experience that backwards compatibility requires deep skill and expertise in the midst of a rapidly changing marketplace.) Moreover, the software platform has to be responsive to the needs of individual operators: an operator needs to be able to go to a company and say “give us these changes and modifications”. He also said that the platform needs to be open to applications for network connections and end users, but closed to malware – in other words, it has to have a very good security story. (Incidentally, I believe Symbian uniquely has a very strong security story, with platform security built deep into the operating system.) Finally, he emphasised that “a fragmented Linux is of no interest to operators”.

This topic deserves more attention. Let me share some analysis from a transcript of a talk I gave at the Olswang “Open Source Summit” in London last November:

The point is that there is a great tendency in the mobile phone space for mobile Linux variants to fragment and split. This was first drawn to my attention more than two years ago by Avi Greengart, a US-based analyst. He said that mobile Linux is the new Unix, meaning that despite the best intentions of all involved, it keeps going its own separate ways.

So why is that happening? It is happening first of all because fragmentation is easy. This means that you can take the code and do whatever you like with it. But will these changes be brought back inside the main platform? Well I claim that, especially in a fast moving market such as smartphones, integration is hard. The changes tend to be incompatible with each other. Therefore it is my prediction that, on average, mobile Linux will fragment faster than it unifies.

It is true that there are many people who say it is very bad that there are all these different mobile Linux implementations. It is very bad because it has caused great problems for developers: they have to test against so many stacks. These people ask themselves, “Can’t we unify things?” And every few months there is a new group that is formed and says, in effect, “Right, we are going to make a better job of unifying mobile Linux than the last lot, they weren’t doing it fast enough, they weren’t doing it seriously enough, so we are going to change that.” But I see the contrary, that there is a greater tendency to fragment in this space than to unify, and here’s why.

It is always easier and quicker to release a device-specific solution than to invest the extra effort to put that functionality into a reusable platform, and then on into a device. In other words, when you are racing to market, when the market leaders are rushing away from you and leaving more and more clear blue water between you and them, it is much more tempting to say, “well I know we are supposed to be thinking in terms of platform, but just for now I am going to serve the needs of my individual product.”

Interestingly we had the same problem in the early days of Symbian. One of the co-founders of Symbian, Bill Batchelor, coined the phrase “the Symbian Paradox”, which is that we found it hard to put functionality into the platform, rather than just serve it out to eager individual customers via consultancy projects. But we gradually learned how to do that, and we gradually put more and more functionality into the platform, suitable for all customers, and therefore more and more customer projects benefited more widely.

So why is mobile Linux fragmenting in a way that perhaps open source in other areas isn’t fragmenting? First, it is an intense, fast moving industry. Symbian as the market leader, together with our customers, is bringing out significant new functionality every three or four months. So there is no time for other people to take things easy and properly invest in their platforms. They are tempted to cut corners – to the detriment of the platform.

Second, look at how some of these consortia are meant to work: they involve various companies contributing source code. In some of their architecture diagrams, you might have one company, in say Asia, contributing one chunk of software that is meant to be used by other companies the world over. Well, guess what happens in a real-life project? Another company – let’s say one trying to ship a Linux-based phone in America – picks up that software, and surprise, surprise, it doesn’t work: it fails to get FCC approval, it doesn’t meet the network operators’ needs, or there are bugs that only show up on the American networks. So what do they say? They say to the first group (the people out in Asia), “Would you mind stopping what you are doing and coming to fix this? We are desperate for this fix for our software.” The group in Asia says, “Well, we are very sorry, but we are struggling hard and we are behind as well; we would rather prioritise our own projects, if you don’t mind – shipping our own software and debugging it on different networks.”

At this point you may raise the question: isn’t open source meant to be somewhat magical, in that anyone can just look at the code and fix it, without needing the original authors to do so? But here we reach the crux of the matter. There is just too much code. These are vast systems – not a few hundred lines, or even a few thousand, but hundreds of thousands or even millions of lines of code in these components, all interfacing together. So somebody looks at the code and thinks, “Oh gosh, it is very complicated”, and they look and they look and they look, and eventually they think, “Well, if I change this line of code, it will probably work” – but then, without realising it, they have broken something else. And the project takes ages and ages to progress.

Compare this to the following scenario. Some swimmers are so good that they can swim across the English Channel, all the way from England to France. Suppose one of them now says, “Yes, I have cracked swimming. What shall I do next? I will swim all the way across the Atlantic – after all, I have already swum the Channel, so what is different about an ocean?” That is the kind of difference between the waters where open source has already been doing well and the broader oceans, with all the complications of full telephony in smartphones.

So what happens next in this situation? Eventually one company or another may come up with a fix to the defects they faced. But then they try to put it back into the platform, and the first company very typically disagrees, saying “I don’t like what you have done with our code; you have changed it in a very funny way; it isn’t the way we would have done it”. And so the code fragments – one version staying with the original company, and another with the new company. That is how mobile Linux ends up fragmenting more than it unifies.
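This fork-then-fail-to-merge dynamic can be sketched in a few version-control commands. It is a toy illustration only (assuming git is available; the header file and the two “licensee” changes are invented for the example): forking the platform is a one-line operation, while bringing the divergent variants back together stops dead at a conflict that a human integrator must resolve.

```shell
# Toy sketch: forking is cheap, re-integration is not.
set -e
export GIT_AUTHOR_NAME=demo GIT_AUTHOR_EMAIL=demo@example.com
export GIT_COMMITTER_NAME=demo GIT_COMMITTER_EMAIL=demo@example.com
demo_dir="$(mktemp -d)"
cd "$demo_dir"

# The shared platform, with one (invented) API header.
git init -q platform
( cd platform \
  && echo "void connect(void);" > api.h \
  && git add api.h && git commit -qm "platform v1" )

# Fragmenting is easy: each licensee clones and ships its own variant.
git clone -q platform licensee_a
( cd licensee_a && echo "void connect(int retries);" > api.h \
  && git commit -qam "A: retry support for our device" )
git clone -q platform licensee_b
( cd licensee_b && echo "void connect(char *apn);" > api.h \
  && git commit -qam "B: APN support for our operator" )

# Integration is hard: bringing both variants home conflicts.
cd platform
git fetch -q ../licensee_a
git merge -q FETCH_HEAD -m "take A"        # first variant merges cleanly
git fetch -q ../licensee_b
git merge FETCH_HEAD -m "take B" \
  || echo "conflict: manual integration work needed"
```

In a toy repository the conflict is one line; in a platform of millions of lines, with every component interfacing with the others, that manual integration work is exactly what makes unification slower than fragmentation.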

I say this firstly because I have contacts in the industry who lead me to believe that this is what happens. Secondly, we face the same pressures inside Symbian, but we have learned how to cope with them. We often get enhancements coming back from a customer engagement project which at first don’t quite fit into the main OS platform, but we have built up the highly specialised skills needed to do this integration.

As I mentioned, integration is hard. You need a company that is clearly responsible for it, and capable of doing it. That company needs to be independent and trustworthy, motivated not by any kind of ulterior motive but by occupying just one place in the value chain and doing one job only: creating large-scale customer satisfaction through volume sales of the platform.
