dw2

16 September 2008

The practicalities of producing open source software

Filed under: books, Open Source — David Wood @ 10:53 am

I’ve been reading books and articles about open source software since at least June 1998 – the first time the famous phrase “The cathedral and the bazaar” (*) made an appearance in Symbian’s principal internal discussion database (which was, at the time, still called “Psion Software General”). However, I remain keen to keep on testing out my thinking and understanding of open source issues.

After all, there are many different angles to this subject. There’s a potentially huge upside to taking full advantage of the best principles of open source methods – but there are also many risks from applying some of these ideas in a misguided manner.

For that reason, I continue to pick up books on open source software and think hard about what they say. Recently, I accepted the advice of several of my Symbian colleagues, and started reading “Producing open source software” by Karl Fogel.

I’m very pleased that I listened to that advice. In my view, this book is in a class of its own.

It has the great merit of being an intensely practical book. It’s clear from numerous examples in the book that the author has extensive real-world experience at the heart of development teams of open source projects that are both significant and successful – including CVS and Subversion.

Some of the chapters contain ideas that have been covered before – like the history of free and open source, advice on how to choose a licence, and aspects of the technical infrastructure of open source projects. However, other chapters delve into material that (to my knowledge) has been covered much less often:

  • Social and political infrastructure of successful open source projects;
  • “Money”: Working in open source projects in which companies are sponsoring parts of the work;
  • “Money can’t buy you love” – excellent advice on how to avoid corporate sponsorship dampening the enthusiasm of volunteer contributors;
  • Communications – including how to communicate with “difficult people”;
  • Packaging, releasing, and daily development – including dealing with different kinds of codeline, and the special considerations that apply to integration and to making releases;
  • Managing volunteers – including the typical roles that probably need to be filled in projects.

Several times while reading a chapter, I found myself thinking: “Yes, this is highly practical material – and interesting too. But I can see there’s still another 20+ pages in this chapter. What else is there to say about this subject?” But then as I turned over more pages, I thought, “Oh yes, this is something that really belongs here too – it’s what happens in real projects!”

The writing style was pleasant and clear throughout – with an engaging mix of actual examples (both good and bad) and a discussion of the broader lessons to be drawn from these examples.

My recommendation is that project teams should regard this book as a kind of “bible” – it’s something that should be regularly dipped into, and the many salient points shared and debated in group discussion. The lessons will make sense on several levels – some apply during early phases of projects, and others as the project becomes more complex.

(Many of the same principles apply in non-open source projects too, by the way! I felt lots of resonance with my own observations over the years about successful software projects inside the “community source” world of Symbian OS development.)

Perhaps the most sobering part of the book is contained in its introduction:

Most free projects fail…

… it’s impossible to put a precise number on the failure rate. But anecdotal evidence from over a decade in open source, some casting around on SourceForge.net, and a little Googling all point to the same conclusion: the rate is extremely high, probably on the order of 90–95%. The number climbs higher if you include surviving but dysfunctional projects: those which are producing running code, but which are not pleasant places to be, or are not making progress as quickly or as dependably as they could.

Happily, the introduction continues:

This book is about avoiding failure. It examines not only how to do things right, but how to do them wrong, so you can recognize and correct problems early…

If that whets your appetite, note that you can read the entire book online, by following the links above. Alternatively, check out the reviews on Amazon.com.

(*) Footnote: The June 1998 “Psion Software General” discussion contained a link that is, sadly, now dead. I’d like, belatedly, to thank Symbian lead software engineer Joe Branton for coming to my room, on several occasions, and encouraging me to pay more attention to the ideas in The Cathedral and the Bazaar.

3 September 2008

Restrictions on the suitability of open source?

Filed under: Open Source, security, usability — David Wood @ 8:56 am


Are there restrictions on the suitability of open source methods? Are there kinds of software for which closed source development methods are inherently preferable and inherently more likely to succeed?

These questions are part of a recent discussion triggered by the posting “Different ways and paradigms” by Nokia’s Ari Jaaksi, which looked for reasons why various open source software development methods might be applicable to some kinds of project, but not to others. As Ari asks,

“Why would somebody choose a specific box [set of methods] for their products?”

One respondent suggested that software with high security and high quality criteria should be developed using closed source methods rather than using open source.

Another stated that,

I firmly believe ‘closed’ source is best route for targeting consumers and gaining mass appeal/ acceptance.

That brings me back to the question I started with. Are there features of product development – perhaps involving security and robustness, or perhaps involving the kinds of usability that are important to mainstream consumers – to which open source methods aren’t suited?

Before answering that, I have a quick aside. I don’t believe that open source is ever a kind of magic dust that can transform a failing project into a successful project. Adopting open source, by itself, is never a guarantee of success. As Karl Fogel says in the very first sentence of Chapter 1 in his very fine book “Producing open source software: how to run a successful free software project”,

“Most free projects fail.”

Instead, you need to have other project fundamentals right, before open source is likely to work for you. (And as an aside to an aside, I believe that several of the current attempts to create mobile phone software systems using open source methods will fail.)

But the situation I’m talking about is when other project fundamentals are right. In that case, my question becomes:

Are there types of software for which an open source approach will be at odds with the other software disciplines and skills (eg security, robustness, usability…) that are required for success in that arena?

In one way, the answer is trivial. The example of Firefox resolves the debate (at least for some parameters). Firefox shows that open source methods can produce software that scores well on security, robustness, and usability.

But might Firefox be a kind of unusual exception – or, as one of the anonymous respondents to Ari Jaaksi’s blog put it, “an outlier”? Alternatively – as I myself believe – is Firefox an example of a new trend, rather than an irrelevant outlier to a more persistent trend?

Regarding usability, it’s undeniable that open source software methods grew up in environments in which developers didn’t put a high priority on ease-of-use by consumers. These developers were generally writing software for techies and other developers. So lots of open source software has indeed scored relatively poorly, historically, on usability.

But history needn’t determine the future. I’m impressed by the analysis in the fine paper “Usability and Open Source Software” by David M. Nichols and Michael B. Twidale. Here’s the abstract:

Open source communities have successfully developed many pieces of software although most computer users only use proprietary applications. The usability of open source software is often regarded as one reason for this limited distribution. In this paper we review the existing evidence of the usability of open source software and discuss how the characteristics of open-source development influence usability. We describe how existing human-computer interaction techniques can be used to leverage distributed networked communities, of developers and users, to address issues of usability.

Another very interesting paper, in a similar vein, is “Why Free Software has poor usability, and how to improve it” by Matthew Paul Thomas. This paper lists no fewer than 15 features of open source culture which tend to adversely impact the usability of software created by that culture:

  1. Weak incentives for usability
  2. Few good designers
  3. Design suggestions often aren’t invited or welcomed
  4. Usability is hard to measure
  5. Coding before design
  6. Too many cooks
  7. Chasing tail-lights
  8. Scratching their own itch
  9. Leaving little things broken
  10. Placating people with options
  11. Fifteen pixels of fame
  12. Design is high-bandwidth, the Net is low-bandwidth
  13. Release early, release often, get stuck
  14. Mediocrity through modularity
  15. Gated development communities.

As Thomas says, “That’s a long list of problems, but I think they’re all solvable”. I agree. The solutions he gives in his article are good starting points (and are already being adopted in some projects). In any case, many of the same problems impact closed-source development too.

In short, once usability issues are sufficiently understood by a group of developers (whether they are adopting open source or closed source methods), there’s no inherent reason why the software they create has to embody poor usability.

So much for usability. How about security? Here the situation may be a little more complex. The online book chapter “Is Open Source Good for Security?” by David Wheeler is one good starting point. Here’s the final sentence in that chapter:

…the effect on security of open source software is still a major debate in the security community, though a large number of prominent experts believe that it has great potential to be more secure

The complication is that, if you start out with software that is closed source, and then make it open source, you might get the worst of both worlds. Incidentally, that’s one reason why the source code in the Symbian Platform isn’t being open-sourced in its entirety, overnight, on the formation (subject to regulatory approval) of the Symbian Foundation. It will take some time (and the exercise of a lot of deep skill), before we can be sure we’re going to get the best of both worlds, rather than the worst of both worlds.

31 August 2008

Intellectual property and open source

Filed under: books, GPL, Intellectual property, Open Source — David Wood @ 7:17 pm

I’ve just finished reading a third book, within two months, on the topic of open source licensing. The three books are:

  1. Heather Meeker’s “The Open Source Alternative: Understanding Risks and Leveraging Opportunities” – which I reviewed here;
  2. Lawrence Rosen’s “Open Source Licensing: software freedom and intellectual property law” – which I reviewed here;
  3. Van Lindberg’s “Intellectual property and open source: a practical guide to protecting code”.

My headline summary is that all three books are well worth reading. They overlap to an extent, but they come at their shared subject from very different viewpoints, so each book has lots of good material that you won’t find in the others.

Van Lindberg targets his book at software engineers. He uses many analogies between legal concepts and deeply technical software engineering concepts. For example (to give a flavour of many of the clever pieces of writing in the book):

“One way to think about private goods is to analogize them to locks or mutexes in a multithreaded program. A number of different threads may want to use a protected resource, but control of the lock around the resource is rivalrous…”
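
Van Lindberg’s analogy translates directly into code. Here’s a minimal C++ sketch of my own (an illustration, not from the book): while one thread holds the lock, the resource is unavailable to every other thread – exactly the rivalry he describes.

    #include <iostream>
    #include <mutex>
    #include <thread>

    std::mutex resourceLock;    // the "lock around the resource"
    int rivalrousResource = 0;  // only one thread may use this at a time

    void worker(int id) {
        // Holding the mutex excludes every other thread: control of the
        // lock is rivalrous, just as control of a private good is.
        std::lock_guard<std::mutex> guard(resourceLock);
        ++rivalrousResource;
        std::cout << "thread " << id << " had exclusive use of the resource\n";
    }

    int main() {
        std::thread a(worker, 1);
        std::thread b(worker, 2);
        a.join();
        b.join();
        return 0;
    }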

Somewhat unexpectedly, the first half of the book hardly mentions open source. There’s good reason for this. The first seven chapters of the book cover the basic principles of intellectual property (IP), including patents, copyrights, trademarks, trade secrets, licences, and contracts. I found the very first chapter to be particularly engrossing, as it set out the philosophical foundations for IP. Van Lindberg highlighted the utilitarian justification for IP, in terms of legal measures to counter what would otherwise be two sorts of market failures:

  • “The cost of creating knowledge is high, but the cost of consuming it is low…. Therefore there is a societal incentive to not create as much knowledge as we would ideally like to have” (hence the utilitarian rationale for copyright)
  • “Secrets are more valuable to you personally, but shared knowledge is more valuable to society…. The resource is valuable to you because you have a key, but it is worthless to everyone else” (hence the utilitarian rationale for patents).

As I said, the very first chapter was particularly engrossing, but I thought the other early chapters dragged a bit. Although all the material was interesting, there were rather too many details for my liking.

Chapter eight (“The economic and legal foundations of open source software”) went back to philosophical principles, in an attempt to pinpoint what makes open source different from proprietary software. The difference, according to Van Lindberg, is that:

  • Proprietary software is driven by corporate business goals (which inevitably involve profit-maximisation, and therefore – he claimed – a tension between what’s best for the customers and what’s best for the shareholders)
  • Open source software is driven by cooperative goals, in which the goals of the customers have primacy. (Note the difference between the similar-looking words corporate and cooperative.)

This chapter also runs a pretty compelling extended comparison between proprietary software and open source software, on the one hand, and banks and credit unions, on the other hand. Again, the first member of each pair is driven by shareholder goals, whereas the second member of each pair is driven by customer goals (the legal owners are the same people as the customers).

The primary task of open source licences, according to this analysis, is to support cooperation. In more detail, Van Lindberg says that open source licences are intended to solve the “Programmer’s Dilemma” version of the famous “Prisoner’s Dilemma” problem from game theory:

“Open source licences serve two functions in a game-theoretic context. First, they allow programmers to signal their cooperative intentions to each other. By placing their code under a licence that allows cooperation, programmers indicate to their peers that they are willing to participate in a cooperative solution. Second… licences are based in copyright law, which allows the original developer to dictate (to some extent) the users and uses of his code. The legal penalties associated with copyright violations change the decision matrix for other programmers, leading to a stable cooperative (and optimal) solution.”

This (like everything else in the book) is thought-provoking. But I’m not fully convinced. I think this puts too much importance onto the licence aspect of open source. Yes, picking a good licence is important – but it’s insufficient to guarantee the kind of cooperative behaviour that will make an open source project a real success. And as I’ve argued elsewhere, picking the right licence is no guarantee against the software fragmenting. But despite this quibble, I still think the ideas in this chapter deserve wide readership.
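
To see the decision-matrix point concretely, here is a toy calculation of my own (not from the book – the payoff numbers are invented purely for illustration). Without any penalty, hoarding code strictly dominates sharing it; once a hypothetical copyright penalty attaches to defection, sharing becomes the stable choice:

    #include <array>
    #include <iostream>

    int main() {
        // payoff[a][b] = payoff for playing move a against move b, where
        // 0 = cooperate (share code) and 1 = defect (hoard it). The numbers
        // are invented; any Prisoner's Dilemma ordering behaves the same way.
        std::array<std::array<int, 2>, 2> payoff = {{{3, 0}, {5, 1}}};
        const int penalty = 4;  // hypothetical legal cost of violating the licence

        for (bool licensed : {false, true}) {
            auto u = [&](int a, int b) {
                return payoff[a][b] - ((licensed && a == 1) ? penalty : 0);
            };
            // Defection dominates iff it beats cooperation against every reply.
            const bool defectDominates = u(1, 0) > u(0, 0) && u(1, 1) > u(0, 1);
            std::cout << (licensed ? "with licence penalty: " : "without a licence:    ")
                      << (defectDominates ? "hoarding dominates" : "sharing is stable")
                      << '\n';
        }
        return 0;
    }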

The second half of the book changes gear. With the first eight chapters having carefully outlined the underlying legal framework, the remaining six chapters walk through the kind of real-life IP concerns that will face someone (whether an individual developer, or a company) who wants to become involved in an open source project:

  • Issues with standard employment contracts that probably specify that everything you work on – even in your spare time – belongs to your company, and which you therefore are not free to assign to an open source project
  • General guidelines on choosing between some of the more popular open source licences
  • Legal complications over how to accept patches and other contributions, from outsiders, into your project
  • Particular issues with the GPL
  • Reverse engineering
  • Creating a non-profit organisation or foundation (recommended if your project becomes larger).

There’s lots of good advice here. Every chapter of this part of the book has important material – but I was slightly disappointed with some parts. For example, given the careful attention to patents in the first half of the book (where two chapters were devoted to this topic), I was expecting more analysis of how some of the major open source licences differ in their approach to patent licences and patent retaliation clauses. On reflection, that’s something that the other two books (ie by Meeker and Rosen) handle better.

The chapter on the issues with the GPL confirmed and extended the opinion about that licence which I’d picked up from my previous reading: the interpretation of the GPL is subject to great uncertainty, thanks to its ambiguities. The chapter includes a lengthy “Questions and answers” section, in which the answer to nearly every question is “Maybe” or “It depends”. (Apart from the last question, which is “Can I depend on the answers in this Q&A to keep me out of trouble?”; the answer to this is “No, this is our best understanding of copyright law as it stands right now, but it could change tomorrow – and nobody really knows…”)

As further evidence of the ambiguities surrounding the GPL, Van Lindberg mentions an essay by Matt Asay, “A Funny Thing Happened on the Way to the Market”. Here’s an extract from that essay:

“I asked two prominent representatives of the Free Software Foundation – Eben Moglen, general counsel, and Richard Stallman, founder – to clarify thorny issues of linkage to GPL code, and came up with two divergent opinions on derivative works in specific contexts…”

“…it is telling how widely their responses diverge – there appear to be no definitive answers to the question of what constitutes a derivative work under the GPL, not even from the holders of the licenses in question.”

This looks decisive, but it could be argued that this quote from Matt Asay is itself misleading, since Matt’s article goes on to state that:

“Fortunately, as I will detail below, this issue has largely gone away, as it has become accepted practice to dynamically link to GPL code [without that code becoming part of the GPL program]. Linus Torvalds helped to build momentum for such a reading of the GPL. While some argue that kernel modules, including device drivers, must be GPL, Torvalds has stated: This [GPL] copyright does *not* cover user programs that use kernel services by normal system calls – this is merely considered normal use of the kernel, and does *not* fall under the heading of ‘derived work.’

However, Van Lindberg seems to be right that the official FAQ about the GPL, maintained by the Free Software Foundation, advocates a stricter interpretation:

“Q: Can I release a non-free program that’s designed to load a GPL-covered plug-in?

“A: It depends on how the program invokes its plug-ins. For instance, if the program uses only simple fork and exec to invoke and communicate with plug-ins, then the plug-ins are separate programs, so the license of the plug-in makes no requirements about the main program.

“If the program dynamically links plug-ins, and they make function calls to each other and share data structures, we believe they form a single program, which must be treated as an extension of both the main program and the plug-ins. In order to use the GPL-covered plug-ins, the main program must be released under the GPL or a GPL-compatible free software license, and that the terms of the GPL must be followed when the main program is distributed for use with these plug-ins.

“If the program dynamically links plug-ins, but the communication between them is limited to invoking the ‘main’ function of the plug-in with some options and waiting for it to return, that is a borderline case.

“Using shared memory to communicate with complex data structures is pretty much equivalent to dynamic linking.”
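
The distinction the FAQ is drawing – separate processes versus one shared address space – is easy to see in code. Here is a minimal POSIX sketch of my own (the plug-in path and the plugin_main symbol are hypothetical); in the first style the plug-in stays a separate program, while in the second its code is mapped into the calling process:

    #include <dlfcn.h>      // dlopen, dlsym (link with -ldl)
    #include <sys/wait.h>
    #include <unistd.h>

    int main() {
        // Style 1: fork and exec. The plug-in runs as a separate program;
        // the only communication is command-line options and an exit status.
        pid_t pid = fork();
        if (pid == 0) {
            execl("./plugin", "plugin", "--option", static_cast<char*>(nullptr));
            _exit(127);  // only reached if exec fails
        }
        int status = 0;
        waitpid(pid, &status, 0);

        // Style 2: dynamic linking. The plug-in is mapped into this process;
        // function calls and data structures are then shared directly.
        if (void* handle = dlopen("./plugin.so", RTLD_NOW)) {
            typedef int (*EntryFn)();
            EntryFn entry = reinterpret_cast<EntryFn>(dlsym(handle, "plugin_main"));
            if (entry) {
                entry();  // a direct call into the plug-in's code
            }
            dlclose(handle);
        }
        return 0;
    }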

Do these ambiguities over the GPL really matter? It’s hard to be sure, but I’m personally glad that the Symbian Foundation plans to adopt a licence – the EPL – which avoids these issues.

I’m also glad to have taken the time to read this book – it’s helped my understanding grow, in many ways.

Footnote: My thanks go to Moore Nebraska for drawing my attention to the Van Lindberg book.

13 August 2008

There’s more to Open Innovation than Open Source

Here’s the challenge: How best to capitalise on the potential innovation that could in theory be created by users and developers who are based outside of the companies that are centrally responsible for a product platform?

This is the question of how best to make Open Innovation work. Recall the following contrasts between Open Innovation and so-called Closed Innovation – taken from the pioneering book by Henry Chesbrough, “Open innovation: the new imperative for creating and profiting from technology”:

The “closed innovation” mindset:

  1. The smart people in our field work for us
  2. To profit from R&D we must discover it, develop it, and ship it ourselves
  3. If we discover it ourselves, we will get to the market first
  4. The company that gets an innovation to market first will win
  5. If we create the most and the best ideas in the industry, we will win
  6. We should control our IP, so that our competitors don’t profit from our ideas.

The “open innovation” mindset:

  1. Not all the smart people work for us. We need to work with smart people inside and outside our company
  2. External R&D can create significant value; internal R&D is needed to claim some portion of that value
  3. We don’t have to originate the research to profit from it
  4. Building a better business model is better than getting to market first
  5. If we make the best use of internal and external ideas, we will win
  6. We should profit from others’ use of our IP, and we should buy others’ IP whenever it advances our own business model.

In the modern world of hyper-complex products, easy communication via the Internet and other network systems, and the “Web 2.0” pro-collaboration zeitgeist, it is easy to understand why the idea of Open Innovation receives a lot of support. The challenge, as I said, is how to put these ideas into practice.

It’s tempting to answer that the principal key to successful Open Innovation is Open Source. After all, Open Source removes both financial and contractual barriers that would otherwise prevent many users and external developers from experimenting with the system. (What’s more, “Open Innovation” and “Open Source” share the prefix “Open”!)

However, in my view, there’s a lot more to successful Open Innovation than putting the underlying software platform into Open Source.

To see this, it’s useful to review some ideas from the handy summary presentation by leading Open Innovation researcher Joel West, “Managing Open Innovation through online communities”. Joel makes it clear that there are three keys to making Open Innovation work best for a firm (or platform):

  1. Maximising returns to internal innovation
  2. Incorporating external innovation in the [platform]
  3. Motivating a supply of external innovations.

Let’s dig more deeply into the second and third of these keys.

Incorporating external innovation in the platform

The challenge here isn’t just to stimulate external innovation. It is to be able to incorporate this innovation into the platform. That requires the platform itself to be both sufficiently flexible and sufficiently stable. Otherwise the innovation will fragment the platform, or degrade its ongoing evolution.

It also requires the existence of significant skills in platform integration. Innovations offered by users or external developers may well need to be re-engineered if they are to be incorporated in the platform in ways that meet the needs of the user community as a whole, rather than just the needs of the particular users who came up with the innovation in question.

  • This can be summarised by saying that a platform needs skills and readiness for software management, if it is to be able to productively incorporate external innovation.

Motivating a supply of external innovations

The challenge here isn’t just to respond to external innovations when they arise. It is to give users and external developers sufficient motivation to work on their ideas for product improvement. These parties need to be encouraged to apply both inspiration and perspiration.

  • Just as the answer to the previous issue is software management, the answer to this issue is ecosystem management.

But neither software management nor ecosystem management comes easily. Neither falls out of the sky, ready for action, just by virtue of a platform being Open Source. Nor can these skills be acquired overnight, by spending lots of money, or hiring lots of intrinsically smart people.

Ecosystem management involves a mix of education and evangelism. It also requires active listening, and a willingness by the platform providers to occasionally tweak the underlying platform, in order to facilitate important innovations under consideration by external parties. Finally it requires ensuring that third parties can receive suitable rewards for their breakthroughs – whether moral, social, or financial.

Conclusion: On account of a legacy of more than ten years of trial and error in building and enhancing both a mobile platform and an associated dynamic ecosystem, the Symbian Foundation will come into existence with huge amounts of battle-hardened expertise in both software management and ecosystem management. On that basis, I expect the additional benefits of Open Source will catalyse a dramatic surge of additional Open Innovation around the Symbian Platform. In contrast, other mobile platforms that lack this depth of experience are likely to find that Open Source brings them grief as much as it brings them potential new innovations.

27 July 2008

Understanding Open Source Licensing

Filed under: CPL, EPL, GPL, Open Source, OSiM, OSL — David Wood @ 8:18 pm

“What’s the best book to read for an introduction to Open Source?”

I’ve already given one set of answers to this question, in my article, “Clear thinking about open source”. One reply to that article – from Joel West, a writer and researcher on Open Innovation and Open Source whose advice I value – urged me to include one more book in my reading list: Lawrence Rosen’s “Open Source Licensing: software freedom and intellectual property law”. This weekend I’ve finished reading it. And indeed, I do now endorse it as being clearly written yet also highly insightful.

Initially, I tended to shy away from this book, instead preferring the book by Heather Meeker that I covered in my earlier article. Both books focus on open source licensing issues, but Meeker’s was published this year, whereas Rosen’s dates from 2004. So Rosen’s book makes no mention of GPL v3, or Sun’s experience with open-sourcing Java, or even the Eclipse Public License (EPL) which the Symbian Foundation is likely to adopt. That makes Rosen’s book appear out of date. However, I realised that one license which the book does cover (comprehensively) is the Common Public License (CPL), which is the precursor of the EPL and which differs from the EPL in very few places. Reassured, I dipped into the book – and then could hardly put it down.

In summary, I now recommend both the Meeker book and the Rosen book for their coverage of open source licensing. They complement each other nicely. There’s a bit of overlap, but also lots of good material in each book that you won’t find in the other.

Specifically, here are a few of the “aha”s or other learnings I took away from Rosen’s book:

1.) The ten principles of the Open Source Definition are actually quite hard to understand in places (this comment came as a relief to me, since I had been thinking the same thing).

2.) Patents and Copyrights should be approached as parallel sets of legal principles – the former applicable to ideas, and the latter to expressions of ideas. That’s a far better approach than initially just thinking about Copyrights, and then trying to squeeze in considerations about Patents at the end.

3.) One of the key differences between different open source licenses is in the treatment of patent licenses – and in the different circumstances in which patent licenses (and/or copyright licenses) can be withdrawn in the wake of various kinds of patent infringement suits. There’s a tricky balance that has to be drawn between the needs of both licensor and licensee concerning the continuing value of their respective patent portfolios.

4.) One piece of license evolution covered in the book – the difference between v2.0 and v2.1 of the Open Software License (OSL) – closely mirrors the principal difference between the CPL and the EPL: it’s a reduction in the circumstances in which a patent license can be withdrawn when a licensee brings a separate patent infringement case against the licensor.

5.) GPL v2’s insistence on not being compatible with other licenses that introduce additional restrictions (even restrictions that the initial drafters of GPL v2 had not considered) is a real drawback of that license, since it unnecessarily hinders aggregation of code written under similar but different licenses. (Possible restrictions that have emerged more recently include provisions for defence against patent infringement lawsuits or to protect the licensor’s trademarks.)

6.) “… sections of the LGPL are an impenetrable maze of technological babble. They should not be in a general purpose software license.” (page 124)

7.) Disclaimers of liability that are generally written into open source licenses may be overridden by general consumer legislation. Recognising this, the CPL (and hence the EPL) introduces a clause that allocates particular responsibility to “commercial contributors” to defend and indemnify all other contributors against losses, damages, or costs.

8.) One possible way for a company to make money from software is via the mechanism Rosen calls “Eventual Source”: code is released as open source after some delay period, but recipients can elect to pay an early access license fee to be able to work with the code (under a non-open source license) ahead of its release as open source.

I’ve still got lots of questions about open source licensing (for example, about the prospects for wider adoption of GPL v3, and about how successful Rosen’s own preferred OSL is likely to be in the longer run). I’ll be attending the Open Source in Mobile conference in Berlin in September, when I hope to find out more answers! (And no doubt there will be new questions too…)

8 July 2008

Taming the security risks of going open source

Filed under: descriptors, Open Source, security — David Wood @ 5:05 pm

The Wireless Informatics Forum asks (here and here),

Will an open source model expose Symbian’s security flaws?

I wonder what security implications are being presented to Symbian? In the computing world there’s plenty of debate about the impact of opening up previously proprietary code. The primary concern being that an open source model exposes code not only to benevolent practitioners but also to malevolent attackers…

With much of the mobile industry steering towards m-commerce initiatives, potential security risks must be considered…

How much of the legacy Symbian code will be scrapped and built from scratch according to open source best practice?

First, I agree with the cardinal importance of security, and share the interest in providing rock solid enablers for m-commerce initiatives.

But I’m reasonably optimistic that the Symbian codebase is broadly in a good state, and won’t need significant re-writes. That’s for three reasons:

  1. Security is something that gets emphasised all the time to Symbian OS developers. The whole descriptor system for handling text buffers was motivated, in part, by a desire to avoid buffer overrun errors – see my May 2006 article “The keystone of security”, and the sketch just after this list.
  2. Also, every now and then, Symbian engineers have carried out intense projects to review the codebase, searching high and low for lurking defects.
  3. Finally, Symbian OS code has been available for people from many companies to look at for many years – these are people with CustKit or DevKit licenses. So we’ve already had at least some of the benefits of an open source mode of operation.
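
To illustrate the descriptor idea from point 1, here is a toy bounds-checked buffer of my own devising – not the real TDes API, just the shape of the safety argument: the current length and the maximum length travel with the data, so every write can be checked at the API boundary.

    #include <cassert>
    #include <cstring>

    // A toy in the spirit of Symbian descriptors (not the actual TDes API):
    // length and capacity travel with the data, so copies are always checked.
    template <int MaxLen>
    class CheckedBuf {
    public:
        CheckedBuf() : iLength(0) {}

        void Copy(const char* aSrc, int aLen) {
            // A real descriptor panics the thread here, rather than
            // silently overrunning the buffer.
            assert(aLen <= MaxLen && "overrun caught at the API boundary");
            std::memcpy(iData, aSrc, aLen);
            iLength = aLen;
        }

        int Length() const { return iLength; }

    private:
        int iLength;
        char iData[MaxLen];
    };

    int main() {
        CheckedBuf<8> buf;
        buf.Copy("hello", 5);                            // fine: 5 <= 8
        // buf.Copy("far too long for this buffer", 28); // would trip the check
        return buf.Length() == 5 ? 0 : 1;
    }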

On the other hand, there’s going to be an awful lot of code in the overall Symbian Foundation Platform – maybe 30+ million LOC. And that code comes from many different sources, and was written under different cultures and with different processes. For that reason, we’ve said it could be up to two years before the entire codebase is released as Open Source. (As my colleague John Forsyth explains, in the section entitled “Why not open source on day 1?”, there are other reasons for wanting to take time over this whole process.) Of course we’d like to go faster, but we don’t at this stage want to over-promise.

So to answer the question, I expect the lion’s share of the Symbian codebase to stay in place during the migration, no doubt with some tweaks made here and there. Time will tell how much of the peripheral pieces of code need to be re-written.

6 July 2008

Clear thinking about open source

Filed under: GPL, Open Source — David Wood @ 9:12 am

“What’s the best book to read for an introduction to Open Source?” That’s a question I’ve been asked several times in the last fortnight – as many of my colleagues in and around Symbian have realised that Open Source is a more complex and more intriguing subject than they first thought. (Of course, the announcements of 24 June have had something to do with this increased interest level.)

I’m still not sure how to answer that question. Over the years, I’ve read lots of books about Open Source – but with the passage of time, I’ve forgotten what I’ve learnt from each book.

Two books that stick out in my mind, through the veil of intervening years, as particularly enjoyable are:

Of these, the latter stands out as an especially easy and engrossing read. (It also happens to be the first serious book read independently by all three members of my immediate family – my wife, my son, and myself.) But when I pulled these two books from my bookshelf the other day and checked their inside cover, where I usually record the date when I purchase a book, I realised I had read them both as long ago as 2001. And Open Source has moved on a lot since that time. So while both these books are great sources of historical insight, readers will need to turn elsewhere for more up-to-date info.

A more recent book I remember making a big impact on my thinking at the time (2005, according to the inside cover) was:

Flicking through that book again just now, I see so many interesting snippets in it that I’m tempted to try to squeeze it back into my already hopelessly overfull reading in-box, for a second-time-round read. But even a 2005 book is dated.

That brings me to the book I’ve just finished reading: Heather Meeker’s “The Open Source Alternative: Understanding Risks and Leveraging Opportunities”.

Heather Meeker is Co-Managing Shareholder at the East Palo Alto law firm Greenberg Traurig. I first saw Heather speak at the Olswang “Open Source Summit” in London in November 2007. I was impressed at the time by the clarity of her grasp of the legal issues surrounding Open Source. Heather’s book has the same fine qualities:

  • It’s primarily exposition (education) rather than advocacy (evangelism)
  • I had many “of course!” and “aha!” moments while reading it
  • There are some particularly clear diagrams
  • Crucially, the language is easy to read
  • Also crucially, the book is comfortable both with legal matters and with technical matters (eg aspects of C and C++).

So I would say, this is the book to read, for a good account of the legal aspects surrounding open source.

One part that really shines comes about three quarters of the way through the book. It’s by far the best analysis I’ve read of “The border dispute of GPL2”. The question in the minds of many commercially-driven companies, of course, is whether they risk having to publish the source code of any of their own software that happens to interact with code (such as the Linux kernel) released under GPL. The book makes it strikingly clear that the commercial risks aren’t just because the original drafters of the GPL are philosophically opposed to closed source software. They’re also because of some deep-rooted ambiguities inside the license itself. To quote from page 188:

This is why attorneys who read the GPL quickly come to the conclusion that this phrase – upon which entire companies and development projects depend – is irretrievably vague.

And again from the footnote to page 189:

To provide context for nonlawyer readers, drafting unique (in the document) and unambiguous definitions is considered a baseline lawyering skill in transactional practice. Doing otherwise is generally a sign that the drafter is not a lawyer or, more precisely, does not have baseline drafting skills. If this seems harsh, consider that many programming languages require one, and only one, definition of a user-defined variable. (Some languages allow multiple definitions, or “overloading”, but using this feature requires intimate knowledge of the rules used by the compiler or interpreter to resolve them.) Failing to understand these rules properly creates bugs. So, in a sense, multiple or conflicting definitions [such as occur in the GPL] in a legal document, without express rules to resolve them, is a “bug” in drafting.
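
The compiler analogy in that footnote can be made concrete. In this minimal C++ sketch (my own illustration, not from the book), overloading is permitted, but the language supplies express resolution rules – and where those rules cannot decide, the compiler rejects the call rather than guessing:

    #include <iostream>

    void describe(int)    { std::cout << "the int definition\n"; }
    void describe(double) { std::cout << "the double definition\n"; }

    int main() {
        describe(42);    // resolved: exact match for int
        describe(3.14);  // resolved: exact match for double
        // describe(42L);  // rejected: a long converts equally well to int or
        //                 // to double, so the call is ambiguous and won't compile
        return 0;
    }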

I can well imagine senior managers in mobile phone companies getting more and more worried as they read this book, finding more and more reasons, chapter by chapter (not just the chapter on the Border Dispute), to fear eventual legal cases against them, if they have code of their own in a phone that interacts with a GPL kernel.

Perhaps inevitably, the book has less to say about the EPL – which is the license to be used by the Symbian Foundation. After all, GPL is (the book suggests) the “most widely used license on the planet”. But the EPL has many fewer ambiguities, and is significantly more business-friendly.

Does v3 of GPL change matters? Not really. First, as the final chapters of the book make clear, many of the deep-rooted ambiguities remain, despite the massive (and impressive) work done by the drafting team for v3. Second, Linux is likely to remain on v2 GPL for the foreseeable future.

29 June 2008

The five laws of fragmentation

Filed under: fragmentation, leadership, Open Source, Symbian Foundation — David Wood @ 9:42 am

As discussion of the potential for the Symbian Foundation gradually heats up, the topic of potential fragmentation of codelines keeps being raised. To try to advance that discussion, I offer five laws of fragmentation:

1. Fragmentation can have very bad consequences

Fragmentation means there’s more than one active version of a software system, and that add-on or plug-in software which works fine on one of these versions fails to work well on other versions. The bad consequences are the extra delays this causes to development projects.

Symbian saw this with the divergence between our v7.0 and v7.0s releases. (The little ‘s’ was sometimes said to stand for “special”, sometimes for “strategic”, and sometimes for “Series 60”.) UIQ phones at the time were based on our v7.0 release. However, the earliest Series 60 devices (such as the Nokia 7650 “Calypso”) had involved considerable custom modifications to the lower levels of the previous Symbian OS release, v6.1, and these turned out to be incompatible with our v7.0. As a pragmatic measure, v7.0s was created: it had all of the new technology features introduced for v7.0, but kept application-level compatibility with v6.1.

On the one hand, v7.0s was a stunning success: it powered the Nokia 6600 “Calimero” which was by far the largest selling Symbian OS phone to that time. On the other hand, the incompatibilities between v7.0 and v7.0s caused no end of difficulties to developers of add-on or plug-in software for the phones based on these two versions:

  • The incompatibilities weren’t just at the level of UI – UIQ vs. Series 60
  • There were also incompatibilities at many lower levels of the software plumbing – including substantial differences in implementation of the “TSY” system for telephony plug-ins
  • There were even differences in the development tools that had to be used.

As a result, integration projects for new phones based on each of these releases ran into many delays and difficulties.

Symbian OS v8 was therefore designed as the “unification release”, seeking as much compatibility as possible with both of the previous branches of the codeline. It made things considerably better – but some incompatibilities still remained.

As another example, I could write about the distress caused to the Symbian partner ecosystem by the big change in APIs moving from v8 to v9 (changes due mainly to the new PlatSec system for platform security). More than one very senior manager inside our customer companies subsequently urged us in very blunt language, “Don’t f****** break compatibility like that ever again!”

Looking outside the Symbian world, I note the following similar (but more polite) observation in the recent Wall Street Journal article, “Google’s Mobile-Handset Plans Are Slowed“:

Other developers cite hassles of creating programs while Android is still being completed [that is, while it is undergoing change]. One is Louis Gump, vice president of mobile for Weather Channel Interactive, which has built an Android-based mobile weather application. Overall, he says, he has been impressed by the Google software, which has enabled his company to build features such as the ability to look up the weather in a particular neighborhood.

But he says Weather Channel has had to “rewrite a few things” so far, and Google’s most recent revision of Android “is going to require some significant work,” he says.

2. Open Source makes fragmentation easier

If law 1 was obvious (even though some open source over-enthusiasts seem to be a bit blind to it), law 2 should be even clearer. Access to the source code for a system (along with the ability to rebuild the system) makes it easier for people to change that software system, to serve their own development purposes. If the platform doesn’t meet a particular requirement of a product that is being built from that platform, hey, you can roll up your sleeves and change the platform. So the trunk platform stays on v2.0 (say) while your branch effectively defines a new version v2.0s (say). That’s one of the beauties of open source. But it can also be the prelude to fragmentation and all the pain which will ensue.

The interesting question about open source is to figure out the circumstances in which fragmentation (also known as “forking”) occurs, and when it doesn’t.

3. Fragmentation can’t be avoided simply by picking the right contract

Various license contracts for open source software specify circumstances in which changes made by users of an open source platform need to be supplied back into the platform. Different contracts specify different conditions, and this can provoke lengthy discussions. However, for the moment, I want to sidestep these discussions and point out that contractual obligations, by themselves, cannot cure all fragmentation tendencies:

  • Even when users of a platform are obligated to return their changes to the platform, and do so, it’s no guarantee that the platform maintainers will adopt these changes
  • The platform maintainers may dislike the changes made by a particular user, and reject them
  • Although a set of changes may make good sense for one set of users, they may involve compromises or optimisations that would be unacceptable to other users of the platform
  • Reasons for divergence might include use of different hardware, running on different networks, the need to support specific add-on software, and so on.

4. The best guarantee against platform fragmentation is powerful platform leadership

Platform fragmentation has some similarities with broader examples of fragmentation. What makes some groups of people pull together for productive collaboration, whereas in other groups, people diverge following their own individual agendas? All societies need both cooperation and competition, but when does the balance tilt too far towards competition?

A portion of the answer is the culture of the society – as reflected in part in its legal framework. But another big portion of the answer is in the quality of the leadership shown in a society. Do people in the group believe that the leaders of the group can be relied on, to keep on “doing the right thing”? Or are the leaders seen as potentially misguided or incompetent?

Turning back to software, users of a platform will be likely to stick with the platform (rather than forking it in any significant way) if they have confidence that the people maintaining the trunk of the platform are:

  1. well-motivated, for the sake of the ecosystem as a whole
  2. competent at quickly and regularly making valuable new high quality releases that (again) meet the needs of the ecosystem as a whole.

Both the “character” (point 1) and the “competence” (point 2) are important here. As the Stephen Coveys (both father and son) have repeatedly emphasised, you can’t get good trust without having both good character and good competence.

5. The less mature the platform, the more likely it is to fragment – especially if there’s a diverse customer base

If a platform is undergoing significant change, users can reason that it’s unlikely to coalesce any time soon into a viable new release, and they’ll be more inclined to carry on working with their own side version of the platform, rather than waiting for what could be a long time for the evolving trunk of the platform to meet their own particular needs.

This tendency is increased if there are diverse customers, who each have their own differing expectations and demands for the still-immature software platform.

In contrast, if the core of the platform is rock-solid, and changes are being carefully controlled to well-defined areas within the platform, customers will be more likely to want to align their changes with the platform, rather than working independently. Customers will reason that:

  • The platform is likely to issue a series of valuable updates, over the months and years ahead
  • If I diverge from the platform, it will probably be hard, later on, to merge the new platform release material into my own fork
  • That is, if I diverge from the platform, I may gain short-term benefit, but then I’ll likely miss out on all the good innovation that subsequent platform releases will contain
  • So I’d better work closely with the developers of the trunk of the platform, rather than allowing my team to diverge from it.

Footnote: Personally I see the Symbian Foundation codeline to be considerably more mature (tried and tested in numerous successful smartphones) than the codeline in any roughly similar mobile phone oriented Linux-based foundation. That’s why I expect that the Symbian Foundation codeline will fall under less fragmentation pressure. I also believe that Symbian’s well-established software development processes (such as careful roadmap management, compatibility management, system architecture review, modular design, overnight builds, peer reviews, and systematic and extensive regression testing) are set to transfer smoothly into this new and exciting world, maintaining our track record of predictable high-quality releases – further lessening the risks of fragmentation.

24 June 2008

Symbian 2-0

Filed under: Nokia, Open Source, Symbian Foundation — David Wood @ 6:13 am

Months of planning culminated this morning with the announcement of an intended dramatic evolution for Symbian – an evolution that should decisively advance the Symbian platform toward its long-anticipated status of being the most widely used software platform on the planet.

The announcement of the Symbian Foundation comes on the very first day of the second decade of Symbian’s existence. It also sets the scene for a much wider participation by companies and individuals in the development and deployment of novel services and applications for all sorts of new and improved Symbian-powered mobile devices. Because this second decade of Symbian’s history should witness radically greater collaboration than before, the designation “Symbian 2.0” seems doubly apt.

Subject to the deal receiving regulatory approval, I envision a whole series of far-reaching changes to take place in the months and years ahead:

  • It will become possible for the best practices of Open Source Software to be applied in and around the software platform that is the most suited to smartphones
  • Closer working relations between personnel from Symbian and S60 teams will result in more efficient development, accelerating the rate at which the overall platform improves
  • The lower barriers to usage of the Symbian platform should mean that the number of customers and partners will rocket
  • The unification of the formerly separate UI systems will further increase the attractiveness of the new platform
  • The platform will be royalty free – which will be another factor to boost usage
  • Because of increased adoption of the platform, the ecosystem will also grow, through the OS-ES volume-value virtuous cycle mechanism
  • For all these reasons, smartphone innovation should jump forward in pace, to the potential benefit of all participants in the ever expanding, ever richer, converged mobile industry
  • Customers and partners alike – both newcomers and old-timers – will be on the lookout for fresh options for differentiation and support
  • In short, there will be lots of new opportunities for people with knowledge of the Symbian platform.

Great credit is due to Symbian’s shareholders, and especially to Nokia, for enabling and driving this bold and powerful initiative.

Of course, with such a large change, there’s considerable uncertainty about how everything will work out. Many people will be unsure exactly where they, personally, will end up in this new world. Lots of details remain to be decided. But the basic direction is clear: participants in the Symbian 2.0 ecosystem will be part of a much bigger game than before. It’s going to be exciting – and no doubt somewhat scary too. As Symbian’s first CEO, Colly Myers, used to say, “Let’s rock and roll!”

Postscript: For a clear rationale of some key aspects of the Symbian Foundation plan, take a look at what my Symbian colleague John Forsyth has to say, here.

21 June 2008

Open minds about open source

Filed under: Open Source — David Wood @ 3:55 pm

There’s been a surprising amount of heat (not to mention vitriol) in the responses to recent blog postings from Ari Jaaksi of Nokia on the topic of the potential mutual benefits of a constructive encounter between Open Source developers and the companies who make money from mobile telephony.

Ari’s message (in “Some learning to do?“, and again in “Good comments from Bruce“) is that there’s a need for two-way learning, and for open minds. To me, that seems eminently sensible. This topic has so many angles (and is changing so quickly) that we shouldn’t expect anyone to have a complete set of answers in place. But quite a few online responses take a different stance, basically saying that there’s nothing for Open Source developers to learn – they know it all already – and that any movement must be on the side of the mobile phone business companies. The mountain will have to come to Mohammed.

At the same time as I’ve been watching that debate (with growing disbelief), I’ve been thumbing my way through the 500+ page book “Perspectives on Free and Open Source Software”. This book contains 24 chapters (all written by different authors), one introduction (by the joint editors of the book: Joseph Feller, Brian Fitzgerald, Scott Hissam, and Karim Lakhani), one foreword (by Michael Cusumano), and one epilogue (by Clay Shirky). The writers range in their attitudes toward Open Source, all the way from strong enthusiasm to considerable scepticism. They’ve all got interesting things to say. But they have several things in common (which sets them apart from the zealotry in the online blog responses):

  • An interest in finding and then examining data and facts
  • A willingness to engage in dialog and debate
  • A belief that Open Source is now well established, and won’t be disappearing – but also a belief that this observation is only the beginning of the discussion, rather than the end.

Another thing I like about the book is the way the Introduction sets out a handy list of questions, which readers are asked to keep in their minds as they review the various chapters. This makes it clear, again, that there’s still a lot to be worked out, regarding the circumstances in which Open Source is a good solution to particular technical challenges.

It’s a bit unfair to try to summarise 500+ pages in just a few paragraphs, but the following short extracts give a good flavour in my view. From Michael Cusumano’s foreword:

Most of the evidence in this book suggests that Open Source methods and tools resemble what we see in the commercial sector and do not themselves result in higher quality. There is good, bad, and average software code in all software products. Not all Open Source programmers write neat, elegant software modules, and then carefully test as well as document their code. Moreover, how many “eyeballs” actually view an average piece of Open Source code? Not as many as Eric Raymond would have us believe.

After reading the diverse chapters in this book, I remain fascinated but still skeptical about how important Open Source will be in the long run and whether, as a movement, it is raising unwarranted excitement among users as well as entrepreneurs and investors…

The conclusion I reach … is that the software world is diverse as well as fascinating in its contrasts. Most likely, software users will continue to see a co-mingling of free, Open Source, and proprietary software products for as far as the eye can see. Open Source will force some software products companies to drop their prices or drop out of commercial viability, but other products and companies will appear. The business of selling software products will live on, along with free and Open Source programs.

And from Clay Shirky’s epilogue:

Open Source methods can create tremendous value, but those methods are not pixie dust to be sprinkled on random processes. Instead of assuming that Open Source methods are broadly applicable to the rest of the world, we can instead assume that they are narrowly applicable, but so valuable that it is worth transforming other kinds of work, in order to take advantage of the tools and techniques pioneered here.

If I have one complaint about the book, it is that it is already somewhat dated, despite having 2005 as its year of publication. Most of the articles appear to have been written a couple of years earlier than the publication date, and sometimes refer in turn to research done even before that. Five or six years is a long time in the fast-moving world of Open Source.
