dw2

28 January 2009

Package Owners contemplating the world ahead

Filed under: architecture, Nokia, packages, passion, Symbian Foundation — David Wood @ 3:13 pm

I’ve just spent two days at the very first Symbian Foundation “Package Owners workshop”, held in a Nokia training facility at Batvik, in the snow-covered countryside outside Helsinki. The workshop proved both thought-provoking and deeply encouraging.

In case the term “package owner” draws a blank with you, let me digress.

Over the last few years, there have been several important structural rearrangements of the Symbian OS software engineering units, to improve the delivery and the modularity of the operating system code. For example, we’ve tracked down and sought to eliminate instances where any one area of software relied on internal APIs from what ought to have been a separate area.

This kind of refactoring is an essential process for any large-scale fast-evolving software system – otherwise the code will become unmaintainable.

This modularisation process is being taken one stage further during the preparation for opening the sources of the entire Symbian Platform (consisting of Symbian OS plus UI code and associated applications and tools). The platform has been carefully analysed and divided up into a total of around 100 packages – where each package is a sizeable standalone software delivery. Each package will have its own source code repository.

(Packages are only one layer of the overall decomposition. Each package is made up of one or more component collections, which are in turn made up of one or more components. In total, there are around 2000 components in the platform. Going in the other direction, the packages are themselves grouped into 14 different technology domains, each with a dedicated “Technology Manager” employed by the Symbian Foundation to oversee their evolution. But these are stories for another day.)
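
As a rough illustration of the nesting just described, here is a minimal sketch in Python. All class and instance names below are my own invention, purely for illustration; the real decomposition is of course far larger and more detailed.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str

@dataclass
class ComponentCollection:
    name: str
    components: list          # one or more Component instances

@dataclass
class Package:
    name: str                 # each package has its own source code repository
    collections: list         # one or more ComponentCollection instances

@dataclass
class TechnologyDomain:
    name: str                 # overseen by a dedicated Technology Manager
    packages: list

def component_count(domain: TechnologyDomain) -> int:
    """Count the leaf components beneath a technology domain."""
    return sum(len(coll.components)
               for pkg in domain.packages
               for coll in pkg.collections)

# A toy instance (names invented for illustration):
demo = TechnologyDomain("Multimedia", [
    Package("audio", [
        ComponentCollection("codecs", [Component("aac"), Component("mp3")]),
    ]),
])
```

As a sanity check on the figures quoted above: around 2000 components across around 100 packages implies roughly 20 components per package on average.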

Something important that’s happened in the last fortnight is that package owners have been identified for each of the packages. These package owners are all highly respected software engineers within their domain of expertise.

We’re still working on the fine detail of the description of the responsibilities of package owners, but here’s a broad summary:

  • Publish the roadmap for their package
  • Have technical ownership for the package
  • Be open to contributions to their package from the wider software community
  • Evaluate all contributions, and provide useful feedback to the contributors
  • Maintain a good architecture for the package
  • Act as feature sponsor in their package area
  • Manage package deliveries.

This is a huge task, so most package owners will rely on a network of approved committers and other supporters in order to carry out their role.

(Instead of “package owner”, the word “maintainer” is used with a similar meaning by some other open source projects.)

Over the next month, the nominated package owners (along with some of their line managers) are each attending one of three introductory workshops. Each workshop lasts two days. The goal of the workshop is to review and discuss how software development processes will alter, once the source code for the package is available to a much wider audience. Many processes will remain the same as before, but others will alter, and yet others will be brand new.

As I said, the first of these workshops has just finished. There were people from at least three different continents in attendance. I knew a handful before, but many others I was meeting for the first time. Without exception, they are singularly impressive individuals, with great CVs, and (in most cases) with glittering track records inside Nokia or Symbian.

Not surprisingly, the newly minted package owners brought a variety of different expectations to the event. Several already have considerable experience working with open source software. Others are, naturally, somewhat apprehensive about the changes.

A series of presenters covered matters such as:

  • An overview of the operation and architecture of the Symbian Foundation
  • Great software developers and open source principles
  • Tips on growing a successful community of external contributors
  • The importance of meritocracy
  • Tools and processes
  • IPR considerations, licensing issues, and legal aspects.

There were also small group breakout sessions on topics such as “What are the key challenges and issues facing package owners?” and “What are we going to do differently from before?”

What impressed me the most were the events on the first evening. After a dinner and optional sauna session, the participants gathered again in the seminar room, and spent another three hours reviewing ideas arising from the group breakout sessions from earlier in the day. The passion of the package owners stood out. In their own individual ways, they displayed a shared strong desire to explore new ways of engaging a wider community of software developers, without destabilising the mission-critical projects already being undertaken. These are all busy people, with multiple existing tasks, but they were ready to brainstorm ways to adopt new skills and processes in order to improve the development of their packages. (And I don’t think it was just the Lapin Kulta speaking.)

I half expected the fervour of the debate to die down after a while, but the buzz in the room seemed as strong at 10.50pm as at 8pm. There was a constant queue of people trying to get hold of the marker pen which had been designated (with limited success) as giving someone the right to speak to the group. The workshop facilitator had to speak up forcefully to point out that the facilities would be locked shut in ten minutes.

With this kind of intelligence and fervour being brought to bear in support of the Symbian Foundation’s tasks, I’m looking forward to an exciting time ahead.

21 January 2009

2009 – end users modifying their mobile phone apps

Filed under: innovation, Open Source — David Wood @ 9:05 am

Here’s a scenario I expect to become increasingly common later this year.

(Elements in the following story are made up, of course, but they serve as placeholders for anticipated real people, real phones, and real apps.)

Vijaya is really fond of her new Nokia N225 based on the latest Symbian Platform Release, and is both intrigued and frustrated by features of the Jomo Player app that’s built into that phone. The app does some very clever things, yet Vijaya thinks it would serve her own needs better if some of the behaviour and functionality were changed. She also has ideas for tweaking the UI.

If this story were set in 2008, that would probably be the end of the story. Vijaya might write about her ideas on Facebook, and her friend Sunil might send them to someone he knows who has a job in the Nokia Devices R&D lab, but the chances are, the original developers of the Jomo Player app would be far too busy to pay attention to what appear to be idiosyncratic, quaint, or overly personalised change suggestions.

Now let’s make this story more interesting. Suppose that Vijaya already knows some Symbian C++. Maybe she took a course on it at the local technical university, which is enrolled into the Symbian Academy program. Or maybe she used to work for a phone manufacturer helping to customise their Symbian devices. So, either way, Vijaya starts writing an alternative Jomo Player app, starting from scratch. Her goal is to embody her own ideas on usability and feature set.

But guess what: her alternative Jomo Player falls far short of the performance and power of the built-in app. It’s tough to re-create a complex app. Although Symbian in 2008 is an open platform, with rich APIs, it’s not at all obvious to Vijaya how to emulate, in her version of the app, many of the features of the original, which she now comes to increasingly recognise as subtle and refined. Some of Vijaya’s friends band together to help, but they eventually abandon the project. The original app, they realise, is doing some incredibly complex things under the surface – and their attempted clone comes nowhere close to matching it. So, in 2008, that really is the end of the story.

Now let’s re-run this story sometime later on in 2009. The source code for the original Jomo Player app is available for download from the Symbian Foundation Mercurial code repository, under the open source Eclipse Public Licence. What’s more, the publicly available SDKs provide enough header files and libraries that Vijaya and her friends can rebuild the entire app. So the starting point is very different. Rather than struggling to create the whole app from scratch, Vijaya can fairly easily locate the parts of the source code she wants to change. As a result, she has a new version of the Jomo Player on her N225 in less than a week. Using this altered app some more, she and her friends get yet more ideas – and then a flash of breakthrough insight. The new app quickly evolves into a dramatically better state.

Shortly afterwards, Vijaya makes her new app available via several application stores. It gets rave reviews. These reviews come to the attention of product managers in one or more phone companies. Both the N226 and a new Samsung phone build this version of the app into their ROMs, and reach millions of happy smartphone customers well before Christmas.

Vijaya started this whole process by scratching a personal itch. She wanted to improve a particular app running on her own phone. However, unexpectedly, she now has three different Symbian development houses competing to hire her into their teams.

In parallel, Mika has altered the Voton Reader app so that it’s more usable by his mother. (It turns out, afterwards, to be more usable by almost everyone!) Antony has added a whole series of shortcut keys to the Contacts app. And Alexa has produced a stunning new combination of two originally separate apps.

That’s the difference between what can be accomplished by an open platform (with published APIs) and by an open source platform (with published, buildable source code).

As 2009 progresses, the mobile phone platforms that publish their source code will increasingly play host to deeper and more interesting forms of innovation than those mobile platforms which keep their source code closed. The phones from these open source mobile platforms (such as Symbian) will have the best Jomo Player, Voton Reader, and so on – not because the developers inside Symbian are cleverer than those in other mobile phone platform companies, but because these platforms can take greater advantage of the much wider pool of creative and clever people who are outside the company.

Footnote: Credit for key elements of this vision belongs to some of my colleagues on the Symbian Foundation launch team, including William Roberts and Antony Edwards.

Disclaimer: The devil’s in the detail. Thoughtful readers will realise there are lots of important details missing from the above story. I look forward to returning to these details.

15 January 2009

Daring to twitter

Filed under: communications — David Wood @ 1:25 am

For some time, I’ve been holding off experimenting with Twitter.

First, because the name of the service still rankles with me.

Second, because I’m fearful that it will turn out to be a distraction.

On the other hand, I remember having similar apprehensions before starting to blog, and before registering on Facebook. These are two experiments that have turned out very positive for me. So I hope to have a similar positive experience with Twitter:

  • So I can understand better why so many people speak well of it
  • So that I can improve my communication network.

I tried to register the name “dw2-0” for myself on Twitter, but it seemed not to like the hyphen. I’ve ended up with the simpler Twitter name “dw2”.

14 January 2009

Robert Scoble and the fallacy of uniqueness

Filed under: innovation, Nokia, operators, Qt — David Wood @ 11:58 pm

I was surprised to see esteemed blogger Robert Scoble fall into a weird reality distortion field in his recent piece, “Smartphone competition: It’s too late for Nokia and Microsoft, but not too late for Palm in USA“.

Here’s the core of his argument:

…in the USA there are only these major carriers: AT&T, Verizon, Sprint, T-Mobile.

  • AT&T? Gone. Apple has them sewn up.
  • Verizon? RIM has them sewn up. I met with RIM’s director of marketing at CES and he was smiling. That should give you a hint.
  • Sprint? Palm has them in the Palm of their hands now.
  • T-Mobile? Google’s Android is their key smart phone.

So, what does this mean? All the US carriers now have their SmartPhone choices. All the trains have left the station.

Who is out in this game? Microsoft and Nokia.

This argument depends on the fallacious idea that each major network operator can be “sewn up” by just one provider of smartphones – that there will be one uniquely preferred smartphone platform per network operator – and that this choice is already set in stone.

It’s true that almost all major network operators, worldwide, have expressed a desire to reduce the number of smartphone platforms that they have to support. The reason for this reduction is to avoid lots of effort being duplicated across different platforms.

However:

  1. Most major network operators are aiming at a number of supported smartphone platforms that, while small, is greater than one;
  2. One reason for supporting more than one platform is to benefit from an important element of competition – this is particularly relevant while so many smartphone platforms are either relatively new, immature, or going through a significant transition;
  3. Another reason for supporting more than one platform is that end users on the network frequently want a choice;
  4. Even if a carrier decides not to actively support a given smartphone platform – in the sense of becoming involved in customising phones from that platform to take advantage of specific network features – they often allow phones from that platform to run on their network.

Scoble also dismisses the prospects for future Nokia products:

I’ve seen the new Nokia OS, just a month ago. They don’t have it.

This judgement seems highly premature to me. It also seems that AT&T, for one, maintain an interest in shipping Nokia phones. Witness, for example, the AT&T-branded user’s manual for the Nokia E71.

More fundamentally, there’s much more to the future of Nokia than just one initiative (“the new Nokia OS”). There’s a whole raft of new initiatives coming. Some will come to light through forthcoming releases of the Symbian Platform. Others will reach the market in many other ways.

Nokia’s announcement today of an additional licence for the Qt Platform, in order to strengthen developer interest and participation in that platform, is just one example. To quote Sebastian Nyström, Vice President, Qt Software, Nokia:

Broader use of Qt by even more leading companies will result in valuable feedback and increased contributions, ensuring that Qt remains the best-in-class, cross-platform UI and application framework. The accelerated development of Qt will allow developers, including Nokia, to deliver better devices and applications, reduce time to market and enable a wider deployment base for their solutions.

In short, there are plenty more trains to come.

8 January 2009

Symbian Foundation – Open for recruitment

Filed under: Symbian Foundation — David Wood @ 10:06 am

As announced on June 24th last year, the Symbian Foundation is expected to start operating during the first half of this year.

In the last few days, plans for the operation of the Symbian Foundation have taken another significant step forwards, with the creation of a recruitment microsite to help attract and identify the best possible people to staff the organisation.

The website describes roles in Technology, Marketing, and Operations. It includes a draft statement of Symbian Foundation values.

The website is still a work in progress – some jobs have detailed specifications, whereas others are currently only listed by job title. At the time of writing, the jobs listed are all UK-based, but there is mention of roles in San Francisco and Finland too.

I expect the recruitment team at Harvey Nash (who are running the site on behalf of the Symbian Foundation) are going to be busy!

1 January 2009

Out with the old, in with the new

Filed under: infrastructure, Open Source — David Wood @ 10:57 am

The final two hours of 2008 were, for me, the most fraught and tense of any New Year’s Eve that I remember. A central London car journey that was scheduled to take only 15 minutes became a two-hour trauma. We made it to the restaurant 90 minutes late, just one minute before the chimes of Big Ben would ring in 2009.

On paper, the plans for the evening looked clear enough. My mum is staying a few days with us, down from Inverness, so my wife Hyesoon and I planned for the three of us to take in a traditional New Year’s Eve Viennese Waltz concert at the Barbican, from 7.30-9.15pm, followed by dinner in a classy Belgravia Thai restaurant, the Mango Tree, from 10.30pm to 12.30am. For some daft reason I decided the trip would be easiest if we drove the whole way. I thought that would allow the greatest flexibility, and that my trusted hi-tech TomTom satnav would guide me through any unfamiliar routes. And in any case, there should be plenty of time for the various parts of the journey: when I checked in advance, both TomTom and Google Maps said that the journey from the Barbican to the Mango Tree would take only 15 minutes. Or so I thought.

The first sign of trouble was in the initial part of the whole journey, from my home in Surbiton (South West London) to the Barbican (East Central London). TomTom predicted 45 minutes. We left home at 6.00, giving us 90 minutes for that trip. I wasn’t sure of the way, and the traffic got thicker and thicker and slower and slower. And slower and slower. After several route recalculations and inspired changes of plan, we finally made it to our seats in the concert hall just as the audience were applauding to welcome the leader of the orchestra to the stage. Talk about last minute! Happily, the music was glorious.

Less happily, the concert which I had been told (by Barbican office staff, several days earlier) would finish at 9.15, actually went on till 10pm, by the time the second encore finished. We rushed to the car park to get out ahead of the main crowds, and set off on our anticipated 15 minute journey across central London.

However, the police had erected roadblocks all over the place, to force road traffic away from central London locations such as Trafalgar Square and Parliament Square. Time and again, I had to drive in a different direction to the one I intended. My car wasn’t the only one that was frustrated by the re-routings. The few roads that were still open were jammed to a snail’s pace. We rang the Mango Tree to say we might be, err, 20 minutes, or maybe even 40 minutes late. Come anyway, they said. We’re trying, I said. After painfully slow progress along Marylebone Road, Edgware Road and Park Lane, we finally reached the restaurant, just as the DJ was starting the countdown to the chimes signalling 1.1.2009. (Talk about last minute…)

Not unreasonably, the tempo and ambience in the restaurant by that time were a lot noisier than would normally accompany selecting starters from the menu. The staff did a fine job in the circumstances. In the end, I managed at least a grin as the sound system was belting out Mick Jagger’s “Honky Tonk Woman” at high volume. And Hyesoon and I got to our feet for some middle-aged boogie to Abba’s “Dancing Queen”. My mum said, it was quite an experience. Many thanks to both my mum and Hyesoon for being (mainly!) calm and supportive through all this trauma.

Being stuck for so long in slow-moving traffic gives you time for a lot of “if only” thinking. If only I had followed general advice and taken public transport rather than my car. If only I had realised that most routes would be blocked, and had started on a wide berth earlier. If only we had booked a venue closer to home. And, if only my satnav was hooked up with current road and traffic information, rather than relying on hard-wired map information that failed to match the reality of the moment. As a fan of Agile, that’s a lesson I ought to have learned already.

I wonder how many other aspects of life will, in 2009, suffer from being similarly misguided by automated or semi-automated responses that are based on out-of-date conceptual maps?

Our collective infrastructure is continuing to change in many ways, probably more than we expect. The market landscape is highly fluid. Items of our economic infrastructure that we thought we could take for granted are falling away while our attention is focused elsewhere. Woe betide us if we stick on auto-pilot, trusting that our past processes are sufficient to guide us safely through the new terrain.

A few days ago, Kevin Kelleher forecast in GigaOm that 2009 could be “The year of the hacker“. In short, our tough new economic climate could result in new creativity from talented people who are struggling in their old jobs (or who lose their jobs completely):

I don’t mean to downplay how hard it is to be unemployed. But with tens of thousands of skilled tech workers being kicked into a hostile job market, the effects could prove to be positive for the Internet and its community over the long term…

I wonder what kind of creativity could be unleashed by workers who, though deprived of a steady paycheck, are freed from tedious tasks. Some could come up with new ideas that help vault the web to a more advanced stage. Others may make micro-contributions that are equally powerful in aggregate. Such creativity could then foster an entirely new generation of startups, which would eventually lure away some of those who had remained at steady jobs all along…

Of course, money will be hard to come by for such labors of love. Some of the best ideas since the last downturn have failed to find a viable business model. A gift economy would be an especially profitless form of innovation. But that notion lies at the heart of the hacking ethic.

Building on Kelleher’s ideas, Brad Feld of Mobius Venture Capital plausibly suggests that 2009 could see the “re-rise of open source“:

Kevin Kelleher’s article on GigaOm this morning titled 2009: Year of the Hacker made me think back to the rise of open source after the Internet crash of 2001. In the aftermath of the crash, many experienced software developers were out of work for a period of time ranging from weeks to years. Some of them threw themselves into open source projects and, in some cases, created their next job with the expertise they developed around a particular open source project.

We are still in a tense and ambiguous part of the current downturn where, while many developers are getting laid off, some of them are immediately being picked back up by other companies that are in desperate need for them. However, many other developers are not immediately finding work. If the downturn gets worse, the number of out of work developers increases.

If they take a lesson from the 2001 – 2003 time frame, some subset of them will choose to get deeply in an open source related project. Given the range of established open source projects, the opportunity to do this today is much more extensive than it was seven years ago. In addition, most software companies – especially Internet-related ones – now have robust API’s and/or open source libraries that they actively encourage third parties to work with for free. The SaaS-based infrastructure that exists along with maturing source code repositories add to the fun. The ability to hack something interesting together based on an established company’s infrastructure is omnipresent and is one of the best ways to “apply for a job” at an interesting company…

When plans for the Symbian Foundation were announced in June last year, we did not foresee the substantial economic downturn and the fact that many fine software developers would, through no fault of their own, find themselves out of work. This changed landscape, unexpectedly, makes it all the more important for software platforms to be sufficiently open to allow a wide number of developers to engage deeply and easily with the system. If the case was strong last June to open source the Symbian Platform, it has, unexpectedly, become even stronger in the intervening seven months.

If 2009 will be the year of the hacker and/or the year of the re-rise of open source, it changes the priorities of all software systems, to become friendlier to hackers and open source practitioners. The systems that can best leverage this new latent talent pool will be the ones that are the most likely to be flying high in 12 months time when the chimes of Big Ben ring out 2009 and usher in yet another new year. (But on that occasion, I will definitely not be driving anywhere near Central London!)

28 December 2008

The best book I read in 2008

Filed under: books, culture, happiness, psychology — David Wood @ 1:56 pm

I’ve had the pleasure to read through several dozen fine books in 2008 – here’s a partial list of reviews. (One reason this list is “partial” is because I often neglected to assign the label “books” to relevant postings.)

As the year draws to a close, I’m ready to declare one book as being the most memorable and thought-provoking that I’ve read in the entire year: “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” by University of Virginia Associate Professor Jonathan Haidt. It’s a tour de force in positive psychology.

The endorsement printed on the front cover is probably reason enough for anyone to read this book: “For the reader who seeks to understand happiness, my advice is: Begin with Haidt“. The endorsement is from Martin Seligman, Professor of psychology, University of Pennsylvania.

The stated purpose of the book is to consider “ten great ideas” about morality and ethics, drawn from Eastern and Western religious and philosophical traditions, and to review these ideas in the light of the latest scientific findings about the human condition. Initially, I was sceptical about how useful such an exercise might be. But the book quickly led me to set aside my scepticism. The result is greater than the sum of the ten individual reviews, since the different ideas overlap and reinforce.

Haidt declares himself to be both an atheist and a liberal, but with a lot of sympathy for what both theists and conservatives try to hold dear. In my view, he does a grand job of bridging these tough divides.

Haidt seems deeply familiar with a wide range of traditional thinking systems, from both East and West. He also shows himself to be well versed in many modern (including very recent) works on psychology, sociology, and evolutionary theory. The synthesis is frequently remarkable. I found myself re-thinking large parts of my own worldview.

Here are some of the age-old themes that Haidt evaluates:

  • The mind is divided against itself – “the spirit is willing but the flesh is weak”
  • Perception is more important than external substance – “Life itself is but what we deem it”
  • Humans tend to be rank hypocrites – we notice the speck in others’ eyes, without paying attention to the plank in our own
  • The golden rule of “reciprocity” lies at the heart of all morality
  • Personal fulfilment depends on giving up attachments
  • Personal happiness is best pursued by seeking to cultivate “virtues”
  • Lives need suffering and setbacks to allow people to reach higher states of development
  • Religion plays a unique role in creating cohesive cultures.

To be clear, the evaluation of these themes typically shows both their prevailing strengths and their limitations. (It was a bit of a jolt every time I read a sentence in the book that said something like “What the Buddha failed to appreciate is…“)

The ideas that I have taken away from the book include the following:

  • A vivid metaphor of the mind as being a stubborn elephant of automatic desires, with a small conscious rider sat on top of it (as illustrated in the picture on the front cover of at least some editions of the book);
  • In any battle of wills, the elephant is bound to win – but there are mechanisms through which the rider can distract and train the elephant;
  • The most reliable mechanisms for improving our mood are meditation, cognitive therapy, and Prozac;
  • There are hazards (as well as benefits) to promoting self-esteem;
  • Although each person has a “happiness set point” to which their emotional status tends to return after some time, there are measures that people can take to drive their general happiness level higher – this includes the kind of personal relations we achieve, the extent to which we can reach “flow” in our work, and the extent to which different “levels” of our lives “cohere”;
  • Alongside the universally recognised human emotions like happiness, sadness, surprise, fear, disgust and anger, that have typically been studied by psychologists, there is an important additional emotion of “elevation” that also deserves study and strengthening;
  • The usual criticisms of religion generally fail to do justice to the significant beneficial feelings of community, purity, and divinity, that participation in religious activities can nurture – this draws upon some very interesting work by David Sloan Wilson on the role of religions as enabling group selection between different human societies.

Despite providing a lot of clarity, the book leaves many questions unresolved. I see that Haidt is working on a follow-up, entitled “The Righteous Mind: Why good people are divided by politics and religion“. I’m greatly looking forward to it.

Footnote: “The happiness hypothesis” has its own website, here.

27 December 2008

Revocation infrastructure

Filed under: revocation, Symbian Signed — David Wood @ 1:30 pm

In the quest to stop bad applications from doing damage to the data or operation of a phone (or running up large bills, or otherwise adversely impacting the phone network), possible approaches divide into two main routes:

  1. Put the main focus on checking and testing software (and the originator of the software) before it is allowed to be distributed or installed;
  2. Be permissive as regards the initial distribution and installation of software, but withdraw (or “revoke”) these permissions if it becomes clear that the software has bad effects.

It seems to be the consensus view that it is impractical (if not impossible) to reliably identify bad software by any prior checking system. These checks will always fail on at least one criterion:

  • The tests will be insufficient to cover all usage conditions; applications which work well on some handsets on some networks may well go wrong on other handsets or other networks;
  • Any attempt to make the tests more reliable will introduce unacceptable time delays and cost.

The best that an application checking system can hope to accomplish is a quick sanity test – to spot significant errors. Inevitably, this means that some bad software will slip through the system. As a result, any anti-malware system on mobile phones needs to include at least some revocation component.

In principle, here’s what revocation could accomplish:

  1. The process of releasing software (including alpha and beta versions) could be relatively quick and painless;
  2. An application that is subsequently found to generate problems on phones could be removed from distribution lists and application stores, to prevent anyone else from installing it;
  3. Messages could be sent to all phones on the network with the effect that users who have already installed the application could be warned about these problems – and given the opportunity to uninstall it;
  4. In more extreme cases, these messages could cause the applications to be automatically uninstalled, without waiting for the approval of the user;
  5. In yet other cases, the developer who signed the application could be barred from signing any more applications – this could be appropriate in cases where the developer has been caught out making pirated zero-cost versions of commercial software.

This picture is attractive. However, we need to be aware that it relies on the existence of a “revocation infrastructure”. One part of this infrastructure is the reliable identification of an application. This is accomplished via tamperproof digital signing. However, this is only the start of what’s needed for revocation to work.

It was because of the lack of a developed revocation infrastructure that the original Symbian Signed scheme followed route 1 above (putting the main focus on checking and testing software, and its originator, before it is allowed to be distributed or installed) rather than route 2 (being permissive about initial distribution and installation, but withdrawing those permissions if it becomes clear that the software has bad effects).

Here are some of the issues with the mechanics of revocation:

  1. By default, checking at install time for revoked certificates is currently turned off for most (if not all) shipping Symbian phones;
  2. The user would in principle have to pay for the data traffic to check for revocation;
  3. Operators ought ideally to agree on something like a free dedicated access point which is supported across networks while roaming, etc., before it’s acceptable to turn this on for the majority of users;
  4. Revocation checking is done on most phones only at software install time; there is limited current support for push revocation;
  5. If revocation checking were defaulted to on, the user could still turn it off on most (if not all) devices;
  6. Software that deliberately or accidentally broke PlatSec partitioning of processes & data could disable the revocation check.
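To make points 1 and 5 concrete, here is a miniature sketch of an install-time revocation check. This is a hypothetical illustration, not real Symbian code: the `RevocationRegistry` class and the `check_enabled` flag are invented for the example, and a simple hash lookup stands in for a real OCSP-style query to a revocation server.

```python
import hashlib

class RevocationRegistry:
    """Stand-in for a server-side revocation list (e.g. an OCSP responder)."""
    def __init__(self):
        self._revoked = set()

    def revoke(self, package_bytes: bytes) -> None:
        # Record the hash of a package that has been deemed bad.
        self._revoked.add(hashlib.sha256(package_bytes).hexdigest())

    def is_revoked(self, package_bytes: bytes) -> bool:
        return hashlib.sha256(package_bytes).hexdigest() in self._revoked


def try_install(package_bytes: bytes, registry: RevocationRegistry,
                check_enabled: bool = False) -> bool:
    """Mirror the mechanics above: the check is off by default (point 1),
    and the user-controllable flag can disable it entirely (point 5)."""
    if check_enabled and registry.is_revoked(package_bytes):
        return False  # refuse the install
    return True       # install proceeds, possibly unchecked


registry = RevocationRegistry()
bad_app = b"malicious-payload"
registry.revoke(bad_app)

assert try_install(bad_app, registry) is True                       # default: check off
assert try_install(bad_app, registry, check_enabled=True) is False  # check on: blocked
```

The sketch makes the policy gap visible: revocation only protects the user when the check is both reachable and switched on, which is exactly what points 1–5 above put in doubt.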

In addition, there are some issues with the policy of revocation:

  1. There is bound to be controversy over who has the authority to decide to revoke a certificate;
  2. Some applications that run without problems on some networks may cause problems to other networks; does this mean that revocation may need to be specific to individual networks?
  3. Some applications that users like and admire may be viewed as malware by other users;
  4. Users may have entered considerable amounts of data into an application that is subsequently forcibly uninstalled because it was revoked; these users may complain about no longer having access to their data;
  5. Some application writers may seek to contest decisions to declare their software as malware.

I’m not saying these issues are insurmountable. There are candidate solutions for all these issues. But I do want to point out that revocation has its own costs.

My own view nowadays is that even a partially working revocation would probably still be a better system than the current reliance on centralised testing of applications before they can be distributed.

By “partially working revocation” I mean a system that works by community reviews. Users who notice problems with applications would be encouraged to publicise these issues, so that the community as a whole can weigh up the evidence. Popular application stores would take this information into account in the material provided to describe the applications available for download.

In principle, users would be willing to pay money for a premium service from application stores, as follows:

  • The application store remembers which users have downloaded which applications;
  • If an application is subsequently deemed to be problematic (on, say, particular phones), then relevant users would be sent messages alerting them of this situation.
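The two bullets above amount to a small piece of bookkeeping on the store's side. The following sketch is purely illustrative; the `AppStore` class and its method names are invented for the example.

```python
from collections import defaultdict

class AppStore:
    """Hypothetical premium-service bookkeeping: remember downloads,
    and work out who to alert when an app is flagged as problematic."""
    def __init__(self):
        self._downloads = defaultdict(set)  # app_id -> set of user ids

    def record_download(self, user_id: str, app_id: str) -> None:
        self._downloads[app_id].add(user_id)

    def users_to_alert(self, app_id: str) -> list:
        # Return the subscribers who installed the flagged application,
        # sorted for deterministic messaging order.
        return sorted(self._downloads[app_id])


store = AppStore()
store.record_download("alice", "chess-v2")
store.record_download("bob", "chess-v2")
assert store.users_to_alert("chess-v2") == ["alice", "bob"]
```

In practice the alert would also carry the reason for the flag (say, the particular handsets affected), but the core of the service is just this download ledger.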

In some ways, this premium service would be akin to the anti-virus monitoring solutions that are already available from some security specialist companies – although the implementation mechanism would be different.

Note finally that I’m not advocating opening all functionality to all developers, without any vetting. I believe that functionality such as AllFiles, DRM, and TCB still needs to be carefully controlled, and cannot fall under a system of “use until revoked”. One argument in support of this view has already been mentioned (point 6 in the list above of issues with the mechanics of revocation).

25 December 2008

Why good people fail to change bad things

Filed under: books, change, complacency, leadership, urgency — David Wood @ 3:22 pm

2008 has been a year of great change in the Symbian world. Important change initiatives that were kicked off in previous years have gathered speed.

2008 has also seen change and trauma at many other levels, throughout the mobile industry and beyond. And the need for widespread change still remains. Daily – perhaps hourly – we encounter items that lead us to wonder: Why isn’t someone getting this changed? Why isn’t someone taking proper care of such-and-such a personal issue, family issue, social issue, organisational issue, political issue, educational issue, environmental issue, operating system issue, ecosystem management issue, usability issue, and so on?

I’ve attended quite a few “change facilitation workshops” and similar over the last 24 months. One thinker who has impressed me greatly, with his analysis of the causes of failure of change initiatives – even when good people are involved in these initiatives – is Harvard Business School Professor John Kotter. Kotter describes a series of eight steps which he recommends that all significant change initiatives follow:

  1. Build a sense of urgency
  2. Establish an effective guiding coalition
  3. Create a clear, appealing vision
  4. Communicate, communicate, communicate
  5. Remove obstacles (“empower”)
  6. Celebrate small wins
  7. Follow through with wave after wave of change
  8. Embed the change at the cultural level.

Lots of other writers and speakers have their own different ways of describing the processes of successful change initiatives, but I find Kotter’s analysis to be the most insightful and inspiring.

The main book that covers this eight stage process is “Leading Change” – a book that must rank high in the list of the most valuable business books ever written.

Subsequently, Kotter used the mechanism of an easily-read “cartoon book”, “Our Iceberg Is Melting: Changing and Succeeding Under Any Conditions”, in order to provide a gentle but compelling introduction to his ideas. It’s a fable about penguins. But it’s a fable with real depth. (I noticed it and purchased a copy in the Inverness airport bookshop one day, and had finished reading it by the time my plane south landed at Gatwick. I was already resolved to find my copy of “Leading Change” and re-read it.)

As Kotter emphasises, the steps in the eight-stage change leadership process have mirror images which are the main eight reasons why change initiatives stumble:

  1. Lack of a sufficient sense of urgency;
  2. Lack of an effective guiding coalition for the change (an aligned team with the ability to make things happen);
  3. Lack of a clear appealing vision of the outcome of the change (otherwise it may seem too vague, having too many unanswered questions);
  4. Lack of communication for buy-in, keeping the change in people’s mind (otherwise people will be distracted back to other issues);
  5. Lack of empowerment of the people who can implement the change (lack of skills, wrong organisational structure, wrong incentives, cumbersome bureaucracy);
  6. Lack of celebration of small early wins (failure to establish momentum);
  7. Lack of follow through (it may need wave after wave of change to stick);
  8. Lack of embedding the change at the cultural level (otherwise the next round of management changes can unravel the progress made).

A few months ago, Kotter released yet another book on the subject of change initiatives that go wrong. Like “Our Iceberg Is Melting”, this is another slim book – only 128 pages, set in a large typeface – making it another very quick read. But, again, the ideas have real merit. This book is called “A Sense of Urgency”.

As the name implies, this book focuses more fully on the first stage of change initiatives. The biggest reason why significant change initiatives fail, in Kotter’s considered view, is because of a lack of:

a real sense of urgency – a distinctive attitude and gut-level feeling that lead people to grab opportunities and avoid hazards, to make something important happen today, and constantly shed low-priority activities to move faster and smarter, now.

Instead, most organisations (and most people) become stuck in a combination of complacency and what Kotter describes as “false urgency”:

  • Complacency is frequently fuelled by past successes and time-proven strengths – that may, however, prevent organisations from being fully aware of changes in circumstances, technologies, and markets;
  • False urgency involves more activity than productivity: “It is frenetic. It is more mindless running to protect themselves or attack others, than purposive focus on critical problems and opportunities. Run-run, meet-meet, talk-talk, defend-defend, and go home exhausted.”

Kotter provides a helpful list of questions to help organisations realise if they are suffering from over-complacency and/or false urgency:

  • Are critical issues delegated to consultants or task forces with little involvement of key people?
  • Do people have trouble scheduling meetings on important initiatives (“Because, well, my agenda is so full”)?
  • Is candour lacking in confronting the bureaucracy and politics that are slowing down important initiatives?
  • Do meetings on key issues end with no decisions about what must happen immediately (except the scheduling of another meeting)?
  • Are discussions very inwardly focused and not about markets, emerging technologies, competitors, and the like? …
  • Do people run from meeting to meeting, exhausting themselves and rarely if ever focusing on the most critical hazards or opportunities? …
  • Do people regularly blame others for any significant problems, instead of taking responsibility and changing? …

The centrepiece of “A sense of urgency” is a set of four tactics to increase a true sense of urgency:

  1. Bring the outside in. Reconnect internal reality with external opportunities and hazards. Bring in emotionally compelling data, people, video, sights, and sounds.
  2. Behave with urgency every day. Never act content, anxious, or angry. Demonstrate your own sense of urgency always in meetings, one-on-one interactions, memos, and email, and do so as visibly as possible to as many people as possible.
  3. Find opportunity in crises. Always be alert to see if crises can be a friend, not just a dreadful enemy, in order to destroy complacency. But proceed with caution, and never be naive, since crises can be deadly.
  4. Deal with the NoNos. Remove or neutralise all the relentless urgency-killers: people who are not skeptics but who are determined to keep a group complacent or, if needed, to create destructive urgency.

The rest of the book fleshes out these tactics with examples (taken from Kotter’s extensive consulting and research experience) and additional checklists. To my mind, there’s a great deal to learn from here.

Footnote: Kotter’s emphasis on the topic of “real urgency” may seem to run counter to one of the most celebrated messages of the literature on effectiveness, namely the principle that people should focus on matters that are important rather than matters that are merely urgent. In the renowned “first things first” language of Stephen Covey, people ought to prioritise “Quadrant two” (activities which are important but not urgent) over “Quadrant three” (activities which are urgent but not important).

To my mind, both Kotter and Covey are correct. We do need to start out by figuring out which activities are most important. And then we have to ensure that we keep giving sufficient attention to these activities. Kotter’s insight is that organisations and people can address this latter task by generating a sufficient sense of urgency around these activities. In other words, we should drive certain key targets out of Quadrant two into Quadrant one. That way, we’ll be more likely to succeed with our key change initiatives.

24 December 2008

Symbian Signed and pirated applications

Filed under: piracy, Symbian Signed — David Wood @ 9:22 pm

In the spirit of “divide and conquer” I’d like to try again to focus on just one out of the many sub-topics that whirl around discussions of Symbian Signed. On this occasion, the particular sub-topic is:

  • Is there merit in using (or modifying) Symbian Signed processes to reduce the prevalence of pirated Symbian applications?

I stated the underlying requirement as follows in “Symbian Signed basics”:

c. Reducing the prevalence of cracked software

To make it less likely that users will install “cracked” free versions of commercial applications written by third parties, thereby depriving these third parties of income.

The idea is simple enough:

  • A developer D0 creates an application A0, has it signed, and sells it for a fee
  • To avoid users making and distributing copies of that application, without paying additional fees to the developer, the developer includes an element of copy protection in the application
  • This restricts the application to run on a device identified by (say) an IMSI or an IMEI
  • Some users will be developers in their own right, who possess the programming skills to alter the application to bypass the copy-protection code, creating a cracked version A1
  • In principle, A1 can be copied and will run on a wider range of devices, thereby depriving the developer of additional income
  • However, because A1 is a tampered version of A0, the original signature is no longer valid, so A1 will fail to install.
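The last step – the original signature failing on the tampered version A1 – can be illustrated with a toy example. Real code signing uses public-key cryptography and certificate chains; here an HMAC over the package bytes stands in for the signature, and the key and package contents are invented for the example.

```python
import hashlib
import hmac

SIGNING_KEY = b"symbian-signed-demo-key"  # invented for the example

def sign(package: bytes) -> bytes:
    """Produce a signature over the exact package bytes."""
    return hmac.new(SIGNING_KEY, package, hashlib.sha256).digest()

def verify(package: bytes, signature: bytes) -> bool:
    """Install-time check: does the signature match these bytes?"""
    return hmac.compare_digest(sign(package), signature)


a0 = b"app-code ... if imei != licensed_imei: refuse() ..."
sig = sign(a0)                 # developer D0 has A0 signed
assert verify(a0, sig)         # an untampered copy installs

# A cracker edits the binary to bypass the copy-protection check,
# producing A1 from A0:
a1 = a0.replace(b"refuse()", b"allow()")
assert not verify(a1, sig)     # the original signature is now invalid
```

This is why the cracker must obtain a fresh signature for A1 – which is exactly the step the next paragraph turns to.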

On the other hand, any developer D1 can access the Symbian Signed mechanism to put a different signature onto the application A1, thereby completing the circumvention of the copy-protection mechanism. The lower the expense of obtaining a signature, and the easier that process becomes (for example, by removing an independent testing phase), the more likely it is that cracked but installable applications (like A1) will circulate.

This is where the requirement to “make it easier for developers to carry out widespread beta testing” comes into tension with the requirement to “reduce the prevalence of cracked software”.

OK, having laid out the context, it’s time for me to state my own opinion on the matter.

I suspect that piggy-backing on Symbian Signed is probably not the best route for a developer D0 to avoid pirate versions of their application A0 circulating. That’s for the following reasons:

  1. It seems inevitable that the Symbian Signed mechanism will continue to become cheaper and easier to operate – in order to address the huge demand to “make it easier for developers to carry out widespread beta testing”
  2. The only kinds of apps which will be difficult for cracker developers D1 to re-sign are those which make use of some high-powered capabilities (like AllFiles or DRM or TCB), which in turn only apply to a small proportion of applications like A0.

So developers D0 ought instead to seek to use other copy-protection mechanisms – such as those involving DRM.

At the same time, the pressure for users to seek free copies of applications will reduce, provided the prices levied for these applications seem reasonable to large numbers of users. In turn, one thing that will allow these prices to remain low is if the population of users buying the applications is large, and if there is an efficient marketplace mechanism (akin to the iPhone AppStore) for users to discover and purchase applications.

(Aside: One more avenue to explore is whether mechanisms could be put in place for developers to earn a proportion of ongoing network data or advertising revenues from the use of their application.)

To summarise: I’d like to take the question of “Reducing the prevalence of cracked software” off the Symbian Signed discussion table. (But I remain open to being persuaded otherwise.) That table is already cluttered enough, and the more we can remove from it, the easier it will be to reach a satisfactory consensus view.

Footnote: This posting is #3 out of N I expect to be making about Symbian Signed, where N could become as large as 10.
