
30 June 2015

Securing software updates

Software frequently goes wrong. That’s a fact of life whose importance is growing – becoming, so to speak, a larger fact of life. That’s for three reasons:

  1. Complex software is spreading more widely into items where, previously, it was present (if at all) only in simpler form. This includes clothing (“wearable computing”), healthcare accessories, “connected home” consumer goods, automobiles (“connected vehicles”), and numerous “Internet of Things” sensors and actuators. More software means a greater likelihood of software error – and a greater likelihood of being hacked (compromised).
  2. Software in these items is increasingly networked together, so that defects in one piece of software can have effects that ricochet unexpectedly. For example, a hacked thermostat can end up reporting industrial secrets to eavesdroppers on the other side of the planet.
  3. By design, modern-day software is frequently open – meaning that its functionality can be configured and extended by other pieces of software that plug into it. Openness provides the possibility for positive innovation, in the way that apps enhance smartphones, or new themes enhance a webpage design. But that same openness enables negative innovation, in which plug-ins subvert the core product. This type of problem arises due to flaws in the set of permissions that expose software functionality from one module to another.

All three of these factors – the intrinsic defects in software, defects in its network connectivity, and defects in permission systems – can be exploited by writers of malware. Worryingly, there’s a mushrooming cybercrime industry that creates, modifies, and deploys increasingly sophisticated malware. There can be rich pickings in this industry. The denizens of Cybercrime Inc. can turn the principles of software and automation to their advantage, resulting in mass-scale deployment of their latest schemes for deception, intrusion, subterfuge, and extortion.

I recently raised these issues in my article “Eating the world: the growing importance of software security”. In that article, I predicted an imminent sea-change in users’ attitudes towards the possibility of software security vulnerabilities: complacency will give way to purposeful alarm. Companies that are slow to respond to this change will find their products discarded by users – regardless of how many “cool” features they contain. Security is going to trump functionality, in a way it hasn’t done previously.

One company that has long been aware of this trend is Redbend (which was acquired by HARMAN in summer 2015). They’ve been thinking hard for more than a dozen years about the dynamics of OTA (over the air, i.e. wireless) software updates. Software updates are as much of a fact of life as software bugs – in fact, more so. Updates deliver fixes to bugs in previous versions; they also roll out new functionality. A good architecture for efficient, robust, secure software updates is, therefore, a key differentiator:

  • The efficiency of an update means that it happens quickly, with minimal data costs, and minimal time inconvenience to users
  • The robustness of an update means that, even if the update were to be interrupted partway through, the device will remain in a usable state
  • The security of an update means that it will reliably deliver software that is valid and authentic, rather than some “Trojan horse” malware masquerading as bona fide software (see the sketch below).
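
To make those three properties concrete, here is a minimal sketch of the verify-then-apply flow, written in Python purely for illustration. The function, slot, and file names are hypothetical; a production OTA client would verify a public-key signature (using FIPS-approved algorithms) rather than a bare digest, and would typically write to raw partitions rather than files. The shape of the logic, though, is the standard one: verify first, write to the inactive slot, then switch atomically, so that an interrupted update leaves the old software bootable.

    import hashlib
    import hmac
    import os
    import shutil

    def apply_update(package_path, expected_sha256, inactive_slot, boot_flag_path):
        """Illustrative only: verify an update, stage it in the inactive slot,
        then atomically switch the boot flag so an interruption at any point
        leaves the device able to boot its previous software."""
        # Security: reject anything that is not bit-for-bit the expected package.
        digest = hashlib.sha256()
        with open(package_path, "rb") as f:
            for chunk in iter(lambda: f.read(64 * 1024), b""):
                digest.update(chunk)
        if not hmac.compare_digest(digest.hexdigest(), expected_sha256):
            return False

        # Robustness: write to the *inactive* slot, never the running system.
        staged = inactive_slot + ".staging"
        shutil.copyfile(package_path, staged)
        os.replace(staged, inactive_slot)          # atomic rename on POSIX

        # Atomicity: only now point the bootloader at the new slot.
        flag_tmp = boot_flag_path + ".tmp"
        with open(flag_tmp, "w") as f:
            f.write(inactive_slot)
            f.flush()
            os.fsync(f.fileno())
        os.replace(flag_tmp, boot_flag_path)       # power loss before this line: old slot still boots
        return True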

According to my email archives, my first meeting with representatives of Redbend was as long ago as December 2002. At that time, I was Executive VP at Symbian with responsibility for Partnering. Since Redbend was one of the new “Platinum Partners” of Symbian, I took the time to learn more about their capabilities.

One person I met in these initial meetings was Gil Cordova, at that time Director of Strategic Marketing at Redbend. Gil wrote to me afterwards, confirming our common view as to what lay ahead in the future:

Redbend deals with an enabling technology and solution for OTA updating of mobile devices.

Our solution enables device manufacturers and operators to update any part of the device software including OS, middleware systems and applications.

The solution is based on our patented technology for creating delta-updates which minimize the update package size ensuring it can be cost-effectively sent and stored on the device with little bandwidth and memory consumption. In addition we enable the update to occur within the device memory constraints ensuring no cost-prohibitive memory needs to be added…

OTA updates can help answer the needs of remote software repair and fixing to the device software, as well as streamline logistics when deploying devices…
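
The delta-update idea Gil describes can be made concrete with a deliberately naive sketch: diff the old and new images block by block, ship only the blocks that changed, and check the reconstructed image before accepting it. The names and block size here are invented, and Redbend’s patented format is far more sophisticated than this, but the principle – send only what changed, and verify the result – is the same.

    import hashlib

    BLOCK = 4096  # illustrative block size

    def make_delta(old, new):
        """Return (new_length, changed_blocks) for a naive block-level delta."""
        changed = []
        for offset in range(0, len(new), BLOCK):
            new_block = new[offset:offset + BLOCK]
            if new_block != old[offset:offset + BLOCK]:
                changed.append((offset, new_block))
        return len(new), changed

    def apply_delta(old, delta, expected_sha256):
        """Rebuild the new image on the device, refusing it if the hash is wrong."""
        new_len, changed = delta
        image = bytearray(old[:new_len].ljust(new_len, b"\x00"))
        for offset, block in changed:
            image[offset:offset + len(block)] = block
        result = bytes(image)
        if hashlib.sha256(result).hexdigest() != expected_sha256:
            raise ValueError("reconstructed image failed verification")
        return result

For a typical point release, where most blocks are unchanged, the delta is a small fraction of the full image – which is what makes over-the-air delivery affordable in bandwidth and in on-device storage.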

At that time, some dozen years ago, the idea that mobile phones would have more and more software in them was still relatively new – and was far from being widely accepted as a good thing. But Redbend and Symbian foresaw the consequences, as in the final paragraph of Gil’s email to me:

All the above points to the fact that if software is a new paradigm in the industry then OTA updating is a very crucial and strategic issue that must be taken into account.

OTA has, indeed, been an important issue since that time. But it’s my view that the full significance is only now becoming apparent. As security is poised to “eat the world”, efficient and reliable OTA capabilities will grow yet further in importance. It will be something that more and more companies will need to include at the heart of their own product offerings. The world will insist on it.

A few days ago, I took a closer look at recent news from HARMAN Connected Services – in particular at its architecture for cybersecurity. I saw a great deal that I liked:

Secure Car

  • Domain isolation – to provide a strict separation between different subsystems (e.g. parts of the overall software system on a car), with the subsystems potentially running different operating systems
  • Type-1 hypervisor – to isolate different subsystems from hardware resources, except where such access is explicitly designed in
  • Driver virtualization – to allow additional peripherals (such as Wi-Fi, cameras, Bluetooth, and GPS) to be added quickly into an existing device with the same secure architecture
  • Software update systems – to enable separate remote software management for the head (dashboard) unit, telematics (black-box) unit, and numerous ECUs (electronic control units) – with a 100% success record in deploying updates on more than one million vehicles
  • State-of-the-art FIPS (Federal Information Processing Standards) encryption – applied to the entirety of the update process
  • Intrusion Detection and Prevention systems – to identify and report any malicious or erroneous network activity, and to handle the risks arising before the car or any of its components suffers any ill-effect.
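
The last point can be illustrated in miniature. The toy detector below flags a CAN identifier whose message rate suddenly exceeds its learned baseline – flooding being a common symptom of spoofed traffic on an in-vehicle network. This is my own sketch, not HARMAN’s or Redbend’s implementation; real automotive intrusion-detection systems combine many more signals (payload plausibility, timing signatures, cross-ECU consistency) and feed a considered prevention response rather than a simple flag.

    import time
    from collections import defaultdict, deque

    class RateAnomalyDetector:
        """Toy heuristic: flag a CAN ID whose recent message rate greatly
        exceeds the baseline learned for that ID."""

        def __init__(self, window_s=1.0, factor=3.0):
            self.window_s = window_s
            self.factor = factor
            self.recent = defaultdict(deque)    # can_id -> timestamps inside the window
            self.baseline = {}                  # can_id -> expected messages per window

        def learn(self, can_id, msgs_per_window):
            self.baseline[can_id] = msgs_per_window

        def observe(self, can_id, timestamp=None):
            """Record one frame; return True if this ID now looks anomalous."""
            now = time.monotonic() if timestamp is None else timestamp
            window = self.recent[can_id]
            window.append(now)
            while window and now - window[0] > self.window_s:
                window.popleft()
            expected = self.baseline.get(can_id)
            if expected is None:
                return False                    # unknown ID: a real IDS might flag this too
            return len(window) > self.factor * expected

    # Hypothetical usage: a wheel-speed frame normally arrives ~100 times per second.
    detector = RateAnomalyDetector()
    detector.learn(0x1A0, 100)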

I know from my own background in designing software systems that this kind of all-points-considered security cannot be tacked onto an existing system. Provision for it needs to be designed in from the beginning. That’s where Redbend’s long heritage in this space shows its value.

The full benefit of taking an architectural approach to secure software updates – as opposed to trying to fashion security on top of fundamentally insecure components – is that the same architecture is capable of re-use in different domains. It’s therefore no surprise that Redbend software management solutions are available, not only for connected cars, but also for wearable computers, connected homes, and machine-to-machine (M2M) devices.

Of course, despite all these precautions, I expect the security arms race to continue. Software will continue to have bugs, and the cybercrime industry will continue to find ingenious ways to exploit these bugs. The weakest part of any security system, indeed, is frequently the humans involved, who can fall victim to social engineering. In turn, providers of security software are seeking to improve the usability of their systems, to reduce both the likelihood and the impact of human operator error.

This race probably has many laps to run, with new surprises ahead on each lap. To keep ahead, we need allies and partners who constantly look ahead, straining to discern the forthcoming new battlegrounds, and to prepare new defences in sufficient time. But we also need to avail ourselves of the best present tools, so that our businesses have the best chance of avoiding being eaten in the meantime. Figuring out which security tools really are best in class is fast becoming a vital core competency for people in ever-growing numbers of industries.

Footnote: I was inspired to write this post after discussions with some industry colleagues involved in HARMAN’s Engineering a Connected Life program. The views and opinions expressed in this post are my own and don’t necessarily represent HARMAN’s positions, strategies or opinions.

6 March 2010

Fragmentation beyond good

Filed under: architecture, developer experience, fragmentation — David Wood @ 6:53 pm

Hmm, I thought I’d finished writing about mobile fragmentation, but the topic keeps rumbling on.

Some comments from distinguished industry colleagues prompt me to address this topic one more time.

My fellow Symbian co-founder Juha Christensen suggests “Fragmentation is good”:

I believe that already in 2011 we will see smartphones outsell “lesser” phones. By the end of 2012, there will be an installed base of over one billion smartphones. That year alone, over 600 million smartphones will be sold worldwide.

One of the things that makes phones so different from PCs has to do with micro-segmentation. Like we’re getting used to hearing “there’s an app for everything,” we are a year or so away from “there’s a phone for every person”. Sony Ericsson’s newly released XPERIA 10 Mini is a good example of what’s to come. Intense segmentation of the market, opening up thousands of niches. Just like we all don’t drive the same car, the days are over where everyone in Silicon Valley either has the same BlackBerry, iPhone or Nexus One.

With over a billion people using smartphones, we will see thousands of micro-segments being serviced by thousands of different designs and usage models…

So far, I fully agree with Juha.  However, micro-segmentation doesn’t need to imply platform fragmentation.

With a good architecture, users can get lots of choice, even though there’s an underlying commonality of developer APIs.

As far as users are concerned, the platform supports multiple different interfaces – even though, as far as developers are concerned, there’s a single programming interface.
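
As a minimal illustration of that separation (hypothetical names throughout), here is one contacts API with two quite different user interfaces built on top of it. Application code depends only on the API; device makers remain free to vary the experience above it for each market niche.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Contact:
        name: str
        number: str

    class ContactStore:
        """The single programming interface that third-party code relies on."""
        def __init__(self):
            self._contacts: List[Contact] = []
        def add(self, contact):
            self._contacts.append(contact)
        def search(self, text):
            return [c for c in self._contacts if text.lower() in c.name.lower()]

    class TouchUi:
        """One user experience: large targets for a finger-driven screen."""
        def __init__(self, store):
            self.store = store
        def render(self, query):
            return "\n".join(f"[ {c.name} : {c.number} ]" for c in self.store.search(query))

    class KeypadUi:
        """Another user experience: compact output for a softkey-driven device."""
        def __init__(self, store):
            self.store = store
        def render(self, query):
            return "; ".join(f"{c.name} {c.number}" for c in self.store.search(query))

    store = ContactStore()
    store.add(Contact("Ada", "555-0100"))
    print(TouchUi(store).render("ad"))
    print(KeypadUi(store).render("ad"))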

But Juha continues:

And this is where the goodness of fragmentation comes in. An operating system design comes with inherent restrictions. There is a need to make sure apps run the same way on all devices, that aspect ratios are more or less the same and a ton of other restrictions aimed at making the experience really good.

If one operating system was going to serve all form factors, all market segments, all use cases and all price points, the market would start trending towards a lowest common denominator…

This is where I disagree.  A well-designed mobile operating system can support a vast range of different kinds of devices.  There doesn’t need to be a “lowest common denominator” effect.

There are, of course, some important benefits of competition between multiple different mobile operating systems.  But it’s a matter of degree.  If there’s too much competition or too much fragmentation, chaos ensues.

That’s why I prefer to say that the amount of fragmentation we have today, in the mobile space, is “beyond good”.

Sebastian Nyström also commented on the previous discussion, via Twitter.

Again we see the theme: fragmentation is part of a chain of cause and effect that has good results for consumers.  And, to an extent, I agree.  But only to a degree.

If the current amount of fragmentation is good, does that mean that twice as much fragmentation will be twice as good for consumers?  Or that ten times as much fragmentation will be ten times as good for consumers…?

If fragmentation is unconditionally good for consumers, should the designers of Qt (to pick, as an example, one important intermediate mobile platform) deliberately fragment it, into several different incompatible versions?

Clearly, it’s a question of degree.  But what degree is optimal?

Bruce Carney – former head of the Symbian Developer Network – raised the following points:

My $0.02 worth: don’t disagree that the industry should invest energy in standardization, but just cannot see the set of circumstances (or benevolent dictatorship) that will drive it as the value chain is too complex and fragmentation is a nice control point for too many actors…

So, I agree with Richard’s article. I have spent 10 years listening to developers bleat on about fragmentation, and if I could give them a simple message it would be “deal with it” – it allows you to find a niche and exploit it. Without fragmentation there would be a small number of winner-take-alls and most of you wouldn’t exist.

Yes, there’s good sense in telling developers to “deal with it”.  But there’s danger in that approach, too.

I’m reminded of a discussion about strategy towards developers that recurred time and again within Symbian:

  • Should we try to minimise compatibility breaks (such as between Symbian OS v8 and v9), or should we just tell developers to “deal with it”?
  • Should we try to minimise platform fragmentation, or should we just tell developers to “deal with it”?

The argument for telling developers to just accept things was that, after all, a big market awaited them on Symbian devices.  The pain of dealing with the inconsistencies (and so on) of the Symbian world would be worth it.

However, history threw up new competitors, who had significantly simpler development systems.

And that’s a reminder, at a different level, for everyone preaching complacency about today’s mobile developer systems.  We need to remember that developers have choices.  Instead of working on mobile projects, they may well choose to work on something quite different instead.  The mobile opportunity is huge, but it’s by no means the only opportunity in town.  Those of us who want the mobile industry to thrive should, therefore, be constantly looking for ways to address the pain points and friction that mobile developers are experiencing.

10 February 2010

The mobile multitasking advantage

Filed under: Android, applications, architecture, iPhone, multitasking, Psion, universities — David Wood @ 11:48 am

How important is it for a mobile device to support background multitasking?

Specifically, how important is it that users can install, onto the device, applications which will continue to run well in background whilst the user is simultaneously using the device for another purpose?

Humans are multitasking creatures.  We get involved in many activities simultaneously: listening to music, browsing the web, holding conversations, taking notes, staying on the alert for interruptions… – so shouldn’t our mobile devices support this model of working?

One argument is that this feature is not important.  That’s because the Apple iPhone fails to offer it, and the sales of the iPhone don’t seem to have suffered as a result.  The applications built into the iPhone continue to operate in background, but downloaded apps don’t.  iPhone apps continue to sell well.  Conclusion: mobile multitasking has little importance in the real world.  Right?

But that’s a weak argument.  Customer sentiment can change.  If users start talking about use cases which the iPhone fails to support – and which other smartphones support well – then public perception of the fitness of the iPhone system software could suffer a significant downturn.  (“iPhone apps – they’re so 2009…”)

How about Android?  That offers background multitasking.  But does it do it well?

My former colleague Brendan Donegan has been putting an Android phone to serious use, and has noticed some problems in how it works.  He has reported his findings in a series of tweets:

I say, with all honesty that Android’s multitasking is a huge travesty. Doesn’t even deserve to be called that

Poor prioritisation of tasks. Exemplar use-case – Spotify [music playing app] + camera

Spotify will jitter and the photo will be taken out of sync with flash, giving a whited out image

Symbian of course handles the same use case flawlessly

Android really is just not up to doing more than one ‘intensive’ task at a time

Even the [built-in] Android music player skips when taking a photo

(Brendan has some positive remarks about his Android phone too, by the way.)

Mark Wilcox suggests a diagnosis:

sounds like the non-real-time, high interrupt latency on Linux is causing some problems in multimedia use cases
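
That diagnosis can be illustrated with a toy model. An audio thread keeps a buffer of a few tens of milliseconds of decoded sound; if the scheduler keeps it off the CPU for longer than the buffer’s remaining slack – say, while another process monopolises the device during a camera capture – the buffer drains and the listener hears a skip. The numbers below are invented purely to show the effect.

    def audio_skips(buffer_ms, refill_ms, scheduling_delays_ms):
        """Count refill cycles where scheduling delay plus refill work exceeds the buffer."""
        return sum(1 for delay in scheduling_delays_ms if delay + refill_ms > buffer_ms)

    # A 40 ms buffer shrugs off ordinary jitter...
    print(audio_skips(40, 5, [2, 3, 5, 4, 2, 6]))      # 0 skips
    # ...but not the long delays while a camera capture hogs the CPU.
    print(audio_skips(40, 5, [2, 3, 55, 80, 40, 6]))   # 3 audible skips

A lower-latency kernel bounds those delays; more generous buffering hides them, at the cost of responsiveness elsewhere.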

Personally, I find this discussion fascinating – on both an architecture level and a usability level.  I see a whole series of questions that need answers:

  1. Are these results applicable just to one Android phone, or are they intrinsic to the whole platform?
  2. Could these problems be fixed by fairly simple software modifications, or are they more deeply rooted?
  3. How do other mobile platforms handle similar use cases?  What about feature phone platforms?
  4. How important is the use case of playing music in background, while taking a photograph?  Are there other use cases that could come to be seen as more significant?

Perhaps this is a good topic for a university research project.  Any takers?

(Related to this, it would be interesting to know more about the background processing abilities of modern feature phones.  For example, it used to be the case that some feature phones would discard the contents of partially written text messages if there was an incoming voice call.  Has anyone looked into this recently?)

Regardless of the merits of these particular use cases, I am convinced that software responsiveness is important.  If the software system is tied up attending to task A when I want it to do task B, I’m frustrated.  I don’t think I’m alone in this feeling.

My 1990s Psion PDA typically runs more than a dozen apps in parallel (several word processors, spreadsheets, databases, plus an agenda, a tube map app, a calculator, and so on) and switches instantly between them.  That sets my baseline expectation.

Here’s another mobile use case that’s on my mind a lot these days.  It applies, not to a PDA or mobile phone, but to my laptop.  It’s not (I think) a device problem, but a wider system problem, involving network connectivity:

  • I frequently find myself in mobile situations where I’m browsing websites on my laptop (for example, on the train), and the pages take ages to load;
  • The signal indicator on the built-in wireless modem app says there’s a strong signal, but for some reason, wireless traffic is squeezed;
  • I sit watching empty tabs on my Firefox browser, waiting and waiting and waiting for content to appear;
  • In frustration, I’ll often open another tab, and try to visit the BBC website – to rule out the possibility that the server for the other web page(s) has gone down – but that gives me another blank page;
  • Eventually, things recover, but in the meantime, I’ve been left twiddling my thumbs.

When I switch to a WiFi connection instead of a cellular connection, things are usually better – though I’ve had the same bitter experience with some WiFi hotspots too (for example, in some Starbucks coffee shops).

So what should be the highest priority for system architects to optimise?  Responsiveness comes high on my own wishlist.  I recognise that this will often require changes in several parts of the software system.

2 February 2010

Cutting-edge computing science research for the real world

Filed under: architecture, computer science, Google, universities — David Wood @ 11:54 pm

It is an amazing time in computer science.

This is a field that, while it is about 50 years old, has more opportunity today than it has ever had, by a large factor.

These were among the opening statements made by Google’s VP of Research & Special Initiatives, Alfred Z. Spector, in a colloquium delivered a couple of weeks ago to the Computer Science & Engineering faculty of the University of Washington.  A video of the presentation is available from the University of Washington CSE website.

I mentioned this video at the tail end of a previous blogpost, “In praise of hybrid AI”.  The video is full of interesting comments about the direction of computer science.

As context, Spector mentioned “four application areas in flux today”:

  • publishing, education, healthcare, and government.

He also mentioned three “systems areas evolving”:

  • ubiquitous high performance networking, distributed computing, and new end-user devices.

This provided a prelude to “three truly big results brewing”:

  1. Totally transparent processing
  2. Ideal distributed computing
  3. Hybrid, not Artificial Intelligence

It’s worth highlighting some points about each of these “big results”.  In all cases, Google seek to follow a quantitative approach, looking at large sets of data, and checking results as systems are incrementally changed.  As Spector said, “more data is better…”

1. Totally transparent processing

Spector spelt out a vision encompassing four dimensions in which processing should be “effectively transparent”:

  • Across all types of end-user access devices,
  • across all human languages (both formal and informal),
  • across all the modes of  information (eg text, images, audio, video, sensor data, maps, timelines),
  • and across every body of knowledge (both online and offline).

In this vision:

  • There should be “no dependence or occlusions because something has got in the way” or is in the wrong format;
  • There should be “fluidity across all these forms”.

Some subsets of this grand vision include solving “voice to text”, “image recognition”, “find similar images”, and “language translation”.  Spector claimed that progress was being made across many of these sub-problems.

2. Ideal distributed computing

Spector pointed out that

Distributed computing is 30 years old but, until recently, not very deeply understood;

Understanding of (truly) large-scale, open, integrated distributed systems has been limited.

Particular aspects of distributed systems that had not been deeply understood included:

  • Requirements for systems in which the application needs (and APIs) are not known in advance;
  • Systems with 10^6 or even 10^7 processes, with consequent enormous complexity.

Spector claimed that – as in the case of transparent processing – “there has been lots of incremental progress done with distributed systems, picking away at problem areas”.

Improvements that can be expected for huge distributed systems of computers, arising from computer science research, include:

  • Online system optimisation;
  • Data checking – verifying consistency and validating data/config files;
  • Dynamic repair – eg find the closest feasible solution after an incident (computer broke down);
  • Better efficiency in energy usage of these systems;
  • Improvements in managing security and privacy.

3. Hybrid, not Artificial Intelligence

Hybrid intelligence is like an extension of distributed computing: people become part of the system that works out the answers.

Spector said that Google’s approach was:

To see if some problem can be solved by people and computers working together.

As a familiar example, Search doesn’t try to offer the user only the one best result.  It provides a set of results, and relies on the user picking answers from the list generated by the computer.

Hybrid intelligence can be contrasted with AI (artificial intelligence):

  • AI aims at creating computers as capable as people, often in very broad problem domains.  While progress has been made, this has turned out to be very challenging;
  • Instead, it has proven more useful for computers to extend the capabilities of people rather than operate in isolation, and to focus on more specific problem areas.

Computer systems can learn from feedback from users, with powerful virtuous circles.  Spector said that aggregation of user responses has proven extremely valuable in learning, such as:

  • feedback in ranking of results, or in prioritising spelling correction options;
  • semi-supervised image content analysis / speech recognition / etc.
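
Here is a minimal sketch of that feedback loop, with invented names and weights – nothing like a real ranking system in scale or subtlety. The computer proposes an ordering, people click, and the aggregated clicks nudge future orderings.

    from collections import defaultdict

    class FeedbackRanker:
        """Blend a base relevance score with aggregated user clicks."""

        def __init__(self, click_weight=0.1):
            self.click_weight = click_weight
            self.clicks = defaultdict(int)      # (query, doc) -> click count

        def record_click(self, query, doc):
            self.clicks[(query, doc)] += 1

        def rank(self, query, candidates, base_scores):
            def score(doc):
                return base_scores.get(doc, 0.0) + self.click_weight * self.clicks[(query, doc)]
            return sorted(candidates, key=score, reverse=True)

    ranker = FeedbackRanker()
    docs = ["a.html", "b.html", "c.html"]
    base = {"a.html": 1.0, "b.html": 0.9, "c.html": 0.8}
    print(ranker.rank("example query", docs, base))     # ordered by base score alone
    for _ in range(5):
        ranker.record_click("example query", "c.html")  # users keep choosing c.html
    print(ranker.rank("example query", docs, base))     # c.html now rises to the top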

(This idea has evolved over time, and was previously known as “The combination hypothesis”: computers would become smarter if different methods of learning can be combined.  See for example the 2003 article “IBM aims to get smart about AI” from a time when Spector worked at IBM.  It’s good to see this idea bearing more and more fruit.)

Selected questions and answers

A couple of the questions raised by the audience at the end of the lecture were particularly interesting.

One questioner asked if Google’s guidelines for research projects specified any “no-go areas” that should be avoided.  Spector answered:

No one wants a creepy computer.  So the rule is … “don’t be creepy”.

(Which is an unusual twist on “don’t be evil”!)

Spelling this out in more detail:

  • Google aim to apply extremely neutral algorithms to ranking and crawling;
  • They want systems that are very responsive to users’ needs, without being in any way creepy;
  • Views on “what is creepy” may change over time (and may be different in different parts of the world).

A second audience member asked if there are risks to pursuing a quantitative, evolutionary approach to computer science problems.  Spector answered:

  • Yes, the research might get stuck in a local maximum;
  • So you can’t do everything “strictly by the numbers”.  But having the numbers available is a great guide.

Ongoing research

As I viewed this video, part of my brain told me that perhaps I should return to an academic life, in the midst of a computer science faculty somewhere in the world.

I share Spector’s conclusion:

It’s a time of unprecedented diversity and fertility in computer science – and amazing challenges abound;

The results from computer science should continue to make the world a better place.

Spector pointed out that key research challenges are published on the Google Research Blog.  Examples he listed included:

  • increasingly fluid partnership between people and computation;
  • fundamental changes in the methods of science;
  • rethinking the database;
  • CS+X, for all X (how Computer Science, CS, can assist and even transform other fields of study, X);
  • computing with ultra-low power (eg just ambient light as a power source).

Stop press: Google’s Focused Research Awards

Coincidentally, I see that Google have today made a new announcement about their support for research in specific areas of computer science, at a small number of universities worldwide.  The four areas of research are:

  • Machine Learning
  • Use of mobile phones as data collection devices for public health and environment monitoring
  • Energy efficiency in computing
  • Privacy.

It looks like “energy efficiency in computing” is receiving the largest amount of funding.  I think that’s probably the right choice!

6 February 2009

Reviewing architecture

Filed under: architecture, Symbian Foundation — David Wood @ 10:25 pm

I spent two days earlier this week in the company of a group of highly experienced software architects. As you may be aware, software architects are a special breed of people. They’re the people who enjoy worrying about the fundamental technical design of a software system. They try to identify, in advance:

  • The best partitioning of the overall system into inter-connected parts (“divide and conquer”);
  • The approach to the design that will preserve the right amount of flexibility for future evolution of the system (“design for change: expect the unexpected”);
  • The technical decisions that will have the biggest impact on the lifetime success of the system (“finding the biggest bang per buck”);
  • The aspects of the design that will become the hardest to change (and which, therefore, are the most urgent to get right);
  • The software development processes that will be the most sacrosanct in the creation of the system (the processes that even the best software engineers will be obliged to follow).

People who get these decisions right are among the most valuable members of the entire project team.

The software architects that I met over these two days were employees from eight of the initial board member companies of the Symbian Foundation. This group of architects has been meeting roughly once a month, for around the last six months, to carry out preparatory work ahead of the formal launch of the Symbian Foundation. The grouping goes by the name “Architecture and software collaboration working group”. Because that’s a bit of a mouthful, it’s usually abbreviated to ASW WG.

It’s not the only such working group. For example, there’s also the FRR WG (looking at Feature, Roadmap and Releases), the FOL WG (Foundation Operational Launch), the FG WG (Foundation Governance), and the IMC WG (Interim Marketing & Communications). In each case, the working group consists of personnel from the initial board member companies, who meet regularly (face-to-face or on a conference call) to progress and review projects.

Several of these working groups will transition into ongoing “councils” when the Symbian Foundation is launched. For example, the ASW WG will transition into the Architecture Council. The Symbian Foundation councils are being formed to support the foundation community and grow the competitiveness of the Symbian platform by:

  • Identifying high-level market, user and technical requirements;
  • Soliciting contributions that address those requirements;
  • Coordinating community contributions into regular platform releases;
  • Providing transparency for all community members regarding future platform developments.

The four main councils can be summarised as follows:

  • The Feature and Roadmap Council invites proposals for contributions from the community and seeks to coordinate new contributions into a unified platform (or tools) roadmap;
  • The Architecture Council invites and reviews technical solutions for new contributions in order to ensure the architectural integrity, backward compatibility and fitness-for-purpose of enhancements to the platform;
  • The User Interface Council invites and reviews descriptions of new user interface elements and develops guidelines to help ensure high quality device user experiences;
  • The Release Council coordinates the integration of contributions into stable and timely platform and tools releases.

As I said, I attended this week’s meeting of the ASW WG. Personnel from eight of the initial board member companies were present. It was evident that some of the architects were already on very familiar terms with each other – they’ve worked together on previous Symbian projects over the years. Other participants spoke less often, and seemed to be newer to the group – but when they did speak up, their contribution was equally pertinent.

The meeting had a full agenda. About half of the time was devoted to collectively reviewing (and in some cases reworking) documents that are to be published on the Symbian Foundation web infrastructure around the time of the launch of the organisation. These documents included:

  • The operating charter for the Architecture Council
  • Foundation device software structuring principles
  • Template and Example for Technical Solution Descriptions
  • Foundation Device Compatibility Verification Process
  • Reference Execution Environment Selection Process
  • Recommended practice in the use of the software collaboration tools chosen by the Symbian Foundation – including the SCM (Mercurial) and Issue Tracking (Bugzilla) tools.

The rest of the meeting involved:

  • A discussion of the XML metadata files which are to be maintained (by package owners) for each component in the whole system
  • A review of progress of the project to create the infrastructure and web services which will be accessed by foundation members and by the general public following the launch of the foundation
  • A discussion of the principles for identifying and supporting package owners.

From time to time, the gathering briefly bordered on the surreal. For example, it was debated whether packages should most accurately be described as “collections of collections of components” instead of “collections of components”. And there was a serious discussion of whether “vendor supported environment” should gain a hyphen, to become “vendor-supported environment”. But this kind of intense scrutiny is what you’d expect from the highly analytic individuals in attendance – especially when you realise that it’s the desire of these architects to communicate their design ideas as clearly and unambiguously as possible.

(I say all this from the standpoint of someone who had “Software Architect” as the job title on my business cards for several years in the early 1990s.)

Indeed, there was a lot of good-natured ribbing between the attendees. The remark “You might hate me for suggesting this, but…” was interrupted by the rejoinder, “Don’t worry, I already hate you”, followed by laughter.

The meeting became particularly animated, near the end, during the review of the project to create the Symbian Foundation web infrastructure. It became clear to the working group members that the documents they had long debated and refined would soon be published to a much, much wider audience. All the months of careful preparation will culminate in what is anticipated to be a flurry of interest from Symbian Foundation members in the proposals and votes that will take place at the first meetings of the councils:

  • Although there will be at most 12 voting members on any of these councils, the agendas and supporting documents will be made visible to all Symbian Foundation members in advance of council meetings;
  • These members will be able to make their own opinions known through channels such as mailing lists;
  • Over time, the members who repeatedly raise the most insightful comments and suggestions about the business of a council will be invited to formally join that council (and will gain voting rights).

28 January 2009

Package Owners contemplating the world ahead

Filed under: architecture, Nokia, packages, passion, Symbian Foundation — David Wood @ 3:13 pm

I’ve just spent two days at the very first Symbian Foundation “Package Owners workshop”, held in a Nokia training facility at Batvik, in the snow-covered countryside outside Helsinki. The workshop proved both thought-provoking and deeply encouraging.

In case the term “package owner” draws a blank with you, let me digress.

Over the last few years, there have been several important structural rearrangements of the Symbian OS software engineering units, to improve the delivery and the modularity of the operating system code. For example, we’ve tracked down and sought to eliminate instances where any one area of software relied on internal APIs from what ought to have been a separate area.

This kind of refactoring is an essential process for any large-scale fast-evolving software system – otherwise the code will become unmaintainable.

This modularisation process is being taken one stage further during the preparation for opening the sources of the entire Symbian Platform (consisting of Symbian OS plus UI code and associated applications and tools). The platform has been carefully analysed and divided up into a total of around 100 packages – where each package is a sizeable standalone software delivery. Each package will have its own source code repository.

(Packages are only one layer of the overall decomposition. Each package is made up of from 1 to n component collections, which are in turn made up of from 1 to n components. In total, there are around 2000 components in the platform. Going in the other direction, the packages are themselves grouped into 14 different technology domains, each with a dedicated “Technology Manager” employed by the Symbian Foundation to oversee their evolution. But these are stories for another day.)
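
For readers who like to see structure as code, the hierarchy can be expressed as a handful of Python dataclasses. Every name below is invented for illustration; only the shape – domains containing packages, packages containing collections, collections containing components – reflects the decomposition described above.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Component:                 # around 2000 of these across the platform
        name: str

    @dataclass
    class ComponentCollection:       # each package holds 1 to n of these
        name: str
        components: List[Component] = field(default_factory=list)

    @dataclass
    class Package:                   # around 100 of these, each with its own repository and owner
        name: str
        owner: str
        collections: List[ComponentCollection] = field(default_factory=list)

    @dataclass
    class TechnologyDomain:          # 14 of these, each overseen by a Technology Manager
        name: str
        packages: List[Package] = field(default_factory=list)

    # Hypothetical example of the nesting:
    messaging = Package(
        name="messaging",
        owner="a.package.owner@example.com",
        collections=[ComponentCollection("sms", [Component("smsstack")])],
    )
    comms = TechnologyDomain("communications", [messaging])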

Something important that’s happened in the last fortnight is that package owners have been identified for each of the packages. These package owners are all highly respected software engineers within their domain of expertise.

We’re still working on the fine detail of the description of the responsibilities of package owners, but here’s a broad summary:

  • Publish the roadmap for their package
  • Have technical ownership for the package
  • Be open to contributions to their package from the wider software community
  • Evaluate all contributions, and provide useful feedback to the contributors
  • Maintain a good architecture for the package
  • Act as feature sponsor in their package area
  • Manage package deliveries.

This is a huge task, so most package owners will rely on a network of approved committers and other supporters in order to carry out their role.

(Instead of “package owner”, the word “maintainer” is used with a similar meaning by some other open source projects.)

Over the next month, the nominated package owners (along with some of their line managers) are each attending one of three introductory workshops. Each workshop lasts two days. The goal of the workshop is to review and discuss how software development processes will alter, once the source code for the package is available to a much wider audience. Many processes will remain the same as before, but others will alter, and yet others will be brand new.

As I said, the first of these workshops has just finished. There were people from at least three different continents in attendance. I knew a handful before, but for many others, it was the first time for me to meet them. Without exception, they are singularly impressive individuals, with great CVs, and (in most cases) with glittering track records inside Nokia or Symbian.

Not surprisingly, the newly minted package owners brought a variety of different expectations to the event. Several already have considerable experience working with open source software. Others are, naturally, somewhat apprehensive about the changes.

A series of presenters covered matters such as:

  • An overview of the operation and architecture of the Symbian Foundation
  • Great software developers and open source principles
  • Tips on growing a successful community of external contributors
  • The importance of meritocracy
  • Tools and processes
  • IPR considerations, licensing issues, and legal aspects.

There were also small group breakout sessions on topics such as “What are the key challenges and issues facing package owners?” and “What are we going to do differently from before?”

What impressed me the most were the events on the first evening. After a dinner and optional sauna session, the participants gathered again in the seminar room, and spent another three hours reviewing ideas arising from the group breakout sessions from earlier in the day. The passion of the package owners stood out. In their own individual ways, they displayed a shared strong desire to explore new ways of engaging a wider community of software developers, without destabilising the mission-critical projects already being undertaken. These are all busy people, with multiple existing tasks, but they were ready to brainstorm ways to adopt new skills and processes in order to improve the development of their packages. (And I don’t think it was just the Lapin Kulta speaking.)

I half expected the fervour of the debate to die down after a while, but the buzz in the room seemed as strong at 10.50pm as at 8pm. There was a constant queue of people trying to get hold of the marker pen which had been designated (with limited success) as giving someone the right to speak to the group. The workshop facilitator had to speak up forcefully to point out that the facilities would be locked shut in ten minutes.

With this kind of intelligence and fervour being brought to bear in support of the Symbian Foundation’s tasks, I’m looking forward to an exciting time ahead.
