dw2

6 August 2008

Two fallacies on the value of software

Filed under: free software, Highlife entertainment, Moore's Law — David Wood @ 8:55 am

Software is everywhere. Unfortunately, buggy software is everywhere too.

I’m writing this en route to a family holiday in South America – four countries in 15 days. The holiday starts with a BA flight across the Atlantic. At first sight, the onboard “highlife” entertainment system is impressive. My son asks: do they really have all these music CDs and movies available? “Moore’s Law in action” was my complacent reply.

The first sign of trouble was when the flight attendant welcome announcement, along with the usual stuff about “if you sleep, please ensure your fastened seat belt is visible on top of your blanket“, contained a dire warning that no one should try to interact with the video screens in any way while the system was going through its lengthy startup sequence. Otherwise the system would be prone to freezing or other malfunctions.

It seems the warning was in vain. From my vantage point in the very back row of seats on the plane, as the flight progressed I could see lots of passengers calling over the flight attendants to point out problems with their individual systems. Films weren’t available, touchscreen interactions were random, etc. The attendants tried resetting individual screens, but then announced that, because so many screens were experiencing problems, the whole system would be restarted. And, by the way, it would take 30 minutes to reboot. All passengers would need to keep their hands off the screen throughout that period of time, even though many tempting buttons advertising features of the entertainment system would be displayed on the screen during that time.

One flight attendant forlornly tried to explain the situation to me: “it’s like when you’re starting up a computer, you have to wait until it’s completely ready before you can start using it”. Well, no. If software draws a button on the screen, it ought to cope with a user doing what comes naturally and pressing that button. That’s one of the very first rules of GUI architecture. In any case, what on earth is the entire system doing, taking 30 minutes to reboot?
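The principle is easy to sketch. Here is a toy model (the class and method names are my own invention, not anything from the real in-flight system) of a unit that draws its buttons immediately but handles early presses gracefully instead of freezing:

```python
class SeatbackUnit:
    """Toy model of a seat-back entertainment unit.

    It follows the GUI rule mentioned above: any button that is
    drawn on screen must cope with being pressed, even during startup.
    """

    def __init__(self):
        self.ready = False    # startup not yet complete
        self.playing = None

    def finish_startup(self):
        self.ready = True

    def press(self, button):
        # Defensive check: a press before startup completes is answered
        # politely and ignored, rather than wedging the whole unit.
        if not self.ready:
            return "Please wait: system is still starting up"
        self.playing = button
        return "Playing: " + button

unit = SeatbackUnit()
early = unit.press("Casablanca")   # pressed during the 30-minute boot
unit.finish_startup()
late = unit.press("Casablanca")    # "Playing: Casablanca"
```

The point is not the three lines of logic, but that someone has to write them: a screen that shows a button while being unable to service it has skipped exactly this step.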

To be fair, BA’s inflight entertainment system is hardly alone in having this kind of defect. I’ve often seen various bizarre technobollocks messages scrolling on screens on the back of aeroplane seats. I also remember a Lufthansa flight in which the software controlling the reclining chairs (I was flying business class on that occasion) was clearly faulty – it would freeze, and all subsequent attempts to adjust the chair position would be ignored. The flight attendants that day let me into the secret that holding down three of the buttons simultaneously for a couple of seconds would forcibly reboot the system. It was a useful piece of knowledge!

And to be fair, when the system does work, it’s great to have in-flight access to so much entertainment and information.

But I draw the following conclusion: Moore’s Law is not enough. Moore’s Law enables enormous amounts of data – and enormous amounts of software – to be stored on increasingly inexpensive storage media. But you need deep and wide-ranging skills in software creation if the resulting complex systems are actually to meet the expectations of reasonable end users. Software development, done right, is going to remain a high-value-add activity for the foreseeable future.

“Moore’s Law is enough” is the first fallacy on the value of software. Hot on its heels comes a second idea, equally fallacious:

The value of software is declining towards zero.

This second fallacy is wrapped up with a couple of ideas:

  1. The apparent belief of some people that all software ought to be sold free-of-charge
  2. The observation that the price of a fixed piece of software does tend to decline over time.

However, the second observation misses the important fact that the total amount of software is itself rapidly increasing – both in terms of bulk, and in terms of functionality and performance. Multiply one quantity that is slowly declining (the average price of a fixed piece of software) by another that is booming (the total amount of all software) and you get an answer that refutes the claim that the value of software itself is declining towards zero.
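As a back-of-envelope illustration (the rates here are invented purely for the arithmetic, not real market data): suppose the average price of a fixed piece of software halves every three years, while the total amount of software in use grows by 60% a year. The product of the two still rises steeply:

```python
def total_value(years, unit_price0=100.0, volume0=1_000):
    """Toy model: total value = declining average unit price x booming volume."""
    unit_price = unit_price0 * 0.5 ** (years / 3)   # halves every 3 years
    volume = volume0 * 1.6 ** years                  # grows 60% per year
    return unit_price * volume

value_now = total_value(0)      # 100,000 units of value in this toy model
value_later = total_value(10)   # roughly ten times larger, despite the price decline
```

The exact numbers don’t matter; what matters is that a modest exponential growth in volume comfortably outruns a modest exponential decline in unit price.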

Yes, it’s reasonable to expect that individual pieces of software (especially those that have stopped evolving, or which are evolving slowly) will tend, over time, to be given away free of charge. But as new software is made available, and as software keeps on being improved, there’s huge scope for value to be created, and for a portion of that value to be retained by first-rate developers.

Footnote: Even after the BA entertainment system restarted, there were still plenty of problems. Fast-forwarding through a film to try to get to the previous location was a very hit-and-miss affair: there was far too much latency in the system. The team responsible for this system should be hanging their heads in shame. But, alas, they’re in plenty of company.

3 August 2008

Human obstacles to audacious technical advances

Filed under: cryonics, flight, leadership, UKTA — David Wood @ 7:11 pm

[A] French noblewoman, a duchess in her eighties, …, on seeing the first ascent of Montgolfier’s balloon from the palace of the Tuileries in 1783, fell back upon the cushions of her carriage and wept. “Oh yes,” she said, “Now it’s certain. One day they’ll learn how to keep people alive forever, but I shall already be dead.”

Throughout history, individual humans have from time to time dared to dream that technological advances could free us from some of the limitations of our current existence. Fantastic tales of people soaring into the air, like birds, go back at least as far as Icarus. Fantastic tales of people with lifespans exceeding the biblical “three score years and ten” go back at least as far as, well, the Bible. The French noblewoman mentioned above, in a quote taken from Lewis Lapham’s 2003 Commencement speech at St. John’s College Annapolis, made the not implausible connection that technology’s progress in solving the first challenge was a sign that, in time, technology might solve the second challenge too.

Mike Darwin made the same connection at an utterly engrossing UKTA meeting this weekend. Since the age of 16 (he’s now 53), Mike has been trying to develop technological techniques to significantly lower the temperature of animal tissue, and then to warm up the tissue again so that it can resume its previous function. The idea, of course, is to enable the cryo-preservation of people who have terminal diseases (and who have nominally died of these diseases) until such time in the future as science has a cure for that disease, when they can be revived.

Mike compared progress with the technology of cryonics to progress with the technology of powered manned flight. Renowned physicist Lord Kelvin had said as late as 1896 that “I do not have the smallest molecule of faith in aerial navigation other than ballooning“. Kelvin was not the only person with such a viewpoint. Even the Wright brothers themselves, after some disappointing setbacks in their experiments in 1901, “predicted that man will probably not fly in their lifetime“. There were a host of detailed, difficult engineering problems that needed to be solved by painstaking analysis. These included three kinds of balance and stability (roll, pitch, and yaw) as well as lift, power, and thrust. Perhaps it is no surprise that it was the Wright brothers, as accomplished bicycle engineers, who first sufficiently understood and solved this nexus of problems. Eventually, in 1903, they did manage one small powered flight, lasting just 12 seconds. Later that day, a flight lasted 59 seconds. That was enough to stimulate much more progress. Only 16 years later, John Alcock and Arthur Brown flew an airplane non-stop across the Atlantic. And the rest is history.

For this reason, Mike is particularly keen to demonstrate incremental progress with suspension and revival techniques. For example, there is the work done by Brian Wowk and Gregory Fahy and others on the vitrification and then reanimation of rabbit kidneys.

However, the majority of Mike’s remarks were on topics different from the technical feasibility of cryonics. He spoke for over two hours, and continued in a formal Q&A session for another 30 minutes. After that, informal discussion continued for at least another 45 minutes, at which time I had to make my excuses and leave (in order to keep my date to watch Dark Knight that evening). It was a tour-de-force. It’s hard to summarise such a lengthy passionate yet articulate presentation, but let me try:

  1. Cryonics is morally good
  2. Cryonics is technically feasible
  3. By 1968, Cryonics was a booming enterprise, with many conferences, journals, and TV appearances
  4. However, Cryonics has significantly failed in its ambitions
  5. Unless we understand the real reasons for these failures, we can’t realise the potential benefits of this program
  6. The failures primarily involve people issues rather than technical issues
  7. In any case, we should anticipate fierce opposition to cryonics, since it significantly disrupts many core elements of the way society currently operates.

The most poignant part was the description of the people issues during the history of cryonics:

  • People who had (shall we say) unclear ethical propriety (“con-men, frauds, and incompetents”)
  • People who failed to carry out the procedures they had designed – yet still told the world that they had followed the book (with the result that patients’ bodies suffered grievous damage during the cryopreservation process, or during subsequent storage)
  • People who were technically savvy and emotionally very committed yet who lacked sufficient professional and managerial acumen to run a larger organisation
  • People who lacked skills in raising and handling funding
  • People who lacked sufficient skills in market communications – they appeared as cranks rather than credible advocates.

This rang a lot of bells for me. The technology industry as a whole (including the smartphone industry) often struggles with similar issues. The individuals who initially come up with a great technical idea, and who are its first champions, are often not the people best placed to manage the later stages of development and implementation of that idea. The transition between early stage management and any subsequent phase is tough. But it is frequently essential. (And it may need to happen more than once!) You sometimes have to gently ease aside people (ideally at the same time finding a great new role for them) who are your personal friends, and who are deeply talented, but who are no longer the right people to lead a program through its next stage. Programs often grow faster than people do.

I don’t see any easy answers in general. I do agree with Mike on the following points:

  • A step-by-step process, with measurable feedback, is much preferable to reliance on (in essence) a future miracle that can undo big mistakes made by imprecise processes today (this is what Mike called “the fallacy of our friends in the future“);
  • Feedback on experiments is particularly important. If you monitor more data on what happens during the cryopreservation process, you’ll discover more quickly whether your assumptions are correct. Think again about the comparable experiences of the Wright brothers. Think also of the importance of carrying out retrospectives at regular intervals during a project;
  • Practice is essential. Otherwise it’s like learning to drive by just studying a book for six months, and then trying to drive all the way across the country the first time you sit in the driver’s seat;
  • The quality of the key individuals in the organisations is of paramount importance, so that sufficient energies can be unleashed from the latent support both in the organisation and in wider society. Leadership matters greatly.

Footnote: I first came across the reference to the tale of the venerable French duchess in the commentary to Eliezer Yudkowsky’s evocative online reminiscences regarding the death of his 19-year-old brother Yehuda Nattan Yudkowsky.

31 July 2008

Smartphone Show keynotes looking stronger than ever

Filed under: Smartphone Show — David Wood @ 11:19 pm

If you’ve been keeping your eye on the Symbian Smartphone Show website, you’ll have seen the plans for the keynote sessions taking shape over the last few weeks. The lineup looks particularly strong this year.

Day One (Tuesday 21st Oct) features:

  • Nigel Clifford, Symbian CEO, presenting on “Symbian – 10 years of innovation – the next wave: Symbian Foundation Vision”
  • Ho-Soo Lee, EVP of Mobile Solutions Center, Samsung
  • Rob Shaddock, Corporate VP of Motorola, presenting on “Innovating in an open mobile world”.

These individual keynotes will be followed by a panel session, “Symbian Foundation – setting the future of mobile software free“, with speakers from the Symbian Foundation board member companies.

Day Two (Wednesday 22nd Oct) features:

  • Kai Öistämö, EVP Devices, Nokia, presenting on “The future of smartphones”
  • Mats Lindoff, CTO of Sony Ericsson, presenting on “Sony Ericsson and the Symbian Foundation: Open to innovation and differentiation”
  • Benoit Schillings, CTO of Trolltech, presenting on “Symbian & Qt: the best of both worlds”.

Again, these individual presentations will be followed by a panel session, “Who will win the runtime race“:

As the consumer’s appetite for increasingly advanced mobile services grows, the decision of choosing which runtime environment to support these services becomes vitally important. With many different leading runtime environments hosted on Symbian OS, both the vendor and developer communities are keeping a close eye on which will emerge as the preferred environment.

The speakers on this second panel cover many of the key mobile runtime environments.

Of course, the keynotes are only one of many reasons to attend this show. For example, see here for the extended agenda for Day One, and here for the extended agenda for Day Two. And that only scratches the surface of the wider set of formal and informal activities that will take place.

It should be fascinating.

I’ve had the good fortune to be close to the heart of nearly all the major Psion and Symbian expo events, from 1992 onwards. The event in 2008 looks like it will top them all.

Footnote: There are likely to be more changes in the keynote lineup, during the whirlwind months in between now and the show itself. Check the official website for updates.

27 July 2008

Understanding Open Source Licensing

Filed under: CPL, EPL, GPL, Open Source, OSiM, OSL — David Wood @ 8:18 pm

“What’s the best book to read for an introduction to Open Source?”

I’ve already given one set of answers to this question, in my article, “Clear thinking about open source“. One reply to that article – from Joel West, a writer and researcher on Open Innovation and Open Source whose advice I value – urged me to include one more book in my reading list: Lawrence Rosen’s “Open Source Licensing: software freedom and intellectual property law“. This weekend I’ve finished reading it. And indeed, I do now endorse it as being clearly written yet also highly insightful.

Initially, I tended to shy away from this book, instead preferring the book by Heather Meeker that I covered in my earlier article. Both books focus on open source licensing issues, but Meeker’s was published this year, whereas Rosen’s dates from 2004. So Rosen’s book makes no mention of GPL v3, or Sun’s experience with open-sourcing Java, or even the Eclipse Public License (EPL) which the Symbian Foundation is likely to adopt. That makes Rosen’s book appear out of date. However, I realised that one license which the book does cover (comprehensively) is the Common Public License (CPL) which is the precursor of the EPL and which differs from the EPL in very few places. Reassured, I dipped into the book – and then could hardly put it down.

In summary, I now recommend both the Meeker book and the Rosen book for their coverage of open source licensing. They complement each other nicely. There’s a bit of overlap, but also lots of good material in each book that you won’t find in the other.

Specifically, here are a few of the “aha”s or other learnings I took away from Rosen’s book:

1.) The ten principles of the Open Source Definition are actually quite hard to understand in places (this comment came as a relief to me, since I had been thinking the same thing).

2.) Patents and Copyrights should be approached as parallel sets of legal principle – the former applicable to ideas, and the latter to expressions of ideas. That’s a far better approach than initially just thinking about Copyrights, and then trying to squeeze in considerations about Patents at the end.

3.) One of the key differences between different open source licenses is in the treatment of patent licenses – and in the different circumstances in which patent licenses (and/or copyright licenses) can be withdrawn in the wake of various kinds of patent infringement suits. There’s a tricky balance that has to be drawn between the needs of both licensor and licensee concerning the continuing value of their respective patent portfolios.

4.) One piece of license evolution covered in the book – the difference between v2.0 and v2.1 of the Open Software License (OSL) – closely mirrors the principal difference between the CPL and the EPL: it’s a reduction in the circumstances in which a patent license can be withdrawn when a licensee brings a separate patent infringement case against the licensor.

5.) The insistence in GPL v2 about not being compatible with other licenses that introduce additional restrictions (even restrictions that the initial drafters of GPL v2 had not considered), is a real drawback of that license, since it unnecessarily hinders aggregation of code written under similar but different licenses. (Possible restrictions that have emerged more recently include provisions for defence against patent infringement lawsuits or to protect the licensor’s trademarks.)

6.) “… sections of the LGPL are an impenetrable maze of technological babble. They should not be in a general purpose software license.” (page 124)

7.) Disclaimers of liability that are generally written into open source licenses may be overridden by general consumer legislation. Recognising this, the CPL (and hence the EPL) introduces a clause that allocates particular responsibility to “commercial contributors” to defend and indemnify all other contributors against losses, damages, or costs.

8.) One possible way for a company to make money from software is via the mechanism Rosen calls “Eventual Source”: code is released as open source after some delay period, but recipients can elect to pay an early access license fee to be able to work with the code (under a non-open source license) ahead of its release as open source.

I’ve still got lots of questions about open source licensing (for example, about the prospects for wider adoption of GPL v3, and about how successful Rosen’s own preferred OSL is likely to be in the longer run). I’ll be attending the Open Source in Mobile conference in Berlin in September, when I hope to find out more answers! (And no doubt there will be new questions too…)

26 July 2008

Naming the passion killers

Filed under: developer experience, fun, open phones, Symbian Signed — David Wood @ 6:06 pm

Passion makes a big difference. Posters all over Symbian premises (and on our websites) boldly declare that we “are at our best when we… love working for Symbian, drive to succeed, believe in ourselves, and take pride in what we do…”

That’s the Symbian description of the practical importance of passion. Along with values such as people, collaboration, integrity, and excellence, passion is one of Symbian’s six declared corporate values.

Like many other companies, Symbian each year carries out an internal employee satisfaction survey. The survey is conducted by an external agency, who provide us with information on how our results compare with broadly similar surveys held by other high-tech companies. In the most recent survey, aggregate Symbian employee views demonstrated strong Passion (80% positive rating). Of the six values, this one had the strongest support of all. The score also came in notably higher than the benchmark. In general, our employees enjoy working here, and put their hearts into their activities.

In some ways, “passion” is a longer word for “fun”. The good news is that, on the whole, Symbian employees enjoy and value their work. The bad news, however, is as I covered in my previous blog posting, “Symbian, just for fun“: many developers outside the company have a less positive feeling about working with Symbian OS software. They may persevere with writing Symbian OS software because their employer pays them to do so, and because of the somewhat attractive prospect of a share in a growing 200M+ unit market, but they often lack the kind of inner motivation and satisfaction that can put them into a super-productive state of “flow“.

The encouraging responses I’ve received to that posting (both via email and online) strengthen my view that it’s vitally important to identify and understand the inhibitors to developer flow – the killers of Symbian passion. That’s a big topic, and I suspect I’ll be writing lots more on it in the months ahead. But let’s make a start.

Lack of clarity with Symbian Signed

The experience of my correspondent ilgaz is probably quite common:

I think the issue here is: we (even technical users) don’t really get what should be signed, what shouldn’t.

Ilgaz wanted to use a particular third party application (Y-Tasks by Dr Jukka), and thought that it would first need to be signed with a developer certificate. That proved to be an awkward process. However, it turns out that the application is ready to use (for many purposes) without any additional signing. So the attempt to get a developer certificate was unnecessary.

Some might say that Symbian Signed itself is intrinsically a passion killer. I disagree – as I’ve argued elsewhere. But what does kill passion here is the confusion about the rules for Symbian Signed. You can’t expect flow from confusion. I see six causes for this confusion:

  1. Different devices implement Symbian Signed in different ways. Some devices helpfully support a setting to allow the installation of self-signed apps, as well as Symbian Signed ones. Others do not;
  2. Different operators have different views about what kinds of applications they want to allow on their phones;
  3. The subject of permissions for the different capabilities of different pieces of software is intrinsically complex;
  4. The operation of Symbian Signed has changed over time. It’s great that it has improved, but some people still remember how it used to work, and that confuses them;
  5. “Once bitten, twice shy”: past bad experiences sometimes over-colour present views on the topic;
  6. A small number of people seem to be motivated to spread particularly bad vibes about Symbian Signed.
In this situation, we can’t expect to reverse all the accumulated mistrust and apprehension overnight. But the following steps should help:

  • Continue to seek to improve the clarity of communications;
  • Be alert to implementation issues (eg an overworked website – as experienced some months back) and seek to address them quickly;
  • Avoid a divergence of implementations of different application approval schemes by different network operators.

It’s my profound hope that the attractive statements of common aims of openness, made by the various parties supporting the Symbian Foundation, will translate into a unity of approaches towards application approval schemes.

Lack of reprogrammable devices

Another correspondent, puterman, points out:

Getting people to develop apps just for fun is one thing, but getting them to hack the actual OS is another thing. For that to be of interest, there have to be open devices available, so that the developers can actually see their code running.

I agree with the importance of quick feedback to changes made in your software. If you change the lower levels of the software, you’ll need to be able to re-program an actual device.

The Linux community shows the way here, with the Trolltech Greenphone and the FIC OpenMoko Neo1973 and FreeRunner devices. It’s true that there have been issues with these devices. For example, Trolltech eventually discontinued the Greenphone, and the FIC devices have proved quite hard to purchase. However, as the Symbian Foundation software becomes increasingly open source, we can reasonably expect the stage-by-stage appearance of phones that are increasingly end-user re-programmable.

Lack of well-documented API support for “interesting” features of a phone

Marcus Groeber makes a series of insightful points. For example,

One of the main things mobile developers would want to do is make use of the unique features of a mobile phone (connectivity, built in camera, physical interaction with the user). However, it is those areas where documentation is still most patchy and API support is erratic (CCameraAdvancedSettings anyone?).

In my view, this aspect of mobile development should be acknowledged to a much greater degree, and the documentation efforts focused accordingly: If there is a feature in a built-in app of the phone, chances are that a developer will want to try and improve on that. Can s/he?…

I believe that these moments of frustration – finding an API that looks useful in the SDK docs, then spending an evening writing an application that uses it, only to get KErrNotSupported in the end – is probably among the chief reasons for people abandoning their pet projects…

True, many “fun” programmers (me included) don’t want to wade through tons of documentation and whitepapers before writing their first proof-of-concept – but to me this makes it even more important that the existing documentation is streamlined, accurate and compact.

Improving our developer documentation remains one of the top-priority goals at Symbian. In parallel, we’re hoping that additional publications from Symbian Press (and others) will help to guide developers more quickly through the potential minefields of APIs for the more interesting functionality. The book “Quick Recipes on Symbian OS” (which I mentioned at the end of an earlier posting, “Mobile development in a hurry“) is intended to address this audience.

Of course, as Simon Judge points out, sometimes it’s not a matter of improving the documentation of existing APIs. Sometimes, what’s required is to improve the APIs themselves.

API awkwardness across the UI-OS boundary

The last passion-killer I’ll mention for now is another one raised by Marcus Groeber:

most of the “interesting” bits of developing for devices actually come from the licensee’s layers of API (in my case, mostly S60), and I believe it is here where there is most work to be done, as well as the interface between the two

The ad-hoc-ish nature of the S60 UI, which seems to require a lot of experimenting and guesswork for developing even very simple screen layouts that mimic closely what is already present in the phone in dozens of places. Even after years of development, I still consider the CAkn and CEik listbox classes a jungle.

As one of the original designers of the CEik listbox class hierarchy (circa 1995-6) perhaps I should keep my head low at this point! (Though I can claim little direct credit – or blame – for the subsequent evolution of these classes.)

However, the bigger point is the following: both Symbian and S60 have recognised for many years that the separation of the two software development teams into two distinct companies has imposed drawbacks on the overall design and implementation of the APIs of functionality that straddles the two domains. Keeping the UI and the OS separate had some positives, but a lot of negatives too. Assuming the acquisition by Nokia of Symbian receives regulatory approval, the resulting combined engineering teams should enable considerably improved co-design. The new APIs will, hopefully, inspire greater fascination and approval from those who use them!

23 July 2008

Symbian, just for fun

Filed under: CIX, developer experience, fun, OPL, Python — David Wood @ 9:21 pm

“There are two kinds of OSS developers: the guys who do things for fun, and the guys who do OSS because they are paid to do so. In order for an open source project to really flourish and take over the world, you need both.”

These comments were made a few days ago by Janne Jalkanen of Nokia, speaking in a personal capacity. I think Janne is completely right. My own view is that the only reliable way for the Symbian Foundation software to become the most widely used software platform on the planet, is if that software also becomes the most widely liked software platform on the planet.

The two kinds of OSS developers aren’t completely distinct. Ideally the ones who are paid by their company to work on the software should also have a strong inner desire to do that work – to go the extra mile out of the sheer enjoyment and fascination they get from that software.

I’ve seen that kind of deep enthusiasm for software many times in my life. I first ventured onto online community discussion groups in the early 1990s, using the login name “dw2” on the CIX (Compulink Information eXchange) bulletin boards. The Psion devices of that time – running a 16-bit precursor to Symbian OS – could be programmed using an interpreted language called OPL. Hobbyists made increasingly creative use of the possibilities of that language, creating some highly impressive games, serviceable business applications, alternative personal information management functionality, and lots more besides. I was drawn into providing support and encouragement to this burgeoning community. Plucking an example at random from September 1992 from my archives, here’s a reply I posted to someone who had been pushing the envelope of OPL functionality:

Access to C routines from Opl

I don’t suppose it’ll cause any harm to pre-announce something that Psion will shortly make available to Series3 Opl programmers. Namely a mechanism to access functionality written in a C library, from Opl. What will be possible is as follows:

  • Someone provides some C functionality in a so-called DYL library
  • Opl programs can hook into this functionality by means of the LibSend operating system service (CALL ($cf)).

Psion will make some suitable DYLs available, and it will be up to third parties to provide other general or specific DYLs. For example, in a hypothetical company writing software for the S3, out of a team of say six programmers, only one would need to understand C. All routine coding could be done in Opl, with only the performance-critical parts being done in C (together with a few parts that are technically out of the reach of pure Opl).

Even before you (BobG) raised this subject, Psion were working on a specific DYL to quicksort the index of a DBF file.

Regards, DavidW
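The division of labour described in that 1992 posting – routine code in a friendly high-level language, performance-critical routines in a compiled C library – is a pattern that has aged well. As a loose modern analogue (this is Python’s ctypes calling the C runtime’s qsort, not Psion’s DYL mechanism), the sorting example might look like this:

```python
import ctypes
import ctypes.util

# Locate the C runtime library; the fallback name is Linux-specific.
libc = ctypes.CDLL(ctypes.util.find_library("c") or "libc.so.6")

# qsort expects a comparison callback: int cmp(const void *a, const void *b).
CMPFUNC = ctypes.CFUNCTYPE(ctypes.c_int,
                           ctypes.POINTER(ctypes.c_int),
                           ctypes.POINTER(ctypes.c_int))

def compare(a, b):
    # Negative / zero / positive, as qsort requires.
    return a[0] - b[0]

values = (ctypes.c_int * 5)(33, 7, 21, 1, 12)
libc.qsort(values, len(values), ctypes.sizeof(ctypes.c_int), CMPFUNC(compare))
result = list(values)   # [1, 7, 12, 21, 33]
```

As in the DYL scheme, only one member of the team needs to understand the C side; everyone else works in the high-level language and simply calls through the boundary.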

That was 1992, when many enthusiasts were happy to while away their free time programming devices powered by EPOC16. Fast forward again to 2008. Janne goes on to say,

The problem with Symbian is that very, very few people touch it for fun. So I believe that while we can open source it, it is going to be very difficult to get people to participate out of their own free will, unless we are prepared to make very serious refactorings to the entire system.

My first instinct is to disagree with Janne here. I’d love to list lots of people I know who do seem to enjoy developing Symbian software, “just for fun”. For example,

  • Python on S60 can be a real joy to use – and supports lots of extensions. (In many ways, Python is for Symbian OS in 2008 what OPL was for EPOC16 back in the 1990s.)
  • The forthcoming new Symbian graphics architecture (“ScreenPlay“) and IP networking architecture (“FreeWay“) are full of interesting software development opportunities
  • The PIPS libraries hide away many of the idiosyncrasies of native Symbian C++ development, and can increase the pleasure of porting certain types of applications to Symbian devices.

However, as Mike Rowehl rightly reminds all would-be Symbian blogging enthusiasts – like me! – the first duty of a blogger is to listen, rather than to speak:

I’m not saying that Nokia doesn’t have market share, I’m saying they don’t have developer mindshare and they haven’t captured the attention of new entrants. How often do you hear about people “fooling around with developing for Symbian” just for fun in their free time? I’ve attended developer focused events in a number of different areas and I’ve heard that very infrequently. Compare that to the number of times you run across people fooling around with iPhone or Android SDKs (or even Maemo for that matter). I’m filtering out all the Silicon Valley events cause we’re weird over here. But even of events in others areas – developers area paying way more attention to the other platforms. You can argue that all you want but it won’t go away, I’m just telling you what I hear. Do with it what you want. If you want to deny it though, you’ve already lost really.

And I can’t deny that, as I search through the blogosphere and developer forums, I find that the number of postings negative about the developer experience of Symbian and S60 kits significantly exceeds the number expressing heartfelt enjoyment. As much as I can find reasons to discount individual postings, I can’t discount the overall weight of comment from such a diverse group of writers.

So all I can say is the following:

  • I see lots of API improvement projects inside the Symbian labs – such as the forthcoming experimental ZString class, an alternative to text descriptors, and the proposed RAll utility classes for simplified resource management – which should be warmly received by a wide audience
  • I believe Symbian’s developer tools and documentation have improved significantly over the last few years, and are continuing to make big leaps forward (though the impressions some developers hold of them are unduly coloured by past bad experiences with older tools or documentation)
  • A more transparent approach to planning and experimentation inside Symbian’s development halls – as befits a switch to open source development – will generate more good ideas (and even some good will…)
  • Experimentation and quick starts on Symbian development projects will become easier.

(I also believe, by the way, that developers’ enthusiasm for their experience on other platforms will decline, unless these other platforms learn to cope with some hard disciplines like binary compatibility and SDK quality control, as their market success grows. For related comments, see “The emperor’s new handset“.)

I close by making a commitment: improved developer experience will be central to the goals of the Symbian Foundation. If the number of people who develop for Symbian “just for fun” doesn’t increase substantially, the Foundation will have failed in its objectives.

20 July 2008

Rationally considering the end of the world

Filed under: bias, prediction markets, risks — David Wood @ 8:38 pm

My day job at Symbian is, in effect, to ensure that my colleagues in the management team don’t waken up to some surprising news one morning and say, “Why didn’t we see this coming?“. That is, I have to anticipate so-called “Predictable surprises“. Drawing on insight from both inside and outside of the company, I try to keep my eye on emerging disruptive trends in technology, markets, and society, in case these trends have the potential to reach some kind of tipping point that will significantly impact Symbian’s success (for good, or for ill). And once I’ve reached the view that a particular trend deserves closer attention, it’s my job to ensure that the company does devote sufficient energy to it – in sufficient time to avoid being “taken by surprise”.

For the last few days, I’ve pursued my interest in disruptive trends some way outside the field of smartphones. I booked a holiday from work in order to attend the conference on Global Catastrophic Risks that’s been held at Oxford University’s James Martin 21st Century School.

Instead of just thinking about trends that could destabilise smartphone technology and smartphone markets, I’ve been immersed in discussions about trends that could destabilise human technology and markets as a whole – perhaps even to the extent of ending human civilisation. As well as the more “obvious” global catastrophic risks like nuclear war, nuclear terrorism, global pandemics, and runaway climate change, the conference also discussed threats from meteor and comet impacts, gamma ray bursts, bioterrorism, nanoscale manufacturing, and super-AI.

Interesting (and unnerving) as these individual discussions were, what was even more thought-provoking was the discussion on general obstacles to clear thinking about these risks. We all suffer from biases in our thinking that operate at both individual and group levels. These biases can kick into overdrive when we begin to contemplate global catastrophes. No wonder some people get really hot and bothered when these topics are discussed, or else suffer strong embarrassment and seek to change the topic. Eliezer Yudkowsky considered one set of biases in his presentation “Rationally considering the end of the world“. James Hughes covered another set in “Avoiding Millennialist Cognitive Biases“, as did Jonathan Wiener in “The Tragedy of the Uncommons” and Steve Rayner in “Culture and the Credibility of Catastrophe“. There were also practical examples of how people (and corporations) often misjudge risks, in both “Insurance and catastrophes” by Peter Taylor and “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes” by Toby Ord and co-workers.

So what can we do, to set aside biases and get a better handle on the evaluation and prioritisation of these existential risks? Perhaps the most innovative suggestion came in the presentation by Robin Hanson, “Catastrophe, Social Collapse, and Human Extinction“. Robin is one of the pioneers of the notion of “Prediction markets“, so perhaps it is no surprise that he floated the idea of markets in tickets to safe refuges where occupants would have a chance of escaping particular global catastrophes. Some audience members appeared to find the idea distasteful, asking “How can you gamble on mass death?” and “Isn’t it unjust to exclude other people from the refuge?” But the idea is that these markets would allow a Wisdom of Crowds effect to signal to observers which existential risks were growing in danger. I suspect the idea of these tickets to safe refuges will prove impractical, but anything that will help us to escape from our collective biases on these literally earth-shattering topics will be welcome.
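
Robin’s prediction markets are typically implemented with his logarithmic market scoring rule (LMSR), which gives a flavour of how refuge-ticket prices could aggregate beliefs into probability estimates. Here is a minimal sketch; the two-outcome market, trade sizes, and the liquidity parameter `b` are illustrative assumptions of mine, not anything proposed at the conference:

```python
import math

def lmsr_cost(quantities, b=100.0):
    """Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def lmsr_prices(quantities, b=100.0):
    """Instantaneous price of each outcome: exp(q_i/b) / sum_j exp(q_j/b).
    Prices sum to 1, so they can be read as the market's probabilities."""
    weights = [math.exp(q / b) for q in quantities]
    total = sum(weights)
    return [w / total for w in weights]

def buy(quantities, outcome, shares, b=100.0):
    """Cost to a trader of buying `shares` of `outcome` is the change in
    the cost function: C(q_after) - C(q_before)."""
    before = lmsr_cost(quantities, b)
    after_q = list(quantities)
    after_q[outcome] += shares
    return lmsr_cost(after_q, b) - before, after_q

# Hypothetical two-outcome market: "catastrophe X occurs" vs "does not".
q = [0.0, 0.0]
print(lmsr_prices(q))             # → [0.5, 0.5]: no information yet
cost, q = buy(q, 0, 50.0)         # a trader backs outcome 0 with 50 shares
print(round(cost, 2), [round(p, 3) for p in lmsr_prices(q)])
# → 28.09 [0.622, 0.378]: the market's probability estimate has moved
```

The point of the mechanism is that every trade moves the quoted prices, so observers can read off a continuously updated collective probability – exactly the “wisdom of crowds” signal Robin described.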

(Aside: Robin and Eliezer jointly run a fast throughput blog called “Overcoming bias” that is dedicated to the question “How can we obtain beliefs closer to reality?”)

Robin’s talk also contained the memorable image that the problem with slipping on a staircase isn’t that of falling down one step, but of initiating an escalation effect of tumbling down the whole staircase. Likewise, the biggest danger from the risks covered in the conference isn’t that any one of them occurs in isolation, but that it might trigger a series of inter-related collapses. On a connected point, Peter Taylor mentioned that the worldwide re-insurance industry would have collapsed altogether if a New Orleans scale weather-induced disaster had followed hot on the heels of the 9-11 tragedies – the system would have had no time to recover. It was a sobering reminder of the potential fragility of much of what we take for granted.

Footnote: For other coverage of this conference, see Ronald Bailey’s comments in Reason. There’s also a 500+ page book co-edited by Nick Bostrom and Milan Cirkovic that contains chapter versions of many of the presentations from the conference (plus some additional material).

17 July 2008

Mobile development in a hurry

Filed under: Mobile Monday, mobile web, Symbian Press — David Wood @ 12:18 pm

“Google Mobile are moving all development away from downloadable apps to the mobile web”

That’s the message mjelly records Charles Wiles, product manager for Google Gears for mobile, as delivering at this week’s MoMo London event.

I was at the same event. I’m not sure I remember hearing quite such an emphatic message as mjelly reports, but I do remember hearing the following:

  • Eric Schmidt (Google CEO) has been asking the Google Mobile team why they only make one app release every six months, whereas development of apps for the PC web browser happens much more quickly
  • Downloadable apps for mobile devices are fraught with problems – including BIG issues with device fragmentation
  • Taking Google Maps for mobile as an example: there are 10+ platforms to support, requiring hundreds of builds in total – it all adds up to PAIN
  • There must be a better way!
  • The better way is to deliver services through the mobile web, instead of via downloadable applications.

I’ve heard this kind of message at previous MoMo London events, from lots of different speakers. Downloadable applications (whether written in native C++ or in Java) introduce lots of problems with development, deployment, and usability, whereas mobile web apps are a whole world simpler. The message that comes across is: if you want rapid development that in turn allows rapid innovation, stick with the mobile web. It’s not a message I’ve enjoyed hearing, but I can’t deny that lots of speakers have said it (in various different ways).

But what made the presentation from Charles Wiles all the more interesting was that, after highlighting difficulties facing downloadable mobile apps, he was equally critical of mobile web applications (which run inside a web browser environment on the device):

  • Mobile web apps suck too!
  • JavaScript takes time to execute on mobile devices, and since it’s single-threaded, it blocks the UI
  • There’s often high network latency
  • Mobile web apps lack access to location, the address book, the camera, and so on.

It’s for this kind of reason that Google has continued to release downloadable versions of their most popular applications. (Incidentally, the native C++ versions of Google Search and Google Maps take pride of place on the Quick Access bar of my Nokia E61i idle screen. They’re in that pole position because I find them both incredibly useful.)
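
Wiles’s point about single-threaded JavaScript blocking the UI applies to any event-driven interface. The sketch below (in Python, standing in for a browser’s event loop; all the names are illustrative, not any real framework’s API) contrasts running heavy work inline with the standard remedy of offloading it to a worker thread and posting the result back as an ordinary event:

```python
import threading
import queue
import time

def heavy_task():
    """Stand-in for an expensive computation, e.g. parsing a large response."""
    total = 0
    for i in range(2_000_000):
        total += i
    return total

def ui_loop_blocking(events):
    """Runs the heavy task inline: no events are handled until it finishes,
    which is exactly the frozen-UI effect on a single-threaded runtime."""
    handled = []
    heavy_task()
    while not events.empty():
        handled.append(events.get())
    return handled

def ui_loop_offloaded(events):
    """Runs the heavy task on a worker thread; its result arrives as just
    another event, so the loop stays free to handle user input meanwhile."""
    handled = []
    def worker():
        events.put(("result", heavy_task()))
    threading.Thread(target=worker).start()
    deadline = time.time() + 5.0
    while time.time() < deadline:
        try:
            event = events.get(timeout=0.05)
        except queue.Empty:
            continue
        handled.append(event)
        if event[0] == "result":
            break
    return handled

events = queue.Queue()
for i in range(3):
    events.put(("tap", i))    # user input queued while the work is pending
print(ui_loop_offloaded(events))   # taps handled first, then ("result", ...)
```

Web Workers in the browser follow the same pattern: the worker has no access to the UI, and communicates purely by posting messages back to the main thread.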

It’s also for this kind of reason that Apple’s initial message about how to develop apps for the iPhone – that developers should just write web applications – was so poorly received. Would-be iPhone developers strongly suspected they could achieve better results, in many cases, by writing downloadable apps. This expectation has been vindicated by the heady events around the recent launch of the iPhone application store.

Four challenges facing mobile web apps

The four factors I generally highlight as limitations in mobile web applications vs. downloaded apps are:

  1. The UI provided by a web browser is general purpose, and is often sub-optimal for a more complex application on the small screen of a mobile device (an example of the unsuitability of the web browser UI in general is when users are confronted with messages such as “Don’t press the Back button now!” or “Only press the OK button once!”)
  2. Applications need to be able to operate when they are disconnected from the network – as in an airplane or during a trip in an Olde World London underground train – or whenever reception is flaky. On a mobile device, the user experience of intermittently connected “push email” from the likes of BlackBerry is far more pleasant than an “always connected web browser” interface to server-side email
  3. Web applications suffer from lack of access to much of the more “interesting” functionality on the phone
  4. Web applications are often more sluggish than their downloaded equivalents.
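
Challenge 2 above is commonly tackled with an “offline-first” pattern: read through a local cache, fall back to the cached copy when the network is away, and queue writes for later synchronisation. Here is a minimal sketch of that pattern; the class and function names are my own inventions, not any particular framework’s API:

```python
class OfflineFirstClient:
    """Sketch of offline-first reads and writes: serve stale data rather
    than fail when disconnected, and remember writes for later replay."""

    def __init__(self, fetch):
        self.fetch = fetch        # callable that may raise ConnectionError
        self.cache = {}           # last known good value per key
        self.outbox = []          # writes made while offline

    def read(self, key):
        try:
            value = self.fetch(key)
            self.cache[key] = value          # refresh cache on success
            return value, "live"
        except ConnectionError:
            if key in self.cache:
                return self.cache[key], "cached"
            raise                            # no local copy: surface error

    def write(self, key, value):
        self.cache[key] = value              # optimistic local update
        self.outbox.append((key, value))     # replay when back online

# Simulated transport that can be toggled offline.
online = True
server = {"inbox": ["msg1"]}
def fetch(key):
    if not online:
        raise ConnectionError("no network")
    return list(server[key])

client = OfflineFirstClient(fetch)
print(client.read("inbox"))      # → (['msg1'], 'live') while connected
online = False
print(client.read("inbox"))      # → (['msg1'], 'cached') on the underground
```

The BlackBerry push-email experience mentioned above is essentially this pattern done well: the user interacts with the local copy, and synchronisation happens opportunistically in the background.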

Exploring two routes to improved mobile apps

So what is the best answer? Improve native mobile app development or improve mobile web app development? Unsurprisingly, the industry is exploring both routes.

To improve mobile web app development:

Each of these initiatives (and I could have mentioned quite a few more) is significant, and each deserves wide support. Each of them also faces complications – for example, the more AJAX is included in a web application (addressing problem #1 of the four I listed above), the more sluggishly that application tends to run (exacerbating problem #4). And as web applications gain more access to underlying rich phone functionality, complex issues of security and application validation rear their heads again. I doubt if any of these complications are fatal, but they reinforce the argument for the industry also looking, in parallel, at initiatives to improve native mobile app development.

To improve native mobile app development, Symbian has been putting considerable effort over the last few years into improved developer tools, developer documentation, APIs, and so on. The results are encouraging, but the job is far from done.

Quick recipes on Symbian OS

One of the disincentives to doing native application development on Symbian phones is the learning curve that developers need to climb, as they become familiar with various programming idioms. That’s a topic that Kari Pulli (Nokia Research Fellow) discussed with me when he visited Symbian HQ back in Fall 2006. Kari had in mind the needs of people (especially in universities) who were already good C++ developers, but who didn’t have a lot of spare time or inclination to learn brand-new programming techniques.

We brainstormed possible titles for a new Symbian Press book specifically targeted at this important developer segment:

  • “Symbian programming in a hurry”?
  • “Hacking Symbian OS”?

In the months that followed, this idea bounced around inside Symbian, and gathered more and more support. The title changed in the process, to the more ‘respectable’ “Quick Recipes on Symbian OS”. Michael Aubert stepped forward as the lead author – you can read an interview with him on the Symbian Developer Network. Happily, the book went on sale last month. To convey my hopes for the book, I append the foreword I wrote for it:

This book has been designed for people who are in a hurry.

Perhaps you are a developer who has been asked to port some software, initially written for another operating system (such as may run on a desktop computer), to Symbian OS. Or perhaps you have to investigate whether Symbian OS could be suited to an idea from a designer friend of yours. But the trouble is, you don’t have much time, and you have heard that Symbian OS is a sophisticated and rich software system with a considerable learning curve.

If you are like the majority of software engineers, you would like to take some time to investigate this kind of task. You might prefer to attend a training course, or work your way through some of the comprehensive reference material that already exists for Symbian OS. However, I guess that you don’t have the luxury of doing that – because you are facing tight schedule pressures. There isn’t sufficient slack in your schedule to research options as widely as you’d like. Your manager is expecting your report by the end of the week. So you need answers in a hurry.

That’s why Symbian Press commissioned the book you are now holding in your hands. We are assuming that you are a bright, savvy, experienced software developer, who’s already familiar with C++ and with modern software programming methods and idioms. You are willing to work hard and can learn fast. You are ready to take things on trust for a while, provided you can quickly find out how to perform various tasks within Symbian OS. Over time, you would like to learn more about the background and deeper principles behind Symbian OS, but that will have to wait – since at the moment, you’re looking for quick recipes.

Congratulations, you’ve found them!

In the pages ahead, you’ll find recipes covering topics such as Bluetooth, networking, location based services, multimedia, telephony, file handling, personal information management – and much more. In most recipes, we provide working code fragments that you should be able to copy and paste directly into your own programs, and we provide a full set of sample code for download from the book’s website. We have also listed some common gotchas, so you can steer clear of these potential pitfalls.

Since you are in a hurry, I will stop writing now (even though there is lots more I would like to discuss with you), so that you can proceed at full pace into the material in the following pages. Good speed!

14 July 2008

MoMo London: the momentum continues

Filed under: Location, Mobile Monday — David Wood @ 11:11 pm

Mobile Monday is a worldwide phenomenon, with chapters in more than 60 cities. Typically, chapters hold one meeting most months, usually on the first (or second) Monday – though some smaller groups meet less frequently. I hear that the London chapter is among the liveliest.

Tonight, Mobile Monday London held its thirtieth speaker meeting. Checking back through my Series 5mx Agenda, I counted that I’ve attended 18 out of the 30, going back to my first attendance in December 2005. The reasons I keep returning to these events are:

  1. The networking opportunities are first class: all sorts of developers, entrepreneurs, VCs, project managers etc attend, from both large and small companies (including independent contractors)
  2. The presentations (which are deliberately kept short) and the demos that follow (which are kept even shorter) often convey new insight about the cutting edge of the mobile industry
  3. Disruptive yet thoughtful questions are asked by highly knowledgeable audience members who have in many cases already personally been through a couple of business cycles, in different companies, experiencing the reality of technical ideas and business models similar to those being advocated by the presenters.

The quality of the Q&A alone often makes these meetings considerably more interesting and useful than some industry conferences which come with hefty price tags. That’s the benefit of the collectively highly experienced MoMo London community.

The topic for this evening was “Enabling Location in Applications”. The audience was enormous – swelled, first, by members of the W3C who are attending a working meeting in London, and second, by visiting members from overseas MoMo chapters (Germany, Estonia, Sweden, Spain, Boston, Italy, and New York, among others) who were in town to discuss the future international setup of the organisation. This was on top of the very sizeable local audience.

All seven of the presentations / demos included interesting comments. Here are a few points that caught my attention:

  • Skyhook Wireless (who were the sponsors for this particular event) have a database of the locations of over 50 million wireless access points, including 16M+ in Europe alone. This database grows as the result of the records made by 500 drivers worldwide, including 200 in Europe (who have already driven some 750,000 km)
  • A (non-mobile phone) application of the Skyhook technology is explained by David Pogue in this video: the Eye-Fi system of automatically geo-tagging photos taken by your digital camera, without involving any GPS receiver
  • Another partner of Skyhook is Trapster, who have an app for mobile phones that allows drivers to provide real-time alerts to one another about speed traps in the area
  • Google Gears provides a Geolocation API, which in turn could provide much of the basis of a similar API in HTML5; that’s a reminder that (as stressed by the Google speaker, Charles Wiles) “Google Gears is much more than offline”
  • The demos and screenshots tended to show either the Nokia N95 or the iPhone; Andrew Scott of Rummble cheekily remarked that “It will take a long time before everyone has an iPhone – maybe two years”
  • Andrew touched on another sensitive point with a follow-up remark: “Mobile Network Operators are probably never going to waken up and realise that they shouldn’t be charging for location information”
  • Both Andrew and Justin Davis of NinetyTen emphasised that mobile search and recommendations needed to be filtered, to give more prominence to entries that had been favourably reviewed by trusted contacts of the user
  • Uniquely of all the speakers, Mark White of Locatrix (who said he had flown all the way from Brisbane Australia to speak at this event) spent more time reviewing business model issues. “‘Can do’ doesn’t mean ‘can make money’“, he emphasised
  • During the Q&A, the panel suggested it was only a matter of time before a free access API would be available, allowing applications to query central databases to find out the location of a cell with a given ID; any new startups who are working on providing this service would therefore be well advised to stop at once.

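For what it’s worth, the core idea behind Skyhook-style Wi-Fi positioning can be sketched in a few lines: estimate the device’s position as a centroid of the access points it can hear, weighted by signal strength. The MAC addresses, coordinates, and simple linear-power weighting below are all illustrative assumptions on my part; production systems are considerably more sophisticated:

```python
# Hypothetical access-point database: MAC address -> (lat, lon).
AP_DB = {
    "00:11:22:33:44:55": (51.5074, -0.1278),
    "66:77:88:99:aa:bb": (51.5080, -0.1290),
    "cc:dd:ee:ff:00:11": (51.5068, -0.1260),
}

def estimate_position(scan, ap_db):
    """Weighted centroid of the known access points in a Wi-Fi scan.

    `scan` maps MAC address -> received signal strength in dBm (closer to
    zero means stronger). Weight = linear power = 10 ** (rssi / 10), so
    nearby access points dominate the estimate.
    """
    total_w = lat = lon = 0.0
    for mac, rssi in scan.items():
        if mac not in ap_db:
            continue                     # unknown access point: ignore
        w = 10 ** (rssi / 10.0)
        ap_lat, ap_lon = ap_db[mac]
        lat += w * ap_lat
        lon += w * ap_lon
        total_w += w
    if total_w == 0.0:
        return None                      # nothing recognised in the scan
    return lat / total_w, lon / total_w

scan = {"00:11:22:33:44:55": -40, "66:77:88:99:aa:bb": -70}
print(estimate_position(scan, AP_DB))
# ≈ (51.5074, -0.1278): dominated by the much stronger first access point
```

This also makes clear why Skyhook’s driving programme matters: the estimate is only as good as the coverage and freshness of the access-point database.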
Because the room was so full and was becoming pretty warm, the Q&A was stopped before it got into full swing, which was a bit of a pity. But lots of lively conversation continued in the reception area afterwards, over drinks.

To my mind, the energy and upbeat attitude of the meeting is testimony to:

  • The overall health of the mobile industry in and around London
  • The ever greater role of location elements in mobile applications.

I’ll end by echoing the closing words of Mark White: “This is not the LBS industry of 2000. It’s better”. Users have learned about the general benefits of GPS and positioning from car-based satnav systems, and are now increasingly looking for similar benefits from their mobile phones.

13 July 2008

A picture is worth a thousand words: Enterprise Agile

Filed under: Agile, communications, waterfall — David Wood @ 8:44 pm

Communication via words often isn’t enough. You generally need pictures too.

For example, in seeking to explain to people about the merits of Agile over more traditional, “plan-based” software development methods, I’ve often found excerpts from the following sequence of pictures to be useful:


[A sequence of diagrams followed here, illustrating the merits of Agile over traditional plan-based software development.]

The last two pictures in this series are an attempt to show how Agile can be applied in multiple layers in the more complex environment of large-scale (“enterprise-scale”) software projects. Of course, it’s particularly challenging to gain the benefits of Agile in these larger environments.

I drew these diagrams (almost exactly 12 months ago) after having read fairly widely in the Agile literature. So these diagrams draw upon the insights of many Agile advocates. Someone who influenced me more than most was Dean Leffingwell, author of the easy-to-read yet full-of-substance book “Scaling Software Agility: Best practices for large enterprises” that I’ve already mentioned in this blog. I’d also like to highlight the “How to be Agile without being Extreme” course developed and delivered by Construx as being particularly helpful for Symbian.

Dean has carried out occasional training and consulting engagements for Symbian over the last twelve months. One outcome of this continuing dialog is an impressive new picture, which tackles many issues that are omitted by simpler pictures about Agile. The picture is now available on Dean’s blog:

If the picture intrigues you, I suggest you pay close attention to the next few posts that Dean makes, where he promises to provide annotations to the different elements. This could be the picture that generates many thousands of deeply insightful words…

Footnote: I’ve long held that Open Source is no panacea for complex software projects. If you aren’t world class in software development skills such as compatibility management, system architecture review, modular design, overnight builds, peer reviews, and systematic and extensive regression testing, then Open Source won’t magically allow you to compete with companies that do have these skillsets. One more item to add to this list of necessary skills is enterprise-scale agile. (Did I call it “one more item”? Scratch that – there are many skills involved, under this one label.)
