dw2

11 July 2008

Into the long, deep, deep cold

Filed under: cryonics, Methuselah, UKTA — David Wood @ 9:11 pm

My interest in smartphones stems from my frequent observation and profound conviction that these devices can make their human users smarter: more knowledgeable, more connected, and more in control. It’s an example of the careful use of technology to make its users, in some sense, better humans. Technology – including the wheel, the plough, the abacus, the telescope, the watch, the book, the steam engine, the Internet, and (of course) much more besides – has been making humans “better” (stronger, fitter, and cleverer) since the dawn of history. What’s different in our age is that the rate of potential improvement has accelerated so dramatically.

The website “Better Humans” often has interesting articles on this theme of accelerating real-world uses of technology to enhance human ability and experience. This morning my attention was caught by some new articles there with an unusual approach to the touchy subject of cryonics. For example, the article “Cryonics: Using low temperatures to care for the critically ill” starts by quoting the cryobiologist Brian Wowk:

“Ethically, what is the correct thing to do when medicine encounters a difficult problem? Stabilize the patient until a solution can be found? Or throw people away like garbage? Centuries from now, historians may marvel at the shortsightedness and rationalizations used to sanction the unnecessary death of millions.”

The article (originally from a site with a frankly less-than-inspiring name, Depressed Metabolism) continues as follows:

In contemporary medicine terminally ill patients can be declared legally dead using two different criteria: whole brain death or cardiorespiratory arrest. Although many people would agree that a human being without any functional brain activity, or even without higher brain function, has ceased to exist as a person, not many people realize that most patients who are currently declared legally dead by cardiorespiratory criteria have not yet died as a person. Or to use conventional biomedical language, although the organism has ceased to exist as a functional, integrated whole, the neuroanatomy of the person is still intact when a patient is declared legally dead using cardiorespiratory criteria.

It might seem odd that contemporary medicine allows deliberate destruction of the properties that make us uniquely human (our capacity for consciousness) unless one considers the significant challenge of keeping a brain alive in a body that has ceased to function as an integrated whole. But what if we could put the brain “on pause” until a time when medical science has become advanced enough to treat the rest of the body, reverse aging, and restore the patient to health?

Putting the brain on pause is not as far-fetched as it seems. The brain of a patient undergoing general anesthesia has ceased being conscious. But because we know that the brain that represents the person is still there in a viable body, we do not think of such a person as “temporarily dead.”

One step further than general anesthesia is hypothermic circulatory arrest. Some medical procedures, such as complicated neurosurgical interventions, require not only cessation of consciousness but also complete cessation of blood flow to the brain. In these cases the temperature of the patient is lowered to such a degree (≈16 degrees Celsius) that the brain can tolerate a period without any circulation at all. Considering the fact that parts of the human brain can become irreversibly injured after no more than five minutes without oxygen, the ability of the brain to survive for at least an hour at these temperatures without any oxygen is quite remarkable.

And so it continues. See also, by the same author, “Why is cryonics so unpopular?”

Is it really conceivable that the human body (or perhaps just the human head) could be placed into deep, deep cold, potentially for decades, and then subsequently revived and repaired, using the substantially improved technology of the future? Never mind conceivable, is it desirable?

I’m reminded of a book that made a big impression on me, several years ago – the provocatively titled “The First Immortal” by James Halperin. It’s written as fiction, but it’s intended to describe a plausible future scenario. I understand that the author did a great deal of research into the technology of cryonics, in order to make the account scientifically credible.

As a work of fiction, it’s no great shakes. The characterisation, the plotting, and the language are often laboured – sometimes even embarrassing. But the central themes of the book are tremendously well done. As a reader, you get to think lots of new thoughts, and appreciate the jaw-dropping ups and downs that cryonics might make possible. (By the way, some of the ideas and episodes in the book are very vivid indeed, and remain clearly in my mind now, quite a few years after I read the book.) As the various characters in the book change their attitudes towards the possibility and desirability of cryonic preservation and restoration, it’s hard not to find your own attitude changing too.

Footnote: Aubrey de Grey, one of the speakers at tomorrow’s UKTA meeting (“How to live longer and longer yet healthier and healthier: realistic grounds for hope?”), has put on public record the fact that he has signed up for cryopreservation. See here for some characteristically no-nonsense statements from Aubrey himself on this topic.

10 July 2008

Inspiring the rising stars in universities

Filed under: collaboration, Essay contest, universities — David Wood @ 11:28 pm

One of the goals I set myself for 2008 involves influencing university research departments around the world to become more active in the areas of smartphones and Symbian OS.

With that goal in mind, I accepted an invitation to the “Wireless 2.0” conference organised by Silicon South West here in Bristol, where I’ve travelled for the event. What persuaded me to attend was the mix of both industry and university attendees.

The event hosted a “Rising Star Awards Dinner” this evening, where six university students studying electrical engineering (or a related subject) received special awards – a plaque and a handy amount of spending money. There was one winner from each of the six universities in the area covered by Silicon South West: Bath, Bournemouth, Bristol, Exeter, Plymouth, and West of England. It was heart-warming to hear the personal testimonies of the winners (and their university tutors).

But links between commercial research departments and university research departments aren’t always so rosy. Universities and industry have many overlapping interests, but also some conflicting cultures. I see Symbian as having had mixed success, historically, in relations with universities:

  • On the clearly positive side, we’ve run good graduate recruitment and induction programs, every year since 1993 (that was in the Psion days, pre-Symbian); these have gone from strength to strength.
  • On the increasingly positive side, 58 universities have enrolled in the Symbian Academy program, in which Symbian supports university lecturers to deliver academic courses on Symbian OS software development.
  • On the “could do better” side, there are still only a small number of truly productive ongoing research collaborations between Symbian and individual universities, in which findings from university research projects regularly feed into Symbian’s roadmap (and vice versa).

It turns out that it’s not just Symbian that feels somewhat uncomfortable about the limited benefits realised from attempted collaboration with universities. Other commercial companies have noted similar concerns. And this has even become a field of academic study in its own right, known as (amongst other names) UIC, meaning University-Industry Collaboration. My friend Joel West of San Jose State University recently attended a two-day conference on UIC at University of California, Irvine, and wrote up his observations. There’s lots to ponder there. For example, Joel described three pieces of advice on successful UIC negotiations, as given in a presentation by UIDP executive director Anthony Boccanfuso:

  1. A successful UI collaboration should support the mission of each partner. Any effort in conflict with the mission of either partner will fail. (Joel’s translation: all deals must be win-win)
  2. Institutional practices and national resources should focus on fostering appropriate long term partnerships between universities and industry. (It’s more than just the money)
  3. Universities and industry should focus on the benefits to each party that will result from collaborations by streamlining negotiations to ensure timely conduct of the research and the development of the research findings. (There is a finite window for commercialization)

With Symbian research projects, one additional hiccup has been the difficulty in allowing universities access to Symbian OS source code. Time and again, we’ve been discussing an attractive-sounding joint research project with a university, only to realise that the project would need more visibility of Symbian source code than the existing licensing rules allowed. And that’s constrained the kinds of projects we can consider. (This realisation was just one of many that led to an increasing desire inside the Symbian ecosystem to find ways to liberalise access to our source code – and thus helped to set the scene for the mega-decision to embrace open source principles.)

However, not all research requires close access to source code. With that thought in mind, Symbian Research decided a few weeks back to launch the Symbian Student Essay Contest. This involves students writing an essay of no more than eight pages on the general topic “The next wave of smartphone innovation – issues and opportunities with smartphone technologies”. Up to ten students will receive a prize of UKP 1000. (See here for the contest rules.)

This prize contest shares some principles with the Silicon South West “Rising Star Awards”:

  • We’re seeking to encourage and reward individual students who show particular insight into this ever-more important set of ideas.
  • We’re also seeking to inspire individual universities to give a higher priority to this domain of study.

High quality essays from a university will indicate to Symbian that there is good smartphone expertise in that university. That’s something we’re particularly interested to find out, since Symbian Research needs to decide which universities worldwide should receive higher priority attention for future collaborative research projects. That’s a tough decision to make.

Footnote: At tonight’s dinner, Prof Joe McGeehan of the University of Bristol mentioned that wise heads had been advising him, ever since 1973, that “there’s no future in research in wireless communications”. Thankfully, he persistently ignored these skeptics, and the field has indeed grown and grown. There’s now an impressive list of local south-west companies that have world-beating wireless technologies. I’m looking forward to hearing, tomorrow, what they have to say. The future of smartphones is, of course, a big part of “wireless 2.0”, but there’s lots more going on at the same time.

8 July 2008

Taming the security risks of going open source

Filed under: descriptors, Open Source, security — David Wood @ 5:05 pm

The Wireless Informatics Forum asks (here and here),

Will an open source model expose Symbian’s security flaws?

I wonder what security implications are being presented to Symbian? In the computing world there’s plenty of debate about the impact of opening up previously proprietary code. The primary concern being that an open source model exposes code not only to benevolent practitioners but also to malevolent attackers…

With much of the mobile industry steering towards m-commerce initiatives, potential security risks must be considered…

How much of the legacy Symbian code will be scrapped and built from scratch according to open source best practice?

First, I agree with the cardinal importance of security, and share the interest in providing rock solid enablers for m-commerce initiatives.

But I’m reasonably optimistic that the Symbian codebase is broadly in a good state, and won’t need significant re-writes. That’s for three reasons:

  1. Security is something that gets emphasised all the time to Symbian OS developers. The whole descriptor system for handling text buffers was motivated, in part, by a desire to avoid buffer overrun errors – see my May 2006 article “The keystone of security”, and the toy sketch after this list.
  2. Also, every now and then, Symbian engineers have carried out intense projects to review the codebase, searching high and low for lurking defects.
  3. Finally, Symbian OS code has been available for people from many companies to look at for many years – these are people with CustKit or DevKit licenses. So we’ve already had at least some of the benefits of an open source mode of operation.
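
For readers who haven’t come across descriptors, here’s a toy sketch of the principle – in Python rather than C++, since Python appears elsewhere on this blog. To be clear, this is not Symbian’s actual descriptor API (C++ classes such as TBuf and TDes); it merely illustrates the core idea of a buffer that always knows its own maximum length, so that an overlong write fails loudly instead of silently corrupting adjacent memory:

class BoundedBuffer:
    # A buffer that carries its own maximum length, in the spirit of
    # Symbian descriptors. Writes that would exceed the bound raise an
    # error instead of overrunning whatever sits next to the buffer.
    def __init__(self, max_length):
        self.max_length = max_length
        self.contents = ""

    def append(self, text):
        if len(self.contents) + len(text) > self.max_length:
            raise OverflowError("append would exceed buffer bounds")
        self.contents += text

buf = BoundedBuffer(8)
buf.append("hello")   # fine: 5 characters fit within the bound of 8
buf.append("world!")  # raises OverflowError: 11 characters would not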

On the other hand, there’s going to be an awful lot of code in the overall Symbian Foundation Platform – maybe 30+ million LOC. And that code comes from many different sources, and was written under different cultures and with different processes. For that reason, we’ve said it could be up to two years before the entire codebase is released as Open Source. (As my colleague John Forsyth explains, in the section entitled “Why not open source on day 1?”, there are other reasons for wanting to take time over this whole process.) Of course we’d like to go faster, but we don’t at this stage want to over-promise.

So to answer the question, I expect the lion’s share of the Symbian codebase to stay in place during the migration, no doubt with some tweaks made here and there. Time will tell how many of the peripheral pieces of code need to be re-written.

7 July 2008

Symbian Signed and openness

Filed under: malware, openness, Symbian Foundation, Symbian Signed — David Wood @ 8:13 pm

The team at Telco2.0 have run some good conferences, and there’s much to applaud in their Manifesto. Recently, the Telco2.0 blog has run a couple of hit-and-miss pieces of analysis on the Symbian Foundation. There’s a lot of speculation in their pieces, and alas, their imagination has run a bit wild. The second of these pieces, in particular, is more “miss” than “hit”. Entitled “Symbian goes open – or does it?”, the piece goes most clearly off the rails when it starts speculating about Symbian Signed:

…the Symbian signing process doesn’t just apply to changes to Symbian itself — it applies to all applications developed for use on Symbian, at least ones that want to use a list of capabilities that can be summed up as “everything interesting or useful”. I can’t even sign code for my own personal use if it requires, say, SMS functionality. And this also affects work in other governance regimes. So if I write a Python program, which knows no such thing as code-signing and is entirely free, I can’t run it on an S60 device without submitting to Symbian’s scrutiny and gatekeeping. And you thought Microsoft was an evil operating system monopolist…

This makes the Symbian signing process sound awful. But wait a minute. Isn’t there a popular book, “Mobile Python – rapid prototyping of applications on the mobile platform”, written by Jurgen Scheible and Ville Tuulos, that highlights on the contrary just how simple it is to get going with sophisticated Python applications on S60 devices? Yep. And what do we find as early as page 45 of the book? A two-line program that sends an SMS message:

import messaging
messaging.sms_send("+14874323981", u"Greetings from PyS60")

I tried it. It took less than an hour to download and install the SIS files for the latest version of PyS60 from Sourceforge, and then to type in and run this program. (Of course, you change the phone number before testing the app.) Nowhere in the process is there any submitting of the newly written program “to Symbian’s scrutiny and gatekeeping”. The fanciful claims of the Telco2.0 piece are refuted in just two lines of Python.
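
For anyone tempted to experiment further, here’s a slightly longer sketch in the same spirit. This is my own illustration rather than anything from the book, and it assumes only the standard appuifw and messaging modules that ship with PyS60:

import appuifw
import messaging

# Prompt for a destination number and a message text, using
# PyS60's built-in dialogs; query() returns None if cancelled
number = appuifw.query(u"Destination number?", "text")
text = appuifw.query(u"Message?", "text")

if number and text:
    # The same call as in the two-line example above
    messaging.sms_send(str(number), text)
    appuifw.note(u"Message sent", "conf")

Once again, nothing in this process involves submitting the script to anyone’s scrutiny or gatekeeping.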

So what’s really going on here? How is it that normally intelligent analysts and developers often commit schoolboy howlers when they start writing about Symbian Signed? (Unfortunately, the Telco2.0 writers are by no means unique in getting the Symbian Signed facts wrong.) And why, when people encounter glitches or frustrations in the implementation of Symbian Signed, are they often too ready to criticise the whole system, rather than being willing to ask what small thing they might do differently, to get things working again?

I suspect three broader factors are at work:

1. An over-casual approach to the threat of mobile malware

Symbian Signed is part of an overall system that significantly reduces the threat of mobile viruses and the like. Some developers or analysts sometimes give the impression that they think they stand immune from malware – that it’s only a problem that impacts lesser mortals, and that the whole anti-malware industry is a “cure that’s worse than the disease”. Occasionally I sympathise with this view, when I’m waiting for my desktop PC to become responsive, with its CPU cycles seemingly being consumed by excessive scanning and checking for malware. But then I remember the horrors that ensue if the defences are breached – and I remember that the disease is actually worse than the cure.

If we in the mobile industry take our eye off the security ball and allow malware to take root in mobile phones in ways similar to the sad circumstances of desktop PCs, it could produce a meltdown scenario in which end users decide in droves that the extra intelligence of smart mobile phones brings much more trouble than it’s worth. And smartphones would remain of only niche interest. For these reasons, at least the basic principles of Symbian Signed surely deserve support.

2. A distrust of the motivation of network operators or phone manufacturers

The second factor at work is a distrust of control points in the allocation of approvals for applications to have specific capabilities. People reason something like this:

  • OK, maybe some kind of testing or approvals process does make sense
  • But I don’t trust Entity-X to do the approving – they have mixed motivations.

Entity-X could be a network operator that fears losing (for example) its own SMS revenues if alternative IM applications were widely installed on its phones. Or Entity-X could be a device manufacturer, like Apple, that might decide to withhold approval from third-party iPhone applications that provide music download stores to compete with iTunes.

Yes, there’s a potential risk here. But there are two possible approaches to this risk:

  1. Decide that there’s no possible solution, and therefore the power of a system like Symbian Signed should be criticised and diminished
  2. Work to support more of the decision making happening in a fully transparent and independent way, outside of the influence of mixed motivations.

The second approach is what’s happening with the Symbian Foundation. The intent with the Symbian Foundation is to push into the public sphere, not only more and more of the source code of the Symbian Platform, but also as much of the decision-making as possible – including the rules and processes for approval for Symbian Signing.

Incidentally, the likely real-world alternative to a single, unified scheme for reviewing and signing applications is that there will be lots of separately run, conflicting, fragmented signing schemes. That would be a BAD outcome.

3. A belief that openness trumps security

This brings us to the final factor. I suspect that people reason as follows:

  • OK, I see the arguments for security, and (perhaps) for quality assurance of applications
  • But Symbian Signed puts an obstacle in the way of openness, and that’s a worse outcome
  • Openness is the paramount virtue, and needs to win.

As a great fan of openness, I find myself tempted by this argument from time to time. But it’s a misleading argument. Just as freedom in society depends on a certain stability in the environment (including a police force and environmental inspectors), openness depends on a basic stability and reliability in the network, in the underlying software, and in the way the ecosystem operates. Take away these environmental stability factors, and you’ll lose the ability to meaningfully create innovative new software.

The intention behind Symbian Signed is to help maintain the confidence of the industry in the potential of smartphones – confidence that smartphones will deliver increasing benefits without requiring debilitating amounts of support or maintenance.

It’s true that the rules of Symbian Signed can take a bit of learning. But hey, lots of other vital pieces of social or technical infrastructure likewise take time to appreciate. To my mind, the effort is well worth it: I see Symbian Signed as part of the bedrock of meaningful openness, rather than some kind of obstacle.

6 July 2008

Clear thinking about open source

Filed under: GPL, Open Source — David Wood @ 9:12 am

“What’s the best book to read for an introduction to Open Source?” That’s a question I’ve been asked several times in the last fortnight – as many of my colleagues in and around Symbian have realised that Open Source is a more complex and more intriguing subject than they first thought. (Of course, the announcements of 24 June have had something to do with this increased interest level.)

I’m still not sure how to answer that question. Over the years, I’ve read lots of books about Open Source – but with the passage of time, I’ve forgotten what I’ve learnt from each book.

Two books that stick out in my mind, through the veil of intervening years, as particularly enjoyable are:

Of these, the latter stands out as an especially easy and engrossing read. (It also happens to be the first serious book read independently by all three members of my immediate family – my wife, my son, and myself.) But when I pulled these two books from my bookshelf the other day and checked their inside covers, where I usually record the date when I purchase a book, I realised I had read them both as long ago as 2001. And Open Source has moved on a lot since that time. So while both these books are great sources of historical insight, readers will need to turn elsewhere for more up-to-date info.

A more recent book I remember making a big impact on my thinking at the time (2005, according to the inside cover) was:

Flicking through that book again just now, I see so many interesting snippets in it that I’m tempted to try to squeeze it back into my already hopelessly overfull reading in-box, for a second-time-round read. But even a 2005 book is dated.

That brings me to the book I’ve just finished reading: Heather Meeker’s “The Open Source Alternative: Understanding Risks and Leveraging Opportunities”.

Heather Meeker is Co-Managing Shareholder of the East Palo Alto office of the law firm Greenberg Traurig. I first saw Heather speak at the Olswang “Open Source Summit” in London in November 2007. I was impressed at the time by the clarity of her grasp of the legal issues surrounding Open Source. Heather’s book has the same fine qualities:

  • It’s primarily exposition (education) rather than advocacy (evangelism)
  • I had many “of course!” and “aha!” moments while reading it
  • There are some particularly clear diagrams
  • Crucially, the language is easy to read
  • Also crucially, the book is comfortable both with legal matters and with technical matters (eg aspects of C and C++).

So I would say that this is the book to read for a good account of the legal aspects of open source.

One part that really shines comes about three quarters of the way through the book. It’s by far the best analysis I’ve read of “The border dispute of GPL2”. The question in the minds of many commercially-driven companies, of course, is whether they risk having to publish the source code of any of their own software that happens to interact with code (such as the Linux kernel) released under GPL. The book makes it strikingly clear that the commercial risks aren’t just because the original drafters of the GPL are philosophically opposed to closed source software. They’re also because of some deep-rooted ambiguities inside the license itself. To quote from page 188:

This is why attorneys who read the GPL quickly come to the conclusion that this phrase – upon which entire companies and development projects depend – is irretrievably vague.

And again from the footnote to page 189:

To provide context for nonlawyer readers, drafting unique (in the document) and unambiguous definitions is considered a baseline lawyering skill in transactional practice. Doing otherwise is generally a sign that the drafter is not a lawyer or, more precisely, does not have baseline drafting skills. If this seems harsh, consider that many programming languages require one, and only one, definition of a user-defined variable. (Some languages allow multiple definitions, or “overloading”, but using this feature requires intimate knowledge of the rules used by the compiler or interpreter to resolve them.) Failing to understand these rules properly creates bugs. So, in a sense, multiple or conflicting definitions [such as occur in the GPL] in a legal document, without express rules to resolve them, is a “bug” in drafting.
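
To make the footnote’s programming analogy concrete, here’s a tiny illustration of my own (in Python, which happily tolerates conflicting definitions):

# Python accepts two conflicting definitions of the same name; the
# later one silently wins, and nothing warns the reader which meaning
# was intended - the drafting "bug" that the footnote describes.
def royalty(amount):
    return amount * 0.05

def royalty(amount):
    return amount * 0.10

print royalty(100)  # prints 10.0 - the second definition has won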

I can well imagine senior managers in mobile phone companies getting more and more worried as they read this book, finding more and more reasons, chapter by chapter (not just the chapter on the Border Dispute), to fear eventual legal cases against them, if they have code of their own in a phone that interacts with a GPL kernel.

Perhaps inevitably, the book has less to say about the EPL – which is the license to be used by the Symbian Foundation. After all, GPL is (the book suggests) the “most widely used license on the planet”. But the EPL has many fewer ambiguities, and is significantly more business-friendly.

Does v3 of GPL change matters? Not really. First, as the final chapters of the book make clear, many of the deep-rooted ambiguities remain, despite the massive (and impressive) work done by the drafting team for v3. Second, Linux is likely to remain on v2 GPL for the foreseeable future.

3 July 2008

Nanoscience and the mobile device: hopes and fears

Filed under: Morph, nanotechnology, Nokia, risks — David Wood @ 10:56 am

Nokia’s concept video of a future morphing mobile phone, released back in February, has apparently already been viewed more than two million times on YouTube. It’s a clever piece of work, simultaneously showing an appealing vision of future mobile devices and giving hints about how the underlying technology could work. No wonder it’s been popular.

So what are the next steps? I see that the office of Nokia’s CTO has now released a five-page white paper that gives more of the background to the technologies involved, which are collectively known as nanotechnology. It’s available on Bob Iannucci’s blog, and it’s a fine read. Here’s a short extract:

After a blustery decade of hopes and fears (the fountain of youth or a tool for terrorists?), nanotechnology has hit its stride. More than 600 companies claim to use nanotechnologies in products currently on the market. A few interesting examples:

  • Stain-repellant textiles. A finely structured surface of embedded “nanowhiskers” keeps liquids from soaking into clothing—in the same way that some plant leaves keep themselves clean.
  • UV-absorbing sunscreen. Using nanoparticulate zinc oxide or titanium dioxide, these products spread easily and are fully transparent — while absorbing ultraviolet rays to prevent sunburn.
  • Purifying water filters. Aluminum oxide nanofibers with unusual bioadhesive properties are formulated into filters that attract and retain electronegative particles such as bacteria and viruses.
  • Windshield defoggers. A transparent lacquer of carbon nanotubes connects to the vehicle’s electrical source to evenly warm up the entire surface of the glass.

Even more interesting, to my mind, than the explanation of what’s already been accomplished (and what’s likely to be just around the corner), is a set of questions listed in the white paper. (In my view, the quality of someone’s intelligence is often shown more in the quality of the questions they ask than in the quality of the answers they give to questions raised by other people.) Here’s what the white paper says on this score:

As Nokia looks toward the mobile device of 2015 and beyond, our research teams, our partner academic institutions, and other industry innovators are finding answers to the following questions:

  1. What will be the form factors, functionalities, and interaction paradigms preferred by users in the future?
  2. How can the device sense the user’s behavior, physiological state, physical context, and local environment?
  3. How can we integrate energy-efficient sensing, computing, actuation, and communication solutions?
  4. How can we create a library of reliable and durable surface materials that enable a multitude of functions?
  5. How can we develop efficient power solutions that are also lightweight and wearable?
  6. How can we manufacture functional electronics and optics that are transparent and compliant?
  7. How can we move the functionality and intelligence of the device closer to the physical user interface?
  8. As we pursue these questions, how can we assess — and mitigate — possible risks, so that we introduce new technologies in a globally responsible manner?

That’s lots to think about! In response to the final question, one site that has many promising answers is the Center for Responsible Nanotechnology, founded by Mike Treder and Chris Phoenix. As Mike explains in his recent article “Nano Catastrophes”, he’s coming to Oxford later this month to attend a Conference on Global Catastrophic Risks, where he’ll be addressing these issues. I’ll be popping down that weekend to join the conference, and I look forward to reporting back what I find.

This is a topic that’s likely to run and run. Both the potential upsides and the potential downsides of nanotechnology are enormous. It’s well worth lots more serious research.

1 July 2008

Win-win: how the Symbian Foundation helps Google to win

Filed under: collaboration, Google, RIM — David Wood @ 9:29 am

Olga Kharif of Business Week has found an interesting new angle on the Symbian Foundation announcement, in her article “How Nokia’s Symbian Move Helps Google”:

Nokia rocked the wireless industry June 24 with news it would purchase the portion of Symbian, a maker of mobile-phone software, that it didn’t already own—and then give away the software for nothing. …

But Nokia’s move may play right into Google’s hands, by helping to nurture a blossoming of the mobile Web and spur demand for all manner of cell-phone applications—and most important, the ads sold by Google. “There’s nothing to say that this isn’t what Google’s plan was all along,” says Kevin Burden, research director, mobile devices at consultancy ABI Research. “They might have wanted a more open device environment anyway. This might have been Google’s end game.”

My comment on this analysis is: why does it need to be a bad thing for Nokia and Symbian, if the outcome has benefits for Google? If Google wins (by being able to sell more ads on mobile phones than before), does it mean that Nokia and Symbian lose? I think not. I prefer to see this as being mutually beneficial.

The truth is, many of the companies who provide really attractive applications and services for Symbian-powered phones are both complementors and competitors of Symbian:

  • RIM provide the first-class BlackBerry email service that runs on my Symbian-powered Nokia E61i and which I use virtually every hour I’m awake; they also create devices that run their own operating system, and which therefore compete with Symbian devices.
  • Google, as well as working on Android, provide several of the other mobile applications that I use heavily on my E61i, including native Google Maps and native Google Search.

If companies like RIM and Google are able, as a result of the Symbian Foundation and its unification of the currently separate Symbian UIs (not to mention the easier accessibility of the source code), to develop new and improved applications for Symbian devices more quickly than before, then the attractiveness of these devices will increase. RIM and Google (and there are many others too!) will benefit from the increased services revenues that these mobile apps enable. Symbian and the various handset manufacturers who use the Symbian platform will benefit from increased sales and increased usage of the handsets that contain these attractive new applications and services. Win-win.

I see two more ways in which progress by any one of the open mobile operating systems (whether Android or the Symbian Platform, etc) boosts the others:

  1. The increasingly evident utility of the smartphones powered by any one of these operating systems helps spread word of mouth among end users that, hey, smartphones are pretty useful things to buy. So next time people consider buying a new phone, they’ll be more likely to seek out one that, in addition to good voice and text, also supplies great mobile web access, push email, and so on. The share of smartphones out of all mobile phones will rise.
  2. Progress of these various open mobile operating systems will help the whole industry to see the value of standard APIs, free exchange of source code, open gardens, and so on. The role of open operating systems will increase and that of closed operating systems will diminish.

In both cases, a rising tide will lift all boats. Or in the words of Symbian’s motto, it’s better to seek collaboration than to seek competition.

29 June 2008

The enhancement of the dream

Filed under: collaboration, Psion, Symbian Story — David Wood @ 12:49 pm

Did this week’s announcements about the Symbian Foundation herald “The end of the dream”, as Michael Mace suggests?

No matter how it works out in the long run, the purchase of Symbian by Nokia marks the end of a dream — the creation of a new independent OS company to be the mobile equivalent of Microsoft. Put a few beers into former Symbian employees and they’ll get a little wistful about it, but the company they talk about most often is Psion, the PDA company that spawned Symbian. …

What makes the Psion story different is that many of the Psion veterans had to leave the UK, or join non-UK companies, in order to become successful. Some are in other parts of Europe, some are in the US, and some are in London but working for foreign companies. This is a source of intense frustration to the Psion folks I’ve talked with. They feel like not only their company failed, but their country failed to take advantage of the expertise they had built.

I understand the thrust of this argument, but I take a different point of view. Rather than seeing this week’s announcement as “the end of the dream”, I see it as enabling “the enhancement of the dream”.

During the second half of 2007, Symbian’s executive team led a company-wide exercise to find a set of evocative, compelling words that captured what we called “The Symbian Story”. Some of the words we came up with were new, but the sentiment they conveyed was widely recognised as deriving from the deep historic roots of the company. Here are some extracts:

  • The world is seeing a revolution in smarter mobile devices
  • Convergence is real, happening now and coming to everyone, everywhere
  • Our mission is to be the OS chosen for the converged mobile world
  • No one else can seize it like we can
  • Our talented people, building highly complex software, have established a smartphone OS that leads the industry
  • We welcome rapid change as the way to stay ahead
  • We’ll work together to fulfill our potential to be the most widely used software on the planet, at the heart of an inspiring, exciting and rewarding success story.

This story – which we might also call a dream, or a vision – has by no means ended with this week’s announcements. On the contrary, these steps should accelerate the outcome that’s been in our minds for so long. There will be deeper collaboration and swifter innovation – making it even more likely that the Symbian platform will become in due course the most widely used on the planet.

But what about the dream that Symbian (or before it, Psion) could be “the next Microsoft”?

In terms of software influence, and setting de facto standards, this dream still holds. In terms of boosting the productivity and enjoyment of countless people around the world, through the careful deployment of smart software which we write, the dream (again) still holds. In terms of the founders of the company joining the ranks of the very richest people in the world, well, that’s a different story, but that fantasy was never anything like so high in our motivational hierarchy.

What about the demise of “British control” over the software? Does the acquisition of UK-based Symbian by Finland-based Nokia indicate yet another “oh what might have been” for the United Kingdom plc?

Once again, I prefer to take a different viewpoint. In truth, the software team long ago ceased to be dominated by home-bred British talent. The present Symbian Leadership Team has one person from Holland and one from Norway. 50% of the Research department that I myself head were born overseas (in Russia, Greece, and Canada). And during the Q&A with Symbian’s Nigel Clifford and Nokia’s Kai Oistamo that took place in London at all-hands meetings of Symbian employees on the 24th of June, questions were raised in almost every accent under the sun. So rather than Symbian being a British-run company, it’s better to see us as a global company that happens to be headquartered in London, and which benefits mightily from talent born all over the world.

Not only do we benefit from employees born worldwide, we also benefit (arguably even more) from our interactions with customers and partners the world over. As Symbian morphs over the next 6-9 months into a new constellation of organisations (including a part that works inside Nokia, and a part that has an independent existence as the Symbian Foundation), these collaborative trends should intensify. That’s surely a matter for celebration, not for remorse.

The five laws of fragmentation

Filed under: fragmentation, leadership, Open Source, Symbian Foundation — David Wood @ 9:42 am

As discussion of the potential for the Symbian Foundation gradually heats up, the topic of potential fragmentation of codelines keeps being raised. To try to advance that discussion, I offer five laws of fragmentation:

1. Fragmentation can have very bad consequences

Fragmentation means there’s more than one active version of a software system, and that add-on or plug-in software which works fine on one of these versions fails to work well on other versions. The bad consequences are the extra delays this causes to development projects.

Symbian saw this with the divergence between our v7.0 and v7.0s releases. (The little ‘s’ was sometimes said to stand for “special”, sometimes for “strategic”, and sometimes for “Series 60”.) UIQ phones at the time were based on our v7.0 release. However, the earliest Series 60 devices (such as the Nokia 7650 “Calypso”) had involved considerable custom modifications to the lower levels of the previous Symbian OS release, v6.1, and these turned out to be incompatible with our v7.0. As a pragmatic measure, v7.0s was created, that had all of the new technology features introduced for v7.0, but which kept application-level compatibility with v6.1.

On the one hand, v7.0s was a stunning success: it powered the Nokia 6600 “Calimero” which was by far the largest selling Symbian OS phone to that time. On the other hand, the incompatibilities between v7.0 and v7.0s caused no end of difficulties to developers of add-on or plug-in software for the phones based on these two versions:

  • The incompatibilities weren’t just at the level of UI – UIQ vs. Series 60
  • There were also incompatibilities at many lower levels of the software plumbing – including substantial differences in implementation of the “TSY” system for telephony plug-ins
  • There were even differences in the development tools that had to be used.

As a result, integration projects for new phones based on each of these releases ran into many delays and difficulties.

Symbian OS v8 was therefore designed as the “unification release”, seeking as much compatibility as possible with both of the previous branches of codeline. It made things considerably better – but some incompatibilities still remained.

As another example, I could write about the distress caused to the Symbian partner ecosystem by the big change in APIs moving from v8 to v9 (changes due mainly to the new PlatSec system for platform security). More than one very senior manager inside our customer companies subsequently urged us in very blunt language, “Don’t f****** break compatibility like that ever again!”

Looking outside the Symbian world, I note the following similar (but more polite) observation in the recent Wall Street Journal article, “Google’s Mobile-Handset Plans Are Slowed”:

Other developers cite hassles of creating programs while Android is still being completed [that is, while it is undergoing change]. One is Louis Gump, vice president of mobile for Weather Channel Interactive, which has built an Android-based mobile weather application. Overall, he says, he has been impressed by the Google software, which has enabled his company to build features such as the ability to look up the weather in a particular neighborhood.

But he says Weather Channel has had to “rewrite a few things” so far, and Google’s most recent revision of Android “is going to require some significant work,” he says.

2. Open Source makes fragmentation easier

If law 1 was obvious (even though some open source over-enthusiasts seem to be a bit blind to it), law 2 should be even clearer. Access to the source code for a system (along with the ability to rebuild the system) makes it easier for people to change that software system, to further their own development purposes. If the platform doesn’t meet a particular requirement of a product that is being built from that platform, hey, you can roll up your sleeves and change the platform. So the trunk platform stays on v2.0 (say) while your branch effectively defines a new version v2.0s (say). That’s one of the beauties of open source. But it can also be the prelude to fragmentation and all the pain that ensues.

The interesting question about open source is to figure out the circumstances in which fragmentation (also known as “forking”) occurs, and when it doesn’t.

3. Fragmentation can’t be avoided simply by picking the right contract

Various license contracts for open source software specify circumstances in which changes made by users of an open source platform need to be supplied back into the platform. Different contracts specify different conditions, and this can provoke lengthy discussions. However, for the moment, I want to sidestep these discussions and point out that contractual obligations, by themselves, cannot cure all fragmentation tendencies:

  • Even when users of a platform are obligated to return their changes to the platform, and do so, it’s no guarantee that the platform maintainers will adopt these changes
  • The platform maintainers may dislike the changes made by a particular user, and reject them
  • Although a set of changes may make good sense for one set of users, they may involve compromises or optimisations that would be unacceptable to other users of the platform
  • Reasons for divergence might include use of different hardware, running on different networks, the need to support specific add-on software, and so on.

4. The best guarantee against platform fragmentation is powerful platform leadership

Platform fragmentation has some similarities with broader examples of fragmentation. What makes some groups of people pull together for productive collaboration, whereas in other groups, people diverge following their own individual agendas? All societies need both cooperation and competition, but when does the balance tilt too far towards competition?

A portion of the answer is the culture of the society – as reflected in part in its legal framework. But another big portion of the answer is in the quality of the leadership shown in a society. Do people in the group believe that the leaders of the group can be relied on, to keep on “doing the right thing”? Or are the leaders seen as potentially misguided or incompetent?

Turning back to software, users of a platform will be likely to stick with the platform (rather than forking it in any significant way) if they have confidence that the people maintaining the trunk of the platform are:

  1. well-motivated, for the sake of the ecosystem as a whole
  2. competent at quickly and regularly making valuable new high quality releases that (again) meet the needs of the ecosystem as a whole.

Both the “character” (point 1) and the “competence” (point 2) are important here. As the Stephen Coveys (both father and son) have repeatedly emphasised, you can’t get good trust without having both good character and good competence.

5. The less mature the platform, the more likely it is to fragment – especially if there’s a diverse customer base

If a platform is undergoing significant change, users can reason that it’s unlikely to coalesce any time soon into a viable new release, and they’ll be more inclined to carry on working with their own side version of the platform, rather than waiting for what could be a long time for the evolving trunk of the platform to meet their own particular needs.

This tendency is increased if there are diverse customers, who each have their own differing expectations and demands for the still-immature software platform.

In contrast, if the core of the platform is rock-solid, and changes are being carefully controlled to well-defined areas within the platform, customers will be more likely to want to align their changes with the platform, rather than working independently. Customers will reason that:

  • The platform is likely to issue a series of valuable updates, over the months and years ahead
  • If I diverge from the platform, it will probably be hard, later on, to merge the new platform release material into my own fork
  • That is, if I diverge from the platform, I may gain short-term benefit, but then I’ll likely miss out on all the good innovation that subsequent platform releases will contain
  • So I’d better work closely with the developers of the trunk of the platform, rather than allowing my team to diverge from it.

Footnote: Personally, I see the Symbian Foundation codeline as considerably more mature (tried and tested in numerous successful smartphones) than the codeline of any roughly similar mobile-phone-oriented Linux-based foundation. That’s why I expect the Symbian Foundation codeline to come under less fragmentation pressure. I also believe that Symbian’s well-established software development processes (such as careful roadmap management, compatibility management, system architecture review, modular design, overnight builds, peer reviews, and systematic and extensive regression testing) are set to transfer smoothly into this new and exciting world, maintaining our track record of predictable high-quality releases – further lessening the risks of fragmentation.

27 June 2008

Aubrey de Grey’s preposterous campaign to cure aging

Filed under: Methuselah, UKTA — David Wood @ 6:39 am

At first sight, Aubrey de Grey is clearly preposterous. Not only does he look like a relic of the Middle Ages, with his huge long beard, but his ideas on potentially “curing aging” within the present generation apparently run counter to many well-established principles of science, society, philosophy, and even religion. So it’s no surprise that his ideas arouse some fervent opposition. See for example a selection of the online comments to the article about him, “The Fight to End Aging Gains Legitimacy, Funding”, in today’s Wired:

Guess what, jackasses… we’re supposed to die! Look up the 2nd law of thermodynamics, you might learn something. We’ve even evolved molecular mechanisms to make sure our cells can’t reproduce beyond a certain point… check out “Hayflick limit” on Wikipedia. The stark biological reality is that we are here to pass along our genes to our progeny and then DIE. What the hell, wasn’t this settled back in the 1800s? Why are we debating this stupidity?

and

Aging and death is an evolutionary response to cancer in mammals. You’ll have to resolve the cancer issue (and remember kids – cancer is actually a whole lot of different but related diseases) before you can resolve the aging and death issue.

However, first appearances can be deceptive. I had my own first serious discussions with Aubrey at the “Tomorrow’s People” conference in Oxford in March 2006. Not only did I pose my own questions, I listened and observed with increasing admiration as Aubrey addressed issues posed by other audience members, and during many coffee breaks as the conference progressed. Later that year in August, at Transvision 2006 in Helsinki (by the way, as well as being home to the world’s leading mobile phone manufacturer, Finland hosts a disproportionate number of self-described transhumanists; perhaps both reflect an unusually pragmatic yet rational approach to life), I had the chance to continue these discussions and observations. I saw that Aubrey has good, plausible answers to his critics. You can find many of these answers on his extensive website.

Since that time, I’ve been keen to take the opportunity to watch Aubrey speak whenever it arises. Unfortunately, I’ll miss the conference that’s happening at UCLA this weekend: “AGING: The Disease – The Cure – The Implications” – which has a session this afternoon (4pm West Coast time) that’s open to the general public. However, I’m eagerly looking forward to some good debate at the July 12 meeting of the UKTA, at Birkbeck College in London, where Aubrey will be one of the speakers on the topic, “Living longer and longer yet healthier and healthier: realistic grounds for hope?”. (If you’re interested in attending, and you’re on Facebook, you can indicate your interest and RSVP here.)

As I’ve come to see it, addressing aging by the smart and imaginative uses of technology fits well with the whole programme of medicine (which constantly intervenes to prevent nature taking its “natural toll” on the human body). It also has some surprising potential cost-saving benefits, as aging-related diseases are responsible for a very significant part of national health expenditure. But that’s only the start of the argument. To help explore many of the technical byways of this argument, I strongly recommend Aubrey’s 2007 book, “Ending Aging: The rejuvenation breakthroughs that could reverse human aging in our lifetime”.

In terms of disruptive technology trends (some of which I study in my day job), this is about as big as it gets.

I’ll end by quoting from today’s Wired article:

“In perhaps seven or eight years, we’ll be able to take mice already in middle age and treble their lifespan just by giving them a whole bunch of therapies that rejuvenate them,” de Grey said. “Gerontologists all over, even my most strident critics, will say yes, Aubrey de Grey is right.”

Even as he imagines completing Gandhi’s fourth step, de Grey always keeps his eye on the ultimate prize — the day when the aging-as-disease meme reaches the tipping point necessary to funnel really big money into the field.

“The following day, Oprah Winfrey will be saying, aging is a disease and let’s fix it right now,” de Grey said.
