dw2

29 August 2010

Understanding humans better by understanding evolution better

Filed under: collaboration, deception, evolution, RSA, UKH+ — David Wood @ 5:54 am

Many aspects of human life that at first seem weird and hard to explain can make a lot more sense once you see them from the viewpoint of evolution.

It was Richard Dawkins’ book “The Selfish Gene” which first led me to that conclusion, whilst I was still at university.  After “The Selfish Gene”, I read “Sociobiology: the new synthesis”, by E.O. Wilson, which gave other examples.  I realised it was no longer necessary to refer to concepts such as “innate wickedness” or “original sin” to explain why people often did daft things.  Instead, people do things (in part) because of underlying behavioural patterns which tended to make their ancestors more likely to leave successful offspring.

In short, you can deepen your understanding of humans if you understand evolution.  On the whole, attempts to get humans to change their behaviour will be more likely to succeed if they are grounded in an understanding of the real factors that led humans to tend to behave as they do.

What’s more, you can understand humans better if you understand evolution better.

In a moment, I’ll come to some interesting new ideas about the role played by technology in evolution.  But first, I’ll mention two other ways in which an improved understanding of evolution sheds richer light on the human condition.

1. Evolution often results in sub-optimal solutions

In places where an intelligent (e.g. human) designer would “go back to the drawing board” and introduce a new design template, biological evolution has been constrained to keep working with the materials that are already in play.  Biological evolution lacks true foresight, and cannot do what human designers would call “re-factoring an existing design”.

I’ve written on this subject before, in my review “The human mind as a flawed creation of nature” of the book by Gary Marcus, “Kluge – the haphazard construction of the human mind” – so I won’t say much more about that particular topic right now.  But I can’t resist including a link to a fascinating video in which Richard Dawkins demonstrates the absurdly non-optimal route taken by the laryngeal nerve of the giraffe.  As Dawkins says in the video, this nerve “is a beautiful example of historical legacy, as opposed to design”.  If you haven’t seen this clip before, it’s well worth watching, and thinking about the implications.

2. Evolution can operate at multiple levels

For a full understanding of evolution, you have to realise it can operate at multiple levels:

  • At the level of individual genes
  • At the level of individual organisms
  • At the level of groups of cooperating organisms.

At each level, there are behaviours which exist because they made it more likely for an entity (at that level) to leave descendants.  For example, groups of animals tend to survive as a group, if individuals within that group are willing, from time to time, to sacrifice themselves for the sake of the group.

The notion of group selection is, however, controversial among evolutionary theorists.  Part of the merit of books such as The Selfish Gene was that they showed how altruistic behaviour could be explained, in at least some circumstances, from the point of view of the survival of individual genes.  If individual A sacrifices himself for the sake of individuals B and C within the same group, it may well be that B and C carry many of the same genes as individual A.  This analysis seems to sidestep the major theoretical obstacle to the idea of group selection, which is as follows:

  • If individuals A1, A2, A3,… all have an instinct to sacrifice themselves for the sake of their wider group, it may well mean, other things being equal, that this group is initially more resilient than competing groups
  • However, an individual A4 who is individually selfish, within that group, will get the benefit of the success of the group, and the benefit of individual survival
  • So, over time, the group will tend to contain more individuals like the “free-rider” A4, and fewer like A1, A2, and A3
  • Therefore the group will degenerate into selfish behaviour … and this shows that the notion of “group selection” is flawed.

Nevertheless, I’ve been persuaded by writer David Sloan Wilson that the notion of group selection can still apply.  He gives an easy-to-read account of his ideas in his wide-ranging book “Evolution for Everyone: How Darwin’s Theory Can Change the Way We Think About Our Lives”.  In summary:

  • Group selection can apply, provided the group also has mechanisms to reduce free-riding behaviour by individuals
  • For example, people in the group might have strong instincts to condemn and punish people who try to take excess advantage of the generosity of others
  • So long as these mechanisms keep the prevalence of free-riding below a certain threshold, a group can reach a stable situation in which the altruism of the majority continues to benefit the group as a whole (a toy simulation of this dynamic is sketched below).
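
To make this dynamic concrete, here is a minimal toy simulation – my own illustrative sketch, not taken from Wilson’s book, and with arbitrarily chosen parameter values – of the two opposing forces: within each group, free-riders out-compete altruists unless they are punished, while between groups, altruist-rich groups out-compete the rest.

    import random

    def simulate(generations=200, groups=20,
                 benefit=0.10,     # group-level advantage per unit of altruist fraction
                 cost=0.03,        # individual fitness cost paid by each altruist
                 punishment=0.0):  # extra cost imposed on free-riders by the group
        """Toy model: altruists pay an individual cost but boost their whole group;
        free-riders avoid the cost unless the group punishes them."""
        fracs = [random.uniform(0.3, 0.7) for _ in range(groups)]  # altruist fraction per group

        for _ in range(generations):
            new_fracs, weights = [], []
            for a in fracs:
                # within-group selection: relative fitness of altruists vs free-riders
                w_alt = 1.0 - cost
                w_free = 1.0 - punishment * a      # punishment bites harder when altruists are common
                total = a * w_alt + (1 - a) * w_free
                new_fracs.append(a * w_alt / total)
                # between-group selection: altruist-rich groups found more daughter groups
                weights.append(1.0 + benefit * a)
            fracs = random.choices(new_fracs, weights=weights, k=groups)

        return sum(fracs) / groups                 # final average fraction of altruists

    if __name__ == "__main__":
        random.seed(1)
        print("No punishment:  ", round(simulate(punishment=0.0), 3))  # altruism decays towards zero
        print("With punishment:", round(simulate(punishment=0.1), 3))  # altruism persists

With these (arbitrary) numbers, between-group selection on its own cannot save the altruists, but adding even a modest punishment mechanism lets altruism persist – which is essentially Wilson’s point.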

(To be clear: this kind of altruism generally looks favourably only at others within the same group.  People who are outside your group won’t benefit from it.  An injunction such as “love your neighbour as yourself” applied in practice only to people within your group – not to people outside it.)

To my mind, this makes sense of a great deal of the mental gymnastics that we can observe: people combine elements of surreptitiously trying to benefit themselves (and their own families) whilst seeking to appear to the group as a whole as “good citizens”.  In turn, we are adept at seeing duplicity and hypocrisy in others.  There’s been a long “arms race” in which brains have been selected that are better at playing both sides of this game.

Incidentally, for another book that takes an entertaining and audacious “big picture” view of evolution and group selection, see the barn-storming “The Lucifer Principle: A Scientific Expedition into the Forces of History” by Howard Bloom.

3. The role of technology in evolution

At first sight, technology has little to do with evolution.  Evolution occurred in bygone times, whilst technology is a modern development – right?

Not true. First, evolution is very much a present-day phenomenon (as well as something that has been at work throughout the whole history of life).  Disease-causing microbes evolve rapidly under the pressure of different antibiotic cocktails.  And there is evidence that biological evolution still occurs for humans.  A 2009 article in Time magazine was entitled “Darwin Lives! Modern Humans Are Still Evolving”.  Here’s a brief extract:

One study, published in PNAS in 2007 and led by John Hawks, an anthropologist at the University of Wisconsin at Madison, found that some 1,800 human gene variations had become widespread in recent generations because of their modern-day evolutionary benefits. Among those genetic changes, discovered by examining more than 3 million DNA variants in 269 individuals: mutations that allow people to digest milk or resist malaria and others that govern brain development.

Second, technology is itself an ancient phenomenon – including creative use of sticks and stones.  Benefits of very early human use of sticks and stones included fire, weapons, and clothing.  What’s more, the advantages of tool use had a strange side-effect on human genetic evolution: as we became technologically stronger, we also became biologically weaker.  The Time magazine article mentioned above goes on to state the following:

According to anthropologist Peter McAllister, author of “Manthropology: the Science of Inadequate Modern Man“, the contemporary male has evolved, at least physically, into “the sorriest cohort of masculine Homo sapiens to ever walk the planet.” Thanks to genetic differences, an average Neanderthal woman, McAllister notes, could have whupped Arnold Schwarzenegger at his muscular peak in an arm-wrestling match. And prehistoric Australian Aborigines, who typically built up great strength in their joints and muscles through childhood and adolescence, could have easily beat Usain Bolt in a 100-m dash.

Timothy Taylor, Reader in Archaeology at the University of Bradford and editor-in-chief of the Journal of World Prehistory, tackles this same topic in his recent book “The Artificial Ape: How Technology Changed the Course of Human Evolution”.

Amazon.com describes this book as follows:

A breakthrough theory that tools and technology are the real drivers of human evolution.

Although humans are one of the great apes, along with chimpanzees, gorillas, and orangutans, we are remarkably different from them. Unlike our cousins who subsist on raw food, spend their days and nights outdoors, and wear a thick coat of hair, humans are entirely dependent on artificial things, such as clothing, shelter, and the use of tools, and would die in nature without them. Yet, despite our status as the weakest ape, we are the masters of this planet. Given these inherent deficits, how did humans come out on top?

In this fascinating new account of our origins, leading archaeologist Timothy Taylor proposes a new way of thinking about human evolution through our relationship with objects. Drawing on the latest fossil evidence, Taylor argues that at each step of our species’ development, humans made choices that caused us to assume greater control of our evolution. Our appropriation of objects allowed us to walk upright, lose our body hair, and grow significantly larger brains. As we push the frontiers of scientific technology, creating prosthetics, intelligent implants, and artificially modified genes, we continue a process that started in the prehistoric past, when we first began to extend our powers through objects.

Weaving together lively discussions of major discoveries of human skeletons and artifacts with a reexamination of Darwin’s theory of evolution, Taylor takes us on an exciting and challenging journey that begins to answer the fundamental question about our existence: what makes humans unique, and what does that mean for our future?

In an interview in the New Scientist, Timothy Taylor gives more details of his ideas:

Upright female hominins walking the savannah had a real problem: their babies couldn’t cling to them the way a chimp baby could cling to its mother. Carrying an infant would have been the highest drain on energy for a hominin female – higher than lactation. So what did they do? I believe they figured out how to carry their newborns using a loop of animal tissue. Evidence of the slings hasn’t survived, but in the same way that we infer lungs and organs from the bones of fossils that survive, it is from the stone tools that we can infer the bits that don’t last: things made from sinew, wood, leather and grasses…

Once you have slings to carry babies, you have broken a glass ceiling – it doesn’t matter whether the infant is helpless for a day, a month or a year. You can have ever more helpless young and that, as far as I can see, is how encephalisation took place in the genus Homo. We used technology to turn ourselves into kangaroos. Our children are born more and more underdeveloped because they can continue to develop outside the womb – they become an extra-uterine fetus in the sling. This means their heads can continue to grow after birth, solving the smart biped paradox. In that sense technology comes before the ascent to Homo. Our brain expansion only really took off half a million years after the first stone tools. And they continued to develop within an increasingly technological environment…

I’ve ordered Taylor’s book from Amazon and I expect it to be waiting for me at my home in the UK once I return from my current trip in Asia.  I’m also looking forward to hosting a discussion meeting on Saturday 11th Sept under the auspices of Humanity+ UK in London, where Timothy Taylor himself will be the main speaker. People on Facebook can register their interest in this meeting by RSVPing here.  There’s no charge to attend.

Another option to see Timothy Taylor lecture in person – for those able to spare time in the middle of the day on a Thursday (9th Sept) – will be at the RSA.  I expect there will be good discussion at both events, but the session at H+UK is longer (two hours, as opposed to just one at the RSA), and I expect more questions there about matters such as the likely role of technology in radically re-shaping the future development of humans.

Footnote: of course, the fact that evolution guided our ancestors to behave in certain ways is no reason for us to want to continue to behave in these ways.  But understanding the former is, in my view, very useful background knowledge for being able to devise practical measures to change ourselves.

1 February 2010

On the undue adulation for ‘You are not a gadget’

Filed under: books, collaboration, Open Source — David Wood @ 12:46 pm

Perhaps the most disturbing thing about Jaron Lanier’s new book “You are not a gadget: a manifesto” is the undue adulation it has received.

For example, here’s what eminent theoretical physicist Lee Smolin says about the book (on its back cover):

Jaron Lanier’s long-awaited book is fabulous – I couldn’t put it down and shouted out Yes! Yes! on many pages.

Smolin goes on:

Lanier is a rare voice of sanity in the debate about the relationship between computers and we human beings.  He convincingly shows us that the idea of digital computers having human-like intelligence is a fantasy.

However, when I read it, far from shouting out Yes! Yes! on many pages, the thoughts that repeatedly came to my mind were: No! No! What a misunderstanding! What a ridiculous straw man! How poor! How misleading!

The titles of reviews of Lanier’s book on Amazon.com show lots more adulation:

  • A brilliant work of Pragmatic “Techno-Philosophy” (a five-star review)
  • Thought provoking and worthy of your time (ditto)
  • One of the best books in a long while (ditto)
  • A tribute to humanity (ditto)

That last title indicates what is probably going on.  Many people feel uneasy that “humanity” is seemingly being stretched, trampled, lost, and reduced, by current changes in our society – including the migration of so much culture online, and the increasing ubiquity of silicon brains.  So they are ready to clutch at straws, with the hope of somehow reaffirming a more natural state of humanity.

But this is a bad straw to clutch at.

Interestingly, even one of the five-star reviews has to remark that there are significant mistakes in Lanier’s account:

While my review remains positive, I want to point out one major problem in the book. The account of events on p. 125-126 is full of misinformation and errors. The LISP machine in retrospect was a horrible idea. It died because the RISC and MIPS CPU efforts on the west coast were a much better idea. Putting high-level software (LISP) into electronics was a bad idea.

Stallman’s disfunctional relationship with Symbolics is badly misrepresented. Stallman’s licence was not the first or only free software licence…

My own list of the misinformation and errors in this book would occupy many pages.  Here’s just a snippet:

1. The iPhone and UNIX

Initially, I liked Lanier’s account of the problems caused by lock-in.  But then (page 12) he complains, incredibly, that some UI problems on the iPhone are due to the fact that the operating system on the iPhone has had to retain features from UNIX:

I have an iPhone in my pocket, and sure enough, the thing has what is essentially UNIX in it.  An unnerving element of this gadget is that it is haunted by a weird set of unpredictable user interface delays.  One’s mind waits for the response to the press of a virtual button, but it doesn’t come for a while.  An odd tension builds during that moment, and easy intuition is replaced by nervousness.  It is the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years.

As someone who has been involved for more than 20 years with platforms that enable UI experience, I can state categorically that delays in UI can be addressed at many levels.  It is absurd to suggest that a hangover from UNIX days means that all UIs on mobile devices (such as the iPhone) are bound to suffer unnerving delays.

2. Obsession with anonymous posters

Time and again, Lanier laments that people are encouraged to post anonymously on the Internet.  Because they become anonymous, he argues, they are de-humanised.

My reaction:

  • It is useful that the opportunity for anonymous posting exists;
  • However, in the vast bulk of the discussions in which I participate, most people sign their names, and links are available to their profiles;
  • Rather than a sea of anonymous interactions, there’s a sea of individuals ready to become better known, each with their own fascinating quirks and strengths.

3. Lanier’s diatribe against auto-layout features in Microsoft Word

Lanier admits (page 27) that he is “all for the automation of petty tasks” by software.  But (like most of us) he’s had the experience where Microsoft Word makes a wrong decision about an automation it presumes we want to do:

You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline…

This type of design feature is nonsense, since you end up having to do more work than you would otherwise in order to manipulate the software’s expectations of you.

Most people would say this just shows that there are still bugs in the (often useful) auto-layout feature.  Not so Lanier.  Instead, incredibly, he imputes a sinister motivation onto the software’s designers:

The real [underlying] function of the feature isn’t to make life easier for people.  Instead, it promotes a new philosophy: that the computer is evolving into a life-form that can understand people better than people can understand themselves.

Lanier insists there’s a dichotomy: either a software designer is trying to make tasks easier for users, or the software designer has views that computers will, one day, be smarter than humans.  Why would the latter view (if held) mean the former cannot also be true? And why is “this type of design feature” nonsense?

4. Analysis of Alan Turing

Lanier’s analysis (and psycho-analysis) of AI pioneer Alan Turing is particularly cringe-worthy, and was the point where, for me, the book lost all credibility.

For example, Lanier tries to score points against Turing by commenting (page 31) that:

Turing’s 1950 paper on the test includes this extraordinary passage: “In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates”.

However, referring to the context (Turing’s paper is available online here) indicates that Turing is, in the quoted passage, in the midst of seeking to engage with a number of different objections to his main hypothesis.  Each time, he seeks to enter into the mindset of people who might oppose his thinking.  This extract is from the section “The Theological Objection”.  Immediately after the section highlighted by Lanier, Turing’s paper goes on to comment:

However, this is mere speculation. I am not very impressed with theological arguments whatever they may be used to support. Such arguments have often been found unsatisfactory in the past. In the time of Galileo it was argued that the texts, “And the sun stood still . . . and hasted not to go down about a whole day” (Joshua x. 13) and “He laid the foundations of the earth, that it should not move at any time” (Psalm cv. 5) were an adequate refutation of the Copernican theory. With our present knowledge such an argument appears futile. When that knowledge was not available it made a quite different impression.

Given a choice between the analytic powers of Turing and those of Lanier, I would pick Turing very nearly 100% of the time.

5. Clay Shirky and the latent cognitive surplus

Lanier’s treatment of Clay Shirky’s ideas is equally deplorable – sleight of hand again distorts the original message.  It starts off fine, with Lanier quoting an April 2008 article by Shirky:

And this is the other thing about the size of the cognitive surplus we’re talking about. It’s so large that even a small change could have huge ramifications. Let’s say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year. That’s about five times the size of the annual U.S. consumption. One per cent of that  is 100 Wikipedia projects per year worth of participation.

I think that’s going to be a big deal. Don’t you?
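
For what it’s worth, the arithmetic behind that claim is easy to check.  (The figure of roughly 100 million person-hours as the effort embodied in Wikipedia is Shirky’s own back-of-the-envelope estimate, used here purely as an assumption.)

    # Rough check of Shirky's cognitive-surplus arithmetic
    tv_hours_per_year = 1_000_000_000_000  # ~1 trillion hours of TV per year (Internet-connected population)
    carved_out = 0.01 * tv_hours_per_year  # the 1% Shirky imagines being redirected to producing and sharing
    wikipedia_hours = 100_000_000          # Shirky's estimate of the person-hours embodied in Wikipedia

    print(carved_out / wikipedia_hours)    # 100.0 -> "100 Wikipedia projects per year"

Whatever one makes of the estimates, the quantity under discussion is billions of hours per year, not the isolated seconds that Lanier goes on to ridicule.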

In Shirky’s view, there’s lots of time available for people to apply to creative tasks, if only they would spend less time watching sitcoms on TV.  Lanier pokes nauseating fun at this suggestion, but only (page 49) by recasting the time available as “seconds of salvaged” time.  (Who mentioned seconds?  Surely Shirky is talking about people applying themselves for longer than seconds at a time.)  Lanier labours his point with a ridiculous hyperbole:

How many seconds of salvaged erstwhile television time would need to be harnessed to replicate the achievements of, say, Albert Einstein?  It seems to me that even if we could network all the potential aliens in the galaxy – quadrillions of them, perhaps – and get each of them to contribute some seconds to a physics wiki, we would not replicate the achievements of even one mediocre physicist, much less a great one.

6. Friends and Facebook friends

Lanier really seems to believe (page 53) that people who use Facebook cannot distinguish between “Facebook friends” and “real world friends”.  He should talk more often to people who use Facebook, to see if they really are so “reduced” as he implies.

7. Lack of appreciation for security researchers

Lanier also rails (page 65) against people who investigate potential security vulnerabilities in software systems.

It seems he would prefer us all to live in ignorance about these potential vulnerabilities.

8. The Long Tail and individuals

Lanier cannot resist an unwarranted attack on the notion of the long tail.  Describing a proposal of his own for how authors and artists could be rewarded for Internet usage of their material, Lanier makes the bizarre comment (page 101):

Note that this is a very different idea from the long tail, because it rewards individuals rather than cloud owners

Where did the assumption come from that writers who describe the Long Tail are only interested in rewarding “cloud owners” such as Amazon and Google?

9. All generations from Generation X onwards are somnolent

Lanier bemoans the blandness of the youth (page 128):

At the time that the web was born, in the early 1990s, a popular trope was that a new generation of teenagers, raised in the conservative Reagan years, had turned out exceptionally bland.  The members of “Generation X” were characterised as blank and inert.  The anthropologist Steve Barnett compared them to pattern exhaustion, a phenomenon in which a culture runs out of variations of traditional designs in their pottery and becomes less creative.

A common rationalisation in the fledgling world of digital culture back then was that we were entering a transitional lull before a creative storm – or were already in the eye of one.  But the sad truth is that we were not passing through a momentary lull before a storm.  We had instead entered a persistent somnolence, and I have come to believe that we will only escape it when we kill the hive.

My experience is at radical odds with this.  Through my encounters with year after year of graduate recruit intake at Symbian, I found many examples, each year, of youth full of passion, verve, and creativity.

The cloud which Lanier fears so much doesn’t stifle curiosity and creativity, but provides many means for people to develop a fuller human potential.

10. Open Source and creativity

Lanier complains that Open Source – and, more generally, Web 2.0 collaborative processes – has failed to produce anything of real value.  All it can do, he says (page 122 – and repeated numerous times elsewhere), is to imitate: Linux is a copy of UNIX and Wikipedia is a copy of Encyclopaedia Britannica.

But what about the UI creativity of Firefox (an open source web browser that introduced new features ahead of the Microsoft alternative)?

How about the creativity of many of the applications on mobile devices, such as the iPhone, that demonstrate mashups of information from diverse sources (including location-based information)?

Even to say that Wikipedia is derivative from Britannica misses the point, of course, that material in Wikipedia is updated so quickly.  Yes, there’s occasional unreliability, but people soon learn how to cross-check it.

It goes on…

For each point I’ve picked out above, there are many others I could have shared as well.

Lanier is speaking this evening (Monday 1st February) at London’s RSA.  The audience is usually respectful, but can ask searching questions.  This evening, if the lecture follows the same lines as the book, I expect to see more objections than usual.  However, I also expect there will be some in the audience who jump at the chance to defend humanity from the perceived incursions from computers and AI.

For a wider set of objections to Lanier’s ideas – generally expressed much more politely than my comments above – see this compendium from Edge.

My own bottom line view is that technology will significantly enhance human experience and creativity, rather than detract from it.

To be clear, I accept that there are good criticisms that can be made of the excesses of Web 2.0, open source, and so on.  For example, the second half of Nick Carr’s book “The Big Switch: Rewiring the World, from Edison to Google” is a good start.  (Andrew Orlowski produced an excellent review of Carr’s book, here.)  Lanier’s book is not a good contribution.

24 December 2009

Predictions for the decade ahead

Before highlighting some likely key trends for the decade ahead – the 2010’s – let’s pause a moment to review some of the most important developments of the last ten years.

  • Technologically, the 00’s were characterised by huge steps forwards with social computing (“web 2.0”) and with mobile computing (smartphones and more);
  • Geopolitically, the biggest news has been the ascent of China to becoming the world’s #2 superpower;
  • Socioeconomically, the world is reaching a deeper realisation that current patterns of consumption cannot be sustained (without major changes), and that the foundations of free-market economics are more fragile than was previously widely thought to be the case;
  • Culturally and ideologically, the threat of militant Jihad, potentially linked to dreadful weaponry, has given the world plenty to think about.

Looking ahead, the 10’s will very probably see the following major developments:

  • Nanotechnology will progress in leaps and bounds, enabling increasingly systematic control, assembly, and reprogramming of matter at the molecular level;
  • In parallel, AI (artificial intelligence) will rapidly become smarter and more pervasive, and will be manifest in increasingly intelligent robots, electronic guides, search assistants, navigators, drivers, negotiators, translators, and so on.

We can say, therefore, that the 2010’s will be the decade of nanotechnology and AI.

We’ll see the following applications of nanotechnology and AI:

  • Energy harvesting, storage, and distribution (including via smart grids) will be revolutionised;
  • Reliance on existing means of oil production will diminish, being replaced by greener energy sources, such as next-generation solar power;
  • Synthetic biology will become increasingly commonplace – newly designed living cells and organisms that have been crafted to address human, social, and environmental need;
  • Medicine will provide more and more new forms of treatment, that are less invasive and more comprehensive than before, using compounds closely tailored to the specific biological needs of individual patients;
  • Software-as-a-service, provided via next-generation cloud computing, will become more and more powerful;
  • Experience of virtual worlds – for the purposes of commerce, education, entertainment, and self-realisation – will become extraordinarily rich and stimulating;
  • Individuals who can make wise use of these technological developments will end up significantly cognitively enhanced.

In the world of politics, we’ll see more leaders who combine toughness with openness and a collaborative spirit.  The awkward international institutions from the 00’s will either reform themselves, or will be superseded and surpassed by newer, more informal, more robust and effective institutions, that draw a lot of inspiration from emerging best practice in open source and social networking.

But perhaps the most important change is one I haven’t mentioned yet.  It’s a growing change of attitude, towards the question of the role of technology in enabling fuller human potential.

Instead of people decrying “technical fixes” and “loss of nature”, we’ll increasingly hear widespread praise for what can be accomplished by thoughtful development and deployment of technology.  As technology is seen to be able to provide unprecedented levels of health, vitality, creativity, longevity, autonomy, and all-round experience, society will demand a reprioritisation of resource allocation.  Previous sacrosanct cultural norms will fall under intense scrutiny, and many age-old beliefs and practices will fade away.  Young and old alike will move to embrace these more positive and constructive attitudes towards technology, human progress, and a radical reconsideration of how human potential can be fulfilled.

By the way, there’s a name for this mental attitude.  It’s “transhumanism”, often abbreviated H+.

My conclusion, therefore, is that the 2010’s will be the decade of nanotechnology, AI, and H+.

As for the question of which countries (or regions) will play the role of superpowers in 2020: it’s too early to say.

Footnote: Of course, there are major possible risks from the deployment of nanotechnology and AI, as well as major possible benefits.  Discussion of how to realise the benefits without falling foul of the risks will be a major feature of public discourse in the decade ahead.

6 December 2009

The art of community

Filed under: books, catalysts, collaboration, ecosystem management — David Wood @ 8:42 pm

A PDF version of the presentation I gave last Thursday to a meeting of the Software/Open Source SIG of the Cambridge Wireless Network, “Open ecosystems – Communities that build the future“, is now available for download from the resources page of the Cambridge Wireless website.

The overall contents of my presentation are introduced by the text of slide 2, and slide 12 provides a summary of the second half of the presentation.

Someone who clearly shares my belief in the importance of community, and in the fact that there are key management skills that need to be brought to bear to get the best out of the potential of a community, is Jono Bacon, who works at Canonical as the Ubuntu Community Manager.  Jono’s recent book, “The art of community: building the new age of participation” has been widely praised – deservedly so.

The whole book is available online for free download.

Over the course of 11 chapters spanning 360 pages, Jono provides a host of practical advice about how to best cultivate a community.  Although many of the examples he provides are rooted in the world of open source software (and, in particular, the community which supports the Ubuntu distribution of Linux), the principles generally apply far more widely – to all sorts of communities, particularly communities with a significant online presence and significant numbers of volunteers.  To quote from the preface:

The Art of Community is not specifically focused on computing communities, and the vast majority of its content is useful for anything from political groups to digital rights to knitting and beyond.

Within this wide range of possible communities, this book will be useful for a range of readers:

  • Professional community managers – If you work in the area of community management professionally
  • Volunteers and community leaders – If you want to build a strong and vibrant community for your volunteer project
  • Commercial organizations – If you want to work with, interact with, or build a community around your product or service
  • Open source developers – If you want to build a successful project, manage contributors, and build buzz
  • Marketeers – If you want to learn about viral marketing and building a following around a product or service
  • Activists – If you want to get people excited about your cause

Every chapter in this book is applicable to each of these roles. While technology communities provide many examples throughout the book, the purpose of these examples requires little technical knowledge.

I’ve just finished reading all 360 pages.  Each new chapter introduces important new principles and techniques.  I was reading the book for three reasons:

  1. To compare ideas about the best way to run parts of an open source software community (as used to be part of my responsibilities at the Symbian Foundation);
  2. To get ideas about how to boost the emerging community of people who share my interest in the “Humanity Plus” ideas covered in some of my other blog postings;
  3. To consider the possible wider role of well-catalysed communities in addressing the bigger challenges and opportunities facing society at the present time.

The book succeeded, for me, on all three levels.  Parts that I particularly liked included:

  • The importance of establishing a compelling mission statement for a community (Chapter 2)
  • Tips on building simple, effective, and nonbureaucratic processes that enable your community to conduct tasks, work together, and share their successes (Chapter 4)
  • How to build excitement and buzz around your community – and some telling examples of how not to do this (Chapter 6)
  • The importance of open and transparent community governance principles – and some reasons for occasionally limiting openness (Chapter 8)
  • Guidance on how to identify, handle, and prevent irksome conflict (ahead of time, if possible), and on dealing with divisive personalities (Chapter 9)
  • Ideas on running events – where (if done right) the “community” feeling can deepen to something more akin to “family” (Chapter 10).

(This blogpost contains an extended table of contents for Jono’s book.  And see here for a short video of Jono describing his book.)

The very end of the book mentions an annual conference called “The community leadership summit”.  To quote from the event website:

Take the microphone and join experienced community leaders and organizers to discuss, debate and explore the many avenues of building strong community in an open unconference setting, complimented by additional structured presentations.

I’m attracted by the idea of participating in the 2010 version of that summit 🙂

7 March 2009

What have operators done for us recently?

Filed under: collaboration, innovation, Mobile Monday, operators — David Wood @ 2:13 pm

Mobile Monday in London this Monday evening (9th March) will be on the topic of “What have operators done for us recently?”.

To quote from the event website,

What have mobile operators done for innovators and developers, lately? Our next MobileMonday London event will explore this issue. The event will be held on March 9th at CBI conference centre (at Centrepoint Tower) at 6:00 pm, sponsored by O2 Litmus and Vodafone. Panelists will include James Parton from O2, Terence Eden from Vodafone, Steve Wolak from Betavine, David Wood from Symbian Foundation and Jo Rabin representing dotMobi. The event will be chaired by Anna Gudmundson from AdIQ and Dan Appelquist will be your host for the evening.

At the time of writing, there are still a few registration slots left. If you’re in or around London on Monday evening, and you’re at all interested in the future of the mobile phone industry, you will almost certainly find the meeting worthwhile. From my past experience, these events are great for networking as well as for highlighting ideas and sharply debugging them. The breadth and depth of experience in the room mean that any superficially attractive proclamations from panellists are quickly challenged. I typically leave these meetings wiser than when I went in (and often chastened, too).

Usually people blog meetings after they happen (or whilst they are happening). In this case, I’d like to set down a few thoughts in advance.

Early last year, Symbian commissioned a third party report into the viewpoints and experiences of mobile developers. The report had a Californian bias but the results are familiar even in the context of Europe. The report did not specifically seek out the opinions of developers towards network operators, but these opinions came through loud and clear regardless. Here are some representative comments:

  • “Everyone in tech has rope burns around their necks from doing business with the carriers [network operators]. They hung themselves trying to do carrier deals.”
  • “The operator is an adversary, not a partner.”
  • “The basic problem with mobile is that operators are in the way.”
  • “The reality is that the mobile operators will screw you, unless they already want to do what you’re developing. They always ask, ‘What’s in it for me?'”

I raise these comments here, not because I endorse them, but because they articulated a set of opinions that seemed to be widely held roughly twelve months ago.

Operators are (of course!) aware of these perceptions too, and are seeking to address these concerns. At the Mobile Monday meeting, we’ll have a chance to evaluate progress.

Ahead of the meeting, I offer the following six points for consideration:

1: With their widespread high bandwidth coverage, the wireless networks are a modern-day technological marvel – perhaps one of the seven wonders of the present era. These networks need maintenance and care. For this reason, network operators are justified in seeking to protect access to this resource. If these resources become flooded with too much video transfer, manic automated messaging, or deleterious malware, we will all be the losers as a result.

2: Having invested very considerably in the build-up of these networks, it is completely reasonable for operators to seek to protect a significant revenue flow from the utilisation of these networks – especially from core product lines such as voice and SMS. Anything that risks destroying this revenue flow is bound to cause alarm.

3: The potential upside of new revenue flow from innovative new data services often seems dwarfed by the potential downside from loss of revenues from existing services, if networks are opened too freely to new players. In other words, network operators all face a case of the Innovator’s Dilemma. When it comes to the strategic crunch, innovative new business potential often loses out to maintaining the existing lines of business.

4. New lines of revenue for operators – to supplement the old faithfuls of voice and SMS – include the following:

  • Straightforward data usage charges;
  • A micro-share of monetary transactions (such as mobile banking, or goods being bought or sold or advertised) that are carried out over the wireless network;
  • Reliable provision of high-quality services (such as would support crystal-clear telephone conference calls);
  • Premium charges for personalised services (such as answers to searches or enquiries)
  • A share of the financial savings that companies can achieve through efficiency gains from the intelligent deployment of new mobile services; etc.

But in all cases, the evolution of these new lines of service is likely to be faster and more successful if new entrepreneurs and innovators can be involved and feel welcome.

5. The best step to involving more innovators in the development of commercially significant new revenues – and to solving the Innovator’s Dilemma mentioned above – is to systematically identify, analyse, and (as far as possible) eliminate all cases of friction in the existing mobile ecosystem.

6. Three instances of mobile ecosystem friction stand out:

  • The diversity (fragmentation) of different operator developer support programmes. Developers have to invest considerable effort in joining and participating in each different scheme. Why can’t there be greater commonality between these programmes?
  • The hurdles involved with getting sophisticated applications approved for usage on networks and/or handsets – developers often feel that they are being forced to go through overly-onerous third party testing and verification hoops, in order to prove that their applications are trustworthy. Some element of verification is probably inevitable, but can’t we find ways to streamline it?
  • The difficulties consumers face in finding and then installing and using applications that reliably meet their expectations.

In all cases, it’s my view that a collaborative approach is more likely to deliver lasting value to the industry than a series of individualist approaches.

9 February 2009

Preparing for Barcelona

Filed under: challenge, collaboration, Open Source, party — David Wood @ 11:22 pm

What’s the issue that deserves the fullest attention of the best minds of the mobile industry representatives who will be gathering next week at the Mobile World Congress event in Barcelona?

That was one of the questions that I got asked in a quick-fire practice session last week, by a journalist who was employed for the morning to take part in a “media training session” for people from the Symbian Foundation launch team. The idea of the session was to bombard participants with potentially awkward questions, so we could test out various ways to respond. The questions ranged from tame to taxing, from straightforward to subtle, and from respectful to riotous.

One possible answer to the question at the top of this posting is that it is the issue of user experience which deserves the fullest attention. If users continue to be confronted by inflexible technology with unfriendly interfaces, they won’t get drawn in to make fullest use of mobile devices and services.

Another possible answer is that it is the issue of complexity which deserves the fullest attention. In this line of thinking, overly complex UIs are just one facet of the problem of overly complex mobile technology. Other facets include:

  • Overly difficult development cycles (resulting in products coming late to the market, and/or products released with too many defects), and
  • Overly exercised CPU cores and overly bloated software (resulting in products with poor battery life and high cost).

However, on reflection, I offer instead the following answer: it is the issue of collaboration which deserves the fullest attention. We need to find better ways for all the good resources of the mobile industry to be productively aligned in addressing the same key risks and opportunities, rather than our good intentions ending up pulling in different directions. The problems that we collectively face (including the problems of poor user experience and overly complex software) are surely capable of resolution, if only we can find the way to work together on solutions, rather than our different approaches ending up contradicting each other and confusing matters.

Open source, whereby people can look at source code and propose changes without having to gain special permission in advance, is part of the solution to improving collaboration. Open discussion and open governance take the solution further. Yet another step comes from collaboration tools that provide first-rate source configuration management and issue tracking.

But collaboration often needs clear leadership to make it a reality: a sufficiently compelling starting point on which further collaboration can take place. Without such a starting point, none of the other items I mentioned can hope to make a lasting difference.

That brings me back to the role of the Symbian Foundation. The Symbian Foundation is offering the entire mobile industry what it claims to be the best possible starting point for further collaboration:

  • A tried and tested codebase of 40 million lines of code;
  • Processes and disciplines that cope with pressures from multiple divergent stakeholders;
  • A visionary roadmap that is informed by the thinking of existing mobile leaders, and which spells out the likely evolution of key mobile technologies.

The Symbian Foundation will be holding a welcome party on Monday evening at Barcelona (8pm-11pm, 16th February). I’ve been allocated a small number of tickets to this party, to pass to selected bloggers, analysts, and other deep thinkers of the mobile industry. If you’d like to join this party to discuss the points I’ve made in this posting (or any of the other issues relevant to the formation of the Symbian Foundation), I set you this challenge. Please drop me an email, and provide a link to some of your online writings on applicable topics. (By the way, you don’t need to agree with my viewpoint. You just need to demonstrate that you’re going to enter into an open-minded, friendly, and constructive debate!)

15 December 2008

Symbian Signed basics

Filed under: collaboration, developer experience, operators, Symbian Signed — David Wood @ 9:19 am

It’s not just Symbian that runs into some criticism over the operation of application certification and signing programs. (See e.g. the discussion on “Rogue Android apps rack up hidden charges”.)

This is an area where there ought ideally to be a pooling of insights and best practice across the mobile industry.

On the other hand, there are plenty of conflicting views about what’s best:

  • “Make my network more secure? Yes, please!”
  • “Make it easier to develop and deploy applications? Yes, please!”

If we go back to basics, what are the underlying requirements that lead to the existence of application certification and signing schemes? I append a list of potential requirements. I’ll welcome feedback on the importance of various items on this list.

Note: I realise that many requirements in this list are not addressed by the current schemes.

a. Avoiding users suffering from malware

To avoid situations where users suffer at the hands of malware. By “malware”, I mean badly behaved software (whether the software is intentionally or unintentionally badly behaved).

Examples of users suffering from malware include:

  1. Unexpectedly high telephone bills
  2. Unexpectedly low battery life
  3. Inability to make or receive phone calls
  4. Leakage without approval of personal information such as contacts, agenda, or location
  5. Corruption of personal information such as contacts, agenda, or location
  6. Leaving garbage or clutter behind on the handset, when the software is uninstalled
  7. Interference with the operation of other applications, or other impact to handset performance.

b. Establishing user confidence in applications

To give users confidence that the applications they install will add to the value of the handset rather than detract from it.

c. Reducing the prevalence of cracked software

To make it less likely that users will install “cracked” free versions of commercial applications written by third parties, thereby depriving these third parties of income.

d. Avoiding resource-intensive virus scanners

To avoid mobile phones ending up needing to run the same kind of resource-intensive virus scanners that are common (and widely unloved) on PCs.

e. Avoiding networks suffering from malware

To avoid situations where network operators suffer at the hands of malware or unrestricted add-on applications. Examples of network operators suffering from such software include:

  1. Having to allocate support personnel for users who encounter malware on their handsets
  2. The network being overwhelmed as a result of data-intensive applications
  3. Reprogrammed cellular data stacks behaving in ways that threaten the integrity of the wireless network and thereby invalidate the FCC (or similar) approval of the handset
  4. DRM copy protected material, provided or distributed by the network operator, being accessed or copied by third party software in ways that violate the terms of the DRM licence
  5. Revenue opportunities for network operators being lost due to alternative lower-cost third party applications being available.

f. Keeping networks open

To prevent network operators from imposing a blanket rule against all third party applications, which would in turn:

  • Limit the innovation opportunities for third party developers
  • Limit the appearance of genuinely useful third party applications.

g. Avoiding fragmentation of signing schemes

To avoid a situation where network operators each implement their own application certification and approval scheme, thereby significantly multiplying the effort required by third party developers to make their applications widely available; far better, therefore, for the Symbian world to agree on a single certification and approval mechanism, namely Symbian Signed.
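
To make the discussion a little more concrete, here is a purely hypothetical sketch of the kind of install-time decision that an application certification scheme has to make.  The capability names and signing tiers below are illustrative inventions, loosely inspired by platform security models of this kind – they are not the actual Symbian Signed rules.

    # Hypothetical install-time capability check for a signing scheme.
    # Capability names and tiers are illustrative only, not the real Symbian Signed policy.

    USER_GRANTABLE = {"ReadUserData", "WriteUserData", "NetworkServices", "Location"}
    NEEDS_CERTIFICATION = {"ReadDeviceData", "WriteDeviceData", "PowerMgmt"}
    MANUFACTURER_ONLY = {"AllFiles", "DRM", "NetworkControl"}

    def install_decision(requested_capabilities, signing_level):
        """Return 'install', 'prompt user' or 'reject' for a candidate application."""
        requested = set(requested_capabilities)
        if requested & MANUFACTURER_ONLY and signing_level != "manufacturer":
            return "reject"
        if requested & NEEDS_CERTIFICATION and signing_level not in ("certified", "manufacturer"):
            return "reject"
        if requested & USER_GRANTABLE and signing_level == "unsigned":
            return "prompt user"  # unsigned or self-signed apps fall back on user consent
        return "install"

    print(install_decision(["NetworkServices"], "unsigned"))  # prompt user
    print(install_decision(["ReadDeviceData"], "certified"))  # install
    print(install_decision(["AllFiles"], "certified"))        # reject

Most of the trade-offs between the requirements above show up in where these thresholds are drawn: move more capabilities into the certified tier and requirements (a) to (e) are served more strongly, but at the cost of requirement (f).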

28 November 2008

Why can’t we all just get along?

Filed under: collaboration, compact framework, fragmentation, runtimes — David Wood @ 8:58 pm

Blogger Tomaž Štolfa asks me, in a comment to one of my previous posts,

I am also wondering why you are not trying to explore a non-os specific scenario?

Developers and service designers do not want to be bound to a single platform when developing a service for the masses. So it would make much more sense to see a bright future with cross-platform standards set by an independent party (W3C?).

If the industry will not agree on standards quickly enough Adobe (or some other company) will provide their own.

It’s a good question. I’m actually a huge fan of multi-platform standards. Here’s just a few of many examples:

  • Symbian included an implementation of Java way back in v4 of Symbian OS (except that the OS was called “EPOC Release 4” at the time);
  • Symbian was a founder member of the Open Mobile Alliance – and I personally served twice on the OMA Board of Directors;
  • I have high hopes for initiatives such as OMTP’s BONDI that is seeking to extend the usefulness of web methods on mobile devices.

Another example of a programming method that can be applied on several different mobile operating systems is Microsoft’s .NET compact framework. Take a look at this recent Microsoft TechEd video in which Andy Wigley of Appa Mundi interviews Mike Welham, CTO of Red Five Labs, about the Red Five Labs Net60 solution that allows compact framework applications to run, not only on Windows Mobile, but also on S60 devices.

There’s no doubt in my mind that, over time, some of these intermediate platforms will become more and more powerful – and more and more useful. The industry will see increasing benefits from agreeing and championing fit-for-purpose standards for application environments.

But there’s a catch. The catch applies, not to the domain of add-on after market solutions, but to the domain of device creation.

Lots of the software involved in device creation cannot be written in these intermediate platforms. Instead, native programming is required – and involves exposure to the underlying operating system. That’s when the inconsistencies at the level of native operating systems become more significant:

  • Differences between clearly different operating systems (eg Linux vs. Windows Mobile vs. Symbian OS);
  • Differences between different headline versions of the same operating system (eg Symbian OS v8 vs. Symbian OS v9);
  • Differences between different flavours of the same operating system, evolved by different customers (eg Symbian OS v7.0 vs. Symbian OS v7.0s);
  • Differences between different customisations of the same operating system, etc, etc.

(Note: I’ve used Symbian OS for most of these examples, but it’s no secret that the Mobile Linux world has considerably more internal fragmentation than Symbian. The integration delays in that world are at least as bad.)
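
To picture where these differences bite, here is a hypothetical sketch – not code from any real platform – of a portable runtime API that has to hide native divergence.  The call that an after-market application makes is uniform, but every combination of operating system, version, and customisation underneath it is a separate native integration.

    import sys

    # Hypothetical sketch: a portable runtime API papering over native divergence.
    # The backend names and platform checks below are invented for illustration.

    def get_battery_level():
        """Uniform call that an after-market application can use on any device."""
        return _select_backend()()

    def _select_backend():
        # Each branch stands for a distinct native integration that the platform team
        # (not the application author) has to write, test, and maintain; OS versions
        # and customisations multiply the branches still further.
        if sys.platform.startswith("linux"):
            return _battery_via_sysfs      # one of many Mobile Linux variants
        if sys.platform.startswith("win"):
            return _battery_via_win32      # e.g. Windows Mobile
        return _battery_via_symbian_hal    # and Symbian OS v8 vs v9 differ again

    def _battery_via_sysfs():
        return 80  # stub: a real backend would read /sys/class/power_supply/...

    def _battery_via_win32():
        return 80  # stub: a real backend would call the native power-status API

    def _battery_via_symbian_hal():
        return 80  # stub: a real backend would query the HAL battery attribute

    if __name__ == "__main__":
        print(get_battery_level())

Multiply that pattern across every area of device functionality, and the cost of fragmentation lands squarely on the device creation side.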

From my own experience, I’ve seen many device creation projects very significantly delayed as a result of software developers encountering nasty subtle differences between the native operating systems on different devices. Product quality suffered as a result of these project schedule slips. The first loser was the customer, on encountering defects or a poor user experience. The second loser was the phone manufacturer.

This is a vexed problem that cannot be solved simply by developing better multi-os standard programming environments. Instead, I see the following as needed:

  1. Improved software development tools, that alert systems integrators more quickly to the likely causes of unexpected instability or poor performance on phones (including those problems which have their roots in unexpected differences in system behaviour); along this line, Symbian has recently seen improvements in our own projects from uses of the visual tools included in the Symbian Analysis Workbench;
  2. A restructuring of the code that runs on the device in order to allow more of that code to be written in standard managed code environments – Symbian’s new Freeview architecture for networking IP is one step in this direction;
  3. Where possible, APIs used by aspects of the different native operating systems should become more and more similar – for example, I like to imagine that, one day, the same device driver will be able to run on more than one native operating system
  4. And, to be frank, we need fewer native operating systems; this is a problem that will be solved over the next couple of years as the industry gains more confidence in the overall goodness of a small number of the many existing mobile operating systems.

The question of technical fragmentation is, of course, only one cause of needless extra effort having to be exerted within the mobile industry. Another big cause is that different players in the value chain are constantly facing temptation to try to grab elements of value from adjacent players. Hence, for example, the constant tension between network operators and phone manufacturers.

Some elements of this tension are healthy. But, just as for the question of technical fragmentation, my judgement is that the balance is considerably too far over to the “compete” side of the spectrum rather than the “cooperate” side.

That’s the topic I was discussing a few months back with Adam Shaw, one of the conference producers from Informa, who was seeking ideas for panels for the “MAPOS ’08” event that will be taking place 9-10 December in London. Out of this conversation, Adam came up with the provocative panel title, “Can’t We All Just Get Along? Cooperation between operators and suppliers”. Here’s hoping for a constructive dialog!

19 November 2008

New mobile OSes mean development nightmares

Filed under: collaboration, fragmentation, Future of Mobile, innovation — David Wood @ 11:30 pm

Over on TechRadar, Dan Grabham has commented on one of the themes from Monday’s Future of Mobile event in the Great Hall in High Street Kensington, London:

The increase in mobile platforms caused by the advent of the Apple iPhone and Google’s Android are posing greater challenges for those who develop for mobile. That was one of the main underlying themes of this week’s Future of Mobile conference in London.

Tom Hume, Managing Director of developer Future Platforms, picked up on this theme, saying that from a development point of view things were more fragmented. “It’s clear that it’s an issue for the industry. I think it’s actually got worse in the last year or so.”

Indeed, many of the panellists representing the major OS vendors said that they expected some kind of consolidation over the coming years as competition in the mobile market becomes ever fiercer.

The theme of collaboration vs. competition was one that I covered in my own opening remarks on this panel. Before the conference, the panel chairman, Simon Rockman of Sony Ericsson, had asked the panellists to prepare a five minute intro. I’ll end this posting with a copy of what I prepared.

Before that, however, I have another comment on the event. One thing that struck me was the candid comments from many of the participants about the dreadful user experience that mobile phones deliver. So the mobile industry has no grounds for feeling pleased with itself! This was particularly emphasised during the rapid-fire “bloggers 6×6 panel”, which you can read more about in Helen Keegan’s posting – provocatively entitled “There is no future of mobile”. By the way, Helen was one of the more restrained of that panel!

So, back to my own remarks – where I intended to emphasise that, indeed, we face hard problems within our industry, and need new solutions:

This conference is called the Future of Mobile – not the Present Day of Mobile – so what I want to talk about is developments in mobile operating systems that will allow the mobile devices and mobile services of, say, 5 years’ time – 2013 – to live up to their full potential.

I believe that the mobile phones of 2013 will make even the most wonderful phones of today look, in comparison, jaded, weak, slow, and clunky. It’s my expectation that the phones used at that time, not just by technology enthusiasts and early adopters, but also by mainstream consumers, will be very considerably more powerful, more functional, more enchanting, more useful, more valuable, and more captivating than today’s smartphones.

To get there is going to require a huge amount of sophisticated and powerful software to be developed. That’s an enormous task. To get there, I offer you three contrasts.

The first contrast is between cooperation and competition.

The press often tries to portray some kind of monster, dramatic battle of mobile operating systems. In this battle, the people sitting around this table are fierce competitors. It’s the kind of thing that might sell newspapers. But rather than competition, I’m more interested in collaboration. The problems that have to be solved, to create the best possible mobile phone experiences of the next few years, will require cooperation between the people in the companies and organisations represented around this table – as well as with people in those companies and organisations that don’t have seats here at this moment, but which also play in our field. Instead of all of us working at odds with each other, spreading our energies thinly, creating incomplete semi-satisfactory solutions that clash with each other, it would be far better for us to pool more of our energies and ideas.

I’m not saying that all competition should be stopped – far from it. An element of competition is vital, to prevent a market from becoming stale. But we’ve got too much of it just now. We’ve got too many operating systems that are competing with each other, and we’ve got different companies throughout the value chain competing with each other too strongly.

Where the industry needs to get to is around 3 or 4 major mobile operating systems – whereas today the number is somewhere closer to 20 – or closer to 200, if you count all the variants and value-chain complications. It’s a fragmentation nightmare, and a huge waste of effort.

As the industry consolidates over the next few years, I have no doubt that Symbian OS will be one of the small number of winning platforms. That brings me to my second contrast – the contrast between old and new – between past successes and future successes.

Last year, Symbian was the third most profitable software company in the UK. We earned licensing revenues of over 300 million dollars. We’ve been generating substantial cash for our owners. We’re in that situation because of having already shipped one quarter of a billion mobile phones running our software. There are at present some 159 different phone models, from 7 manufacturers, shipping on over 250 major operator networks worldwide. That’s our past success. It grows out of technology that’s been under development for 14 years, with parts of the design dating back 20 years.

But of course, past success is no guarantee of future success. I sometimes hear it said that Symbian OS is old, and therefore unsuited to the future. My reply is that many parts of Symbian OS are new. We keep on substantially improving it and refactoring it.

For example, we introduced a new kernel with enhanced real-time capabilities in version 8.1b. We introduced a substantial new platform security architecture in v9.0. More recently, there’s a new database architecture, a new Bluetooth implementation, and new architectures for IP networking and multi-surface graphics. We’re also on the point of releasing an important new library of so-called “high level” programming interfaces, to simplify developers’ experience with parts of the Symbian OS structure that sometimes pose difficulty – like text descriptors, active objects, and two-phase object construction and cleanup. So there’s plenty of innovation.
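For readers who haven’t written native Symbian C++ code, here is a brief sketch of the classic two-phase construction and cleanup-stack idiom, which is exactly the kind of boilerplate the new “high level” libraries aim to take off developers’ shoulders. The sketch shows the long-established native pattern, not the new APIs themselves.

```cpp
// The classic two-phase construction and cleanup idiom in native Symbian C++.
// (Illustration of the long-established pattern only, not the new libraries.)
#include <e32base.h>   // CBase, CleanupStack, descriptors

class CExampleEngine : public CBase
    {
public:
    static CExampleEngine* NewL(const TDesC& aName)
        {
        CExampleEngine* self = new (ELeave) CExampleEngine(); // phase 1: constructor must not leave
        CleanupStack::PushL(self);   // protect the half-built object against leaves
        self->ConstructL(aName);     // phase 2: anything that can leave goes here
        CleanupStack::Pop(self);
        return self;
        }
    ~CExampleEngine()
        {
        delete iName;
        }
private:
    CExampleEngine() {}              // deliberately trivial first-phase constructor
    void ConstructL(const TDesC& aName)
        {
        iName = aName.AllocL();      // heap-allocated copy of the descriptor; may leave
        }
private:
    HBufC* iName;
    };
```

The NewL / ConstructL split exists because a constructor must not leave; the cleanup stack guards the half-built object if the second phase fails. It works well, but it is a lot of ceremony for newcomers, which is why higher-level wrappers are welcome.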

The really big news is that the pace of innovation is about to increase markedly – for three reasons, all tied up with the forthcoming creation of the Symbian Foundation:

  1. The first reason is a deeper and more effective collaboration between the engineering teams in Symbian and S60. This change is happening because of the acquisition of Symbian by Nokia. By working together more closely, innovations will reach the market more quickly.
  2. The second reason is the unification of UI systems in the Symbian space. Before, there were three UI systems – MOAP in Japan, UIQ, and S60. Now, given the increased flexibility of the latest S60 versions, the whole Symbian ecosystem will standardise on S60.
  3. The third reason is the transition of the Symbian platform – consisting of Symbian OS together with the S60 UI framework and applications – into open source. By adopting the best principles of open source, Symbian expects to attract many more developers than before to participate in reviewing, improving, and creating new Symbian platform code. So there will be more innovation than before.

This brings me to the third of the three contrasts: openness vs. maturity.

Uniquely, the Symbian platform has a stable, well-tested, battle-hardened software base and software discipline, that copes well with the hard, hard task of large-scale software integration, handling input from many diverse and powerful customers.

Because of that, we’ll be able to cope with the flood of innovation that open source will send our way. That flood will lead to great progress for us, whereas for some other software systems, it will probably lead to chaos and fragmentation.

In summary, I see the Symbian platform as being not just one of several winners in the mobile operating system space, but actually the leading winner – and being the most widely used software platform on the planet, shipping in literally billions of great mobile devices. We’ll get there, because we’ll be at the heart of a huge community of impassioned and creative developers – the most vibrant developer ecosystem on the planet. Although the first ten years of Symbian’s history have seen many successes, the next ten years will be dramatically better.

Footnote: For other coverage of this event, see e.g. Tom Hume, Andrew Grill, Vero Pepperrell, Jemima Kiss, Dale Zak, and a very interesting Twitter channel (note to self: it’s time for me to stop resisting Twitter…)

31 October 2008

Watching Google watching the world

Filed under: books, collaboration, Google, infallibility, mobile data — David Wood @ 6:07 pm

If there were a prize for the best presentation at this week’s Informa “Handsets USA” forum in San Diego, it would have to go to Sumit Agarwal, Product Manager for Mobile from Google. Although there were several other very good talks there, Sumit’s was in a class of its own.

In the first place, Sumit had the chutzpah to run his slides directly on a mobile device – an iPhone – with a camera relaying the contents of the mobile screen to the video projector. Second, the presentation included a number of real-time demos – which worked well, and even the ways in which they failed to work perfectly became a source of more insight for the audience (I’ll come back to this point later). The demos were spread among a number of different mobile devices: an Android G1, the iPhone, and a BlackBerry Bold. (Sumit rather cheekily said that the main reason he carried the Bold was for circumstances in which the G1 and the iPhone run out of battery power.)

One reason the talk oozed authority was because Sumit could dig into actual statistics, collected on Google’s servers.

For example, the presentation included a graph showing the rate of Google search inquiries from mobile phones on different (anonymised) North American network operators. In September 2007, one of the lines started showing an astonishing rhythm, with rapid fluctuations in which the rate of mobile search inquiries jumped up sevenfold – before dropping down again a few days later. The pattern kept repeating, on a weekly basis. Google investigated, and found that the network operator in question had started an experiment with “free data weekends”: data usage would be free of charge on Saturday and Sunday. As Sumit pointed out:

  • The sharp usage spikes showed the latent demand of mobile users for carrying out search enquiries – a demand that was previously being inhibited by fear of high data charges;
  • Even more interesting, this line on the graph, whilst continuing to fluctuate drastically at weekends, also showed a gradual overall upward curve, finishing up with data usage significantly higher than the national average, even away from weekends;
  • The takeaway message here is that “users get hooked on mobile data”: once they discover how valuable it can be to them, they use it more and more – provided (and this is the kicker) the user experience is good enough.

Another interesting statistic involved the requests received by Google’s servers for new “map tiles” to provide to Google maps applications. Sumit said that, every weekend, the demand from mobile devices for map tiles reaches the same level as the demand from fixed devices. Again, this is evidence of strong user interest for mobile services.
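For anyone unfamiliar with the term, a “map tile” is a small square image of the map at a given zoom level; the client works out which tiles it needs from the latitude, longitude and zoom it is displaying, and requests just those. The sketch below uses the standard Web Mercator “slippy map” tile arithmetic; I’m assuming Google’s servers address tiles in essentially this way, purely to illustrate what each tile request represents.

```cpp
// Converting latitude, longitude and zoom level into the x/y index of a
// 256-pixel Web Mercator tile - the standard "slippy map" scheme. Google's
// exact server-side addressing is an assumption here; this shows the idea.
#include <cmath>
#include <cstdio>

struct TileCoord { int x; int y; int zoom; };

TileCoord TileForLocation(double latDeg, double lonDeg, int zoom) {
    const double kPi = 3.14159265358979323846;
    const double latRad = latDeg * kPi / 180.0;
    const double n = std::pow(2.0, zoom);   // number of tiles along each axis at this zoom
    const int x = static_cast<int>((lonDeg + 180.0) / 360.0 * n);
    const int y = static_cast<int>(
        (1.0 - std::log(std::tan(latRad) + 1.0 / std::cos(latRad)) / kPi) / 2.0 * n);
    TileCoord result = { x, y, zoom };
    return result;
}

int main() {
    // Central London at zoom level 12.
    const TileCoord t = TileForLocation(51.5007, -0.1246, 12);
    std::printf("tile x=%d y=%d zoom=%d\n", t.x, t.y, t.zoom);
    return 0;
}
```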

As regards the types of textual search queries received: Google classifies all incoming search queries into categories such as sports, entertainment, news, and so on. Sumit showed spider graphs for the breakdown of search queries into categories. The surprising thing is that the spider graph for mobile-originated search enquiries had a very similar general shape to that for search enquiries from fixed devices. In other words, people seem to want to search for the same sorts of things – in the same proportions – regardless of whether they are using fixed devices or mobile ones.

It is by monitoring changes in server traffic that Google can determine the impacts of various changes in their applications – and decide where to prioritise their next efforts. For example, when the My Location feature was added to Google’s Mobile Maps application, it had a “stunning impact” on the usage of mobile maps. Apparently (though this could not be known in advance), many users are fascinated to track how their location updates in near real-time on map displays on their mobile devices. And this leads to greater usage of the Google Maps product.

Interspersed among the demos and the statistics, Sumit described elements of Google’s underlying philosophy for success with mobile services:

  • “Ignore the limitations of today”: don’t allow your thinking to be constrained by the shortcomings of present-day devices and networks;
  • “Navigate to where the puck will be”: have the confidence to prepare services that will flourish once the devices and networks improve;
  • “Arm users with the data to make decisions”: instead of limiting what users are allowed to do on their devices, provide them with information about what various applications and services will do, and leave it to the users to decide whether they will install and use individual applications;
  • “Dare to delight” the user, rather than always seeking to ensure order and predictability;
  • “Accept downside”, when experiments occasionally go wrong.

As an example of this last point, there was an amusing moment during one of the (many) demos in the presentation, when two music-playing applications each played music at the same time. Sumit had just finished demoing the remarkable TuneWiki, which allows users to collaborate in supplying, sharing, and correcting lyrics to songs, for a Karaoke-like mobile experience without users having to endure the pain of incorrect lyrics. He next showed an application that searched on YouTube for videos matching a particular piece of music. But TuneWiki continued to play music through the phone speakers whilst the second application was also playing music. Result: audio overlap. Sumit commented that an alternative design philosophy by Google might have ensured that no such audio overlap could occur. But such a constraint would surely impede the wider flow of innovation in mobile applications.

And there was a piece of advice for application developers: “emphasise simplicity”. Sumit demoed the “AroundMe” application by TweakerSoft, as an illustration of how a single simple idea, well executed, can result in large numbers of downloads. (Sumit commented: “this app was written by a single developer … who has probably quintupled his annual income by doing this”.)

Google clearly have a lot going for them. Part of their success is no doubt down to the technical brilliance of their systems. The “emphasise simplicity” message has helped a great deal too. Perhaps their greatest asset is how they have been able to leverage all the statistics their enormous server farms have collected – not just statistics about links between websites, but also statistics about changes in user activity. By watching the world so closely, and by organising and analysing the information they find in it, Google are perhaps in a unique position to identify and improve new mobile services.

Just as Google has benefited from watching the world, the rest of the industry can benefit from watching Google. Happily, there’s already a great deal of information available about how Google operates. Anyone concerned about whether Google might eat their lunch can become considerably wiser by taking the time to read some of the fascinating books that have been written about both the successes and (yes) the failures of this company.

I finished reading the Stross book a couple of weeks ago. I found it an engrossing, easy-to-read account of many up-to-date developments at Google. It confirms that Google remains an utterly intriguing company:

  • For example, there’s a thought-provoking discussion near the beginning of the book about Google, Facebook, and open vs. closed;
  • I also valued the recurring theme of “algorithm-driven search” vs. “human-improved search”.

It was particularly interesting to read what Stross had to say about some of Google’s failures – e.g. Google Answers and Google Video (and arguably even YouTube) – as a balance to its better-known string of successes. It’s a reminder that no company is infallible.

Throughout most of the first ten years of Symbian’s history, commentators kept suggesting that it was only a matter of time before the mightiest software company of that era – Microsoft – would sweep past Symbian in the mobile phone operating system space (and, indeed, would succeed – perhaps at the third attempt – in every area they targeted). Nowadays, commentators often suggest the same thing about Google’s Android solution.

Let’s wait and see. And in any case, I personally prefer to explore the collaboration route rather than the head-on compete route. Just as Microsoft’s services increasingly run well on Symbian phones, Google’s services can likewise flourish there.
