dw2

2 February 2010

Cutting edge computing science research for the real world

Filed under: architecture, computer science, Google, universities — David Wood @ 11:54 pm

It is an amazing time in computer science.

This is a field that, while it is about 50 years old, has more opportunity today than it has ever had, by a large factor.

These were among the opening statements made by Google’s VP of Research & Special Initiatives, Alfred Z. Spector, in a colloquium a couple of weeks ago to the Computer Science & Engineering faculty of the University of Washington.  A video of the presentation is available from the University of Washington CSE website.

I mentioned this video at the tail end of a previous blogpost, “In praise of hybrid AI”.  The video is full of interesting comments about the direction of computer science.

As context, Spector mentioned “four application areas in flux today“:

  • publishing, education, healthcare, and government.

He also mentioned three “systems areas evolving“:

  • ubiquitous high performance networking, distributed computing, and new end-user devices.

This provided a prelude to “three truly big results brewing“:

  1. Totally transparent processing
  2. Ideal distributed computing
  3. Hybrid, not Artificial Intelligence

It’s worth highlighting some points about each of these “big results”.  In all cases, Google seek to follow a quantitative approach, looking at large sets of data, and checking results as systems are incrementally changed.  As Spector said, “more data is better…”

1. Totally transparent processing

Spector spelt out a vision spanning four dimensions: processing should be “effectively transparent”:

  • Across all types of end-user access devices,
  • across all human languages (both formal and informal),
  • across all the modes of information (eg text, images, audio, video, sensor data, maps, timelines),
  • and across every body of knowledge (both online and offline).

In this vision:

  • There should be “no dependence or occlusions because something has got in the way” or is in the wrong format;
  • There should be “fluidity across all these forms”.

Some subsets of this grand vision include solving “voice to text”, “image recognition”, “find similar images”, and “language translation”.  Spector claimed that progress was being made across many of these sub-problems.
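As a toy illustration of the “find similar images” sub-problem – a sketch of my own in Python, not a description of Google’s actual methods – even a crude “average hash” can group near-duplicate images.  Production systems rely on far more sophisticated learned features:

    # Minimal "find similar images" sketch using an average hash.
    # Illustrative only -- not how Google actually does it.
    from PIL import Image  # requires the Pillow library

    def average_hash(path, size=8):
        """Shrink to size x size greyscale; each bit = pixel brighter than mean."""
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        mean = sum(pixels) / len(pixels)
        return [1 if p > mean else 0 for p in pixels]

    def similarity(path_a, path_b):
        """1.0 means identical hashes; near-duplicates typically score > 0.9."""
        ha, hb = average_hash(path_a), average_hash(path_b)
        return sum(1 for a, b in zip(ha, hb) if a == b) / len(ha)

    # Example usage (hypothetical filenames):
    # print(similarity("holiday.jpg", "holiday_resized.jpg"))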

2. Ideal distributed computing

Spector pointed out that

Distributed computing is 30 years old, but was not very deeply understood until recently;

There had been only limited understanding of (truly) large-scale, open, integrated distributed systems.

Particular aspects of distributed systems that had not been deeply understood included:

  • Requirements for systems in which the application needs (and APIs) are not known in advance;
  • Systems with 10^6 or even 10^7 processes, with consequent enormous complexity.

Spector claimed that – as in the case of transparent processing – “there has been lots of incremental progress done with distributed systems, picking away at problem areas”.

Improvements that can be expected for huge distributed systems of computers, arising from computer science research, include:

  • Online system optimisation;
  • Data checking – verifying consistency and validating data/config files (see the sketch after this list);
  • Dynamic repair – eg find the closest feasible solution after an incident (computer broke down);
  • Better efficiency in energy usage of these systems;
  • Improvements in managing security and privacy.
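To make the “data checking” bullet concrete, here is a minimal sketch of my own – assuming, purely for illustration, JSON config files and byte-identical replicas – of how a fleet-management tool might validate configs and verify cross-replica consistency:

    # Sketch of config validation plus cross-replica consistency checking.
    # Illustrative only; real systems use richer schemas, quorums, etc.
    import hashlib
    import json

    REQUIRED_KEYS = {"service_name", "port", "max_connections"}  # assumed schema

    def validate_config(text):
        """Parse a JSON config and check it against a minimal schema."""
        cfg = json.loads(text)                  # raises on malformed JSON
        missing = REQUIRED_KEYS - cfg.keys()
        if missing:
            raise ValueError(f"config missing keys: {missing}")
        return cfg

    def replicas_consistent(config_texts):
        """True if every replica reports an identical config checksum."""
        digests = {hashlib.sha256(t.encode()).hexdigest() for t in config_texts}
        return len(digests) == 1

    good = '{"service_name": "search", "port": 8080, "max_connections": 100}'
    validate_config(good)
    print(replicas_consistent([good, good, good]))   # True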

3. Hybrid, not Artificial Intelligence

Hybrid intelligence is like an extension of distributed computing: people become part of the system that works out the answers.

Spector said that Google’s approach was:

To see if some problem can be solved by people and computers working together.

As a familiar example, Search doesn’t try to offer the user only the one best result.  It provides a set of results, and relies on the user picking answers from the list generated by the computer.

Hybrid intelligence can be contrasted with AI (artificial intelligence):

  • AI aims at creating computers as capable as people, often in very broad problem domains.  While progress has been made, this has turned out to be very challenging;
  • Instead, it has proven more useful for computers to extend the capabilities of people, rather than working in isolation, and to focus on more specific problem areas.

Computer systems can learn from feedback from users, with powerful virtuous circles.  Spector said that aggregation of user responses has proven extremely valuable in learning, such as:

  • feedback in ranking of results, or in prioritising spelling correction options (a toy sketch follows this list);
  • semi-supervised image content analysis / speech recognition / etc.
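As a toy illustration of the first of those bullets – my own sketch, in no way Google’s actual ranking algorithm – aggregated clicks can be folded back into each result’s score, closing the virtuous circle:

    # Toy "virtuous circle": aggregate user clicks to adjust a ranking.
    # The scores and the feedback weight are invented for illustration.
    from collections import defaultdict

    clicks = defaultdict(int)     # aggregated user feedback per document
    base_score = {"doc_a": 0.70, "doc_b": 0.68, "doc_c": 0.50}

    def record_click(doc_id):
        clicks[doc_id] += 1

    def rank(results, weight=0.01):
        """Blend each document's static score with aggregated click feedback."""
        return sorted(results,
                      key=lambda d: base_score[d] + weight * clicks[d],
                      reverse=True)

    for _ in range(5):            # users repeatedly prefer the second result
        record_click("doc_b")
    print(rank(["doc_a", "doc_b", "doc_c"]))   # doc_b now ranks first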

(This idea has evolved over time, and was previously known as “The combination hypothesis”: computers would become smarter if different methods of learning can be combined.  See for example the 2003 article “IBM aims to get smart about AI” from a time when Spector worked at IBM.  It’s good to see this idea bearing more and more fruit.)

Selected questions and answers

A couple of the questions raised by the audience at the end of the lecture were particularly interesting.

One questioner asked if Google’s guidelines for research projects specified any “no-go areas” that should be avoided.  Spector answered:

No one wants a creepy computer.  So the rule is … “don’t be creepy”.

(Which is an unusual twist on “don’t be evil”!)

Spelling this out in more detail:

  • Google aim to apply extremely neutral algorithms to ranking and crawling;
  • They want systems that are very responsive to users’ needs, without being in any way creepy;
  • Views on “what is creepy” may change over time (and may be different in different parts of the world).

A second audience member asked if there are risks to pursuing a quantitative, evolutionary approach to computer science problems.  Spector answered:

  • Yes, the research might get stuck in a local maximum (the toy sketch below shows how);
  • So you can’t do everything “strictly by the numbers”.  But having the numbers available is a great guide.
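To see how a purely by-the-numbers process can stall, here is a toy hill-climb of my own (not anything Spector presented) on a function with two peaks.  Greedy incremental improvement halts on the lower peak:

    # Toy illustration of getting stuck in a local maximum.
    # f has peaks at x=2 (height 4) and x=8 (height 9).
    def f(x):
        return max(4 - (x - 2) ** 2, 9 - (x - 8) ** 2)

    def hill_climb(x, step=0.1):
        """Move uphill in small increments; stop when no neighbour is better."""
        while True:
            best = max((x - step, x, x + step), key=f)
            if best == x:
                return x
            x = best

    print(hill_climb(1.0))   # ~2.0 -- the lower peak; the global maximum at 8 is missed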

Ongoing research

As I viewed this video, part of my brain told me that perhaps I should return to an academic life, in the midst of a computer science faculty somewhere in the world.

I share Spector’s conclusion:

It’s a time of unprecedented diversity and fertility in computer science – and amazing challenges abound;

The results from computer science should continue to make the world a better place.

Spector pointed out that key research challenges are published on the Google Research Blog.  Examples he listed included:

  • increasingly fluid partnership between people and computation;
  • fundamental changes in the methods of science;
  • rethinking the database;
  • CS+X, for all X (how Computer Science, CS, can assist and even transform other fields of study, X);
  • computing with ultra-low power (eg just ambient light as a power source).

Stop press: Google’s Focused Research Awards

Coincidentally, I see that Google have today made a new announcement about their support for research in specific areas of computer science, at a small number of universities worldwide.  The four areas of research are:

  • Machine Learning
  • Use of mobile phones as data collection devices for public health and environment monitoring
  • Energy efficiency in computing
  • Privacy.

It looks like “energy efficiency in computing” is receiving the largest amount of funding.  I think that’s probably the right choice!

1 February 2010

On the undue adulation for ‘You are not a gadget’

Filed under: books, collaboration, Open Source — David Wood @ 12:46 pm

Perhaps the most disturbing thing about Jaron Lanier’s new book “You are not a gadget: a manifesto” is the undue adulation it has received.

For example, here’s what eminent theoretical physicist Lee Smolin says about the book (on its back cover):

Jaron Lanier’s long-awaited book is fabulous – I couldn’t put it down and shouted out Yes! Yes! on many pages.

Smolin goes on:

Lanier is a rare voice of sanity in the debate about the relationship between computers and we human beings.  He convincingly shows us that the idea of digital computers having human-like intelligence is a fantasy.

However, when I read it, far from shouting out Yes! Yes! on many pages, the thoughts that repeatedly came to my mind were: No! No! What a misunderstanding! What a ridiculous straw man! How poor! How misleading!

The titles of reviews of Lanier’s book on Amazon.com show lots more adulation:

  • A brilliant work of Pragmatic “Techno-Philosophy” (a five-star review)
  • Thought provoking and worthy of your time (ditto)
  • One of the best books in a long while (ditto)
  • A tribute to humanity (ditto)

That last title indicates what is probably going on.  Many people feel uneasy that “humanity” is seemingly being stretched, trampled, lost, and reduced, by current changes in our society – including the migration of so much culture online, and the increasing ubiquity of silicon brains.  So they are ready to clutch at straws, with the hope of somehow reaffirming a more natural state of humanity.

But this is a bad straw to clutch at.

Interestingly, even one of the five-star reviews remarks that there are significant mistakes in Lanier’s account:

While my review remains positive, I want to point out one major problem in the book. The account of events on p. 125-126 is full of misinformation and errors. The LISP machine in retrospect was a horrible idea. It died because the RISC and MIPS CPU efforts on the west coast were a much better idea. Putting high-level software (LISP) into electronics was a bad idea.

Stallman’s disfunctional relationship with Symbolics is badly misrepresented. Stallman’s licence was not the first or only free software licence…

My own list of the misinformation and errors in this book would occupy many pages.  Here’s just a snippet:

1. The iPhone and UNIX

Initially, I liked Lanier’s account of the problems caused by lock-in.  But then (page 12) he complains, incredibly, that some UI problems on the iPhone are due to its operating system having had to retain features from UNIX:

I have an iPhone in my pocket, and sure enough, the thing has what is essentially UNIX in it.  An unnerving element of this gadget is that it is haunted by a weird set of unpredictable user interface delays.  One’s mind waits for the response to the press of a virtual button, but it doesn’t come for a while.  An odd tension builds during that moment, and easy intuition is replaced by nervousness.  It is the ghost of UNIX, still refusing to accommodate the rhythms of my body and my mind, after all these years.

As someone who has been involved for more than 20 years with platforms that enable UI experience, I can state categorically that delays in UI can be addressed at many levels.  It is absurd to suggest that a hangover from UNIX days means that all UIs on mobile devices (such as the iPhone) are bound to suffer unnerving delays.
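For instance – a minimal sketch of my own, nothing to do with Apple’s actual code – one of those levels is simply keeping slow work off the UI thread, whatever operating system lies underneath:

    # Keep the "UI thread" responsive by pushing slow work to a background
    # thread.  The principle applies on any OS kernel, UNIX-derived or not.
    import threading
    import time

    def slow_operation(callback):
        """Simulate a slow task (e.g. a network fetch) off the UI thread."""
        def worker():
            time.sleep(2)                 # stand-in for the real work
            callback("result ready")
        threading.Thread(target=worker, daemon=True).start()

    def on_button_press():
        slow_operation(lambda msg: print(msg))
        print("button press acknowledged immediately")   # no UI stall

    on_button_press()
    time.sleep(2.5)   # keep this demo process alive for the callback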

2. Obsession with anonymous posters

Time and again, Lanier laments that people are encouraged to post anonymously to the Internet.  In his telling, because people become anonymous, they are de-humanised.

My reaction:

  • It is useful that the opportunity for anonymous posting exists;
  • However, in the vast bulk of the discussions in which I participate, most people sign their names, and links are available to their profiles;
  • Rather than a sea of anonymous interactions, there’s a sea of individuals ready to become better known, each with their own fascinating quirks and strengths.

3. Lanier’s diatribe against auto-layout features in Microsoft Word

Lanier admits (page 27) that he is “all for the automation of petty tasks” by software.  But (like most of us) he’s had the experience where Microsoft Word makes a wrong decision about an automation it presumes we want to do:

You might have had the experience of having Microsoft Word suddenly determine, at the wrong moment, that you are creating an indented outline…

This type of design feature is nonsense, since you end up having to do more work than you would otherwise in order to manipulate the software’s expectations of you.

Most people would say this just shows that there are still bugs in the (often useful) auto-layout feature.  Not so Lanier.  Instead, incredibly, he imputes a sinister motivation onto the software’s designers:

The real [underlying] function of the feature isn’t to make life easier for people.  Instead, it promotes a new philosophy: that the computer is evolving into a life-form that can understand people better than people can understand themselves.

Lanier insists there’s a dichotomy: either a software designer is trying to make tasks easier for users, or the software designer believes that computers will, one day, be smarter than humans.  Why would the latter view (if held) mean the former cannot also be true? And why is “this type of design feature” nonsense?

4. Analysis of Alan Turing

Lanier’s analysis (and psycho-analysis) of AI pioneer Alan Turing is particularly cringe-worthy, and was the point where, for me, the book lost all credibility.

For example, Lanier tries to score points against Turing by commenting (page 31) that:

Turing’s 1950 paper on the test includes this extraordinary passage: “In attempting to construct such machines we should not be irreverently usurping His power of creating souls, any more than we are in the procreation of children: rather we are, in either case, instruments of His will providing mansions for the souls that He creates”.

However, the context (Turing’s paper is available online here) shows that Turing is, in the quoted passage, in the midst of engaging with a number of different objections to his main hypothesis.  Each time, he seeks to enter into the mindset of people who might oppose his thinking.  This extract is from the section “The Theological Objection”.  Immediately after the passage highlighted by Lanier, Turing’s paper goes on to comment:

However, this is mere speculation. I am not very impressed with theological arguments whatever they may be used to support. Such arguments have often been found unsatisfactory in the past. In the time of Galileo it was argued that the texts, “And the sun stood still . . . and hasted not to go down about a whole day” (Joshua x. 13) and “He laid the foundations of the earth, that it should not move at any time” (Psalm cv. 5) were an adequate refutation of the Copernican theory. With our present knowledge such an argument appears futile. When that knowledge was not available it made a quite different impression.

Given a choice between the analytic powers of Turing and those of Lanier, I would pick Turing very nearly 100% of the time.

5. Clay Shirky and the latent cognitive surplus

Lanier’s treatment of Clay Shirky’s ideas is equally deplorable – sleight of hand again distorts the original message.  It starts off fine, with Lanier quoting an April 2008 article by Shirky:

And this is the other thing about the size of the cognitive surplus we’re talking about. It’s so large that even a small change could have huge ramifications. Let’s say that everything stays 99 percent the same, that people watch 99 percent as much television as they used to, but 1 percent of that is carved out for producing and for sharing. The Internet-connected population watches roughly a trillion hours of TV a year. That’s about five times the size of the annual U.S. consumption. One per cent of that is 100 Wikipedia projects per year worth of participation.

I think that’s going to be a big deal. Don’t you?

In Shirky’s view, there’s lots of time available for people to apply to creative tasks, if only they would spend less time watching sitcoms on TV.  Lanier pokes nauseating fun at this suggestion, but only (page 49) by means of changing the time available into “seconds of salvaged” time.  (Who mentioned seconds?  Surely Shirky is talking about people applying themselves for longer than seconds at a time.)  Lanier labours his point with a ridiculous hyperbole:

How many seconds of salvaged erstwhile television time would need to be harnessed to replicate the achievements of, say, Albert Einstein?  It seems to me that even if we could network all the potential aliens in the galaxy – quadrillions of them, perhaps – and get each of them to contribute some seconds to a physics wiki, we would not replicate the achievements of even one mediocre physicist, much less a great one.
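For what it’s worth, Shirky’s own arithmetic checks out, given his estimate (made elsewhere) that Wikipedia embodies roughly 100 million hours of human thought:

    # Back-of-envelope check of Shirky's figures.
    tv_hours_per_year = 1e12    # "roughly a trillion hours" of TV watched
    carved_out = 0.01           # the hypothetical 1% shifted to creating/sharing
    wikipedia_hours = 1e8       # Shirky's estimate: ~100m hours built Wikipedia

    print(tv_hours_per_year * carved_out / wikipedia_hours)
    # 100.0 -- i.e. "100 Wikipedia projects per year"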

6. Friends and Facebook friends

Lanier really seems to believe (page 53) that people who use Facebook cannot distinguish between “Facebook friends” and “real world friends”.  He should talk more often to people who use Facebook, to see if they really are so “reduced” as he implies.

7. Lack of appreciation for security researchers

Lanier also rails (page 65) against people who investigate potential security vulnerabilities in software systems.

It seems he would prefer us all to live in ignorance about these potential vulnerabilities.

8. The Long Tail and individuals

Lanier cannot resist an ill-warranted attack on the notion of the long tail.  Describing a proposal of his own for how authors and artists could be rewarded for Internet usage of their material, Lanier makes the bizarre comment (page 101):

Note that this is a very different idea from the long tail, because it rewards individuals rather than cloud owners

Where did the assumption come from that writers who describe the Long Tail are only interested in rewarding “cloud owners” such as Amazon and Google?

9. All generations from Generation X onwards are somnolent

Lanier bemoans the blandness of the youth (page 128):

At the time that the web was born, in the early 1990s, a popular trope was that a new generation of teenagers, raised in the conservative Reagan years, had turned out exceptionally bland.  The members of “Generation X” were characterised as blank and inert.  The anthropologist Steve Barnett compared them to pattern exhaustion, a phenomenon in which a culture runs out of variations of traditional designs in their pottery and becomes less creative.

A common rationalisation in the fledgling world of digital culture back then was that we were entering a transitional lull before a creative storm – or were already in the eye of one.  But the sad truth is that we were not passing through a momentary lull before a storm.  We had instead entered a persistent somnolence, and I have come to believe that we will only escape it when we kill the hive.

My experience is at radical odds with this.  Through my encounters with year after year of graduate recruit intakes at Symbian, I found many examples of youth full of passion, verve, and creativity.

The cloud which Lanier fears so much doesn’t stifle curiosity and creativity, but provides many means for people to develop a fuller human potential.

10. Open Source and creativity

Lanier complains that Open Source – and, more generally, Web 2.0 collaborative processes – has failed to produce anything of real value.  All it can do, he says (page 122 – and repeated numerous times elsewhere), is to imitate: Linux is a copy of UNIX and Wikipedia is a copy of Encyclopaedia Britannica.

But what about the UI creativity of Firefox (an open source web browser, that introduced new features ahead of the Microsoft alternative)?

How about the creativity of many of the applications on mobile devices, such as the iPhone, that demonstrate mashups of information from diverse sources (including location-based information)?

Even to say that Wikipedia is derivative from Britannica misses the point, of course, that material in Wikipedia is updated so quickly.  Yes, there’s occasional unreliability, but people soon learn how to cross-check it.

It goes on…

For each point I’ve picked out above, there are many others I could have shared as well.

Lanier is speaking this evening (Monday 1st February) at London’s RSA.  The audience is usually respectful, but can ask searching questions.  This evening, if the lecture follows the same lines as the book, I expect to see more objections than usual.  However, I also expect there will be some in the audience who jump at the chance to defend humanity from the perceived incursions from computers and AI.

For a wider set of objections to Lanier’s ideas – generally expressed much more politely than my comments above – see this compendium from Edge.

My own bottom line view is that technology will significantly enhance human experience and creativity, rather than detract from it.

To be clear, I accept that there are good criticisms that can be made of the excesses of Web 2.0, open source, and so on.  For example, the second half of Nick Carr’s book “The Big Switch: Rewiring the World, from Edison to Google” is a good start.  (Andrew Orlowski produced an excellent review of Carr’s book, here.)  Lanier’s book is not a good contribution.

31 January 2010

Changing the topic: questions for aspiring political leaders

Filed under: general election, leadership, politics — David Wood @ 1:10 pm

Electioneering will be ramping up, in the UK, over the next few months.

As well as the question of “which politicians are the best choices to be voted into parliament”, there’s a broader question at stake:

  • What criteria should we be using, as an electorate in 2010, to assess aspiring politicians?

Of course, high on the list of criteria comes the matter of economic competence.  Which politicians are the most likely to be able to oversee an economic recovery?

Similarly, there’s the question of general trustworthiness: is this a person who can, on the whole, be trusted to take hard decisions, and to follow through responsibly on the results of those decisions?

However, alongside those traditional criteria, I’d like to try to inject some additional questions into the public debate.

My hope with these questions is to identify politicians who have responsible and well-informed techno-progressive views:

  • They understand the tremendous difference that can be made to the well-being of society by swift and thoughtful development and deployment of new technology;
  • They are aware of the drawbacks that new technology can bring, but they are able to assess these drawbacks within an overall positive and constructive framework;
  • They will not allow important questions of technology development to be submerged under lots of other day-to-day debate.

My list of questions is by no means final.  But I’d like to start somewhere.

So here goes.  Here’s my list of ten open questions, which I am preparing to ask whenever the chance arises.  Hopefully the answers that politicians give will indicate whether they have a good understanding of the huge transformative potential of science and technology.

  1. What are the most serious risks of major disasters affecting the UK in the next 20-40 years, and what do you think needs to be done about these risks?
  2. Under what circumstances would you approve of a government minister overruling the advice of an expert committee of scientists about a matter of science (eg whether a particular drug is harmful)?
  3. What’s your view of genetically engineered medicines and foods?
  4. What’s your view of nuclear energy?
  5. Would you approve of research into geo-engineering to counter possible runaway global warming?
  6. What kinds of medical research would you prioritise?
  7. What’s your reaction to the changing population demographics (where there’s an ever greater proportion of older people)?
  8. Which technology sectors do you see as most important for the future of this country?
  9. Do you approve of the way the current patent system interacts with the development of technologically innovative solutions?
  10. Do you think any special attention should be paid to the opinion of religious leaders over matters such as medical research or the application of technology?

Most of the questions have no “right” answers, but there are plenty of “bad” answers that would make me distrustful of anyone who gave them:

  • One set of bad answers is “techno-conservatism” – insisting on lots of caution with any new technology (similar to the people who demanded that a moving motor vehicle should be preceded by a pedestrian carrying a red flag);
  • Another set of bad answers is “techno-utopianism” – praising technology without appreciating its potential drawbacks (but I’m not expecting many aspiring politicians to make that mistake);
  • Finally, I fear answers that would indicate “techno-ignorance” – lack of practical awareness of the issues about new technology (nanotech, synthetic biology, new sources of energy, robotics, AI…).

I’m not expecting that any one party will have politicians who give uniformly good (or uniformly bad) answers to these questions.  The techno-progressive spectrum cuts across traditional party lines.

Are these the right questions?  What questions would you want to add to this list, or subtract from it?

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?“, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet“.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point a finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder, what’s the problem with that?

A 2002 Steven B. Harris article in the Skeptic magazine, “The return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome“, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies, each pursuing an enlightened self-interest, frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers that were programmed to maximise income and profits for their owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance to subsequently push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself), we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase their speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to be able to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach apes calculus, in order to gain approval from apes for how much thrust to apply in a rocket missile system targeting a rapidly approaching earth-threatening meteorite.  The computers may well decide that there’s no time to try to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.
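The underlying logic is ordinary expected-loss arithmetic: a small probability multiplied by a catastrophic cost still yields a figure that demands attention.  With deliberately toy numbers:

    # Expected loss = probability x severity.  The severity scale here is
    # arbitrary and invented; only the comparison between rows matters.
    scenarios = {
        "plane explodes (10% chance)":          (0.10, 1_000_000),
        "human-level AI in 20 yrs (1% chance)": (0.01, 100_000_000),
        "minor bug ships (90% chance)":         (0.90, 10),
    }
    for name, (p, severity) in scenarios.items():
        print(f"{name}: expected loss = {p * severity:,.0f}")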

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation rather than design as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE), with a low-level copy of a human brain.  The idea is sometimes also called “uploads” since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from a human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+, UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above.

29 January 2010

A strategy for mobile app development

Filed under: Agile, applications, consulting, fragmentation, mashup* event, mobile web — David Wood @ 12:15 am

The mashup* event in London’s Canary Wharf district yesterday evening – hosted by Ogilvy – addressed the question:

  • Apps: What’s your strategy?

The meeting was described as follows:

This event will help people in strategic marcomms roles understand the key challenges with respect to apps and identify the building blocks of an app strategy:

  • What are the platform choices?
  • What are the app store choices?
  • What devices should you support? …

mashup* is bringing together several industry experts and specialist developers to help demystify, clarify and explain the issues around the rapidly emerging Apps channel…

The event was sold out, and the room was packed.  I didn’t hear anyone question the need for companies to have a mobile strategy.  Nowadays, that seems to be taken for granted.  The hard bit is to work out what the strategy should be.

One of the speakers, Charles Weir of Penrillian, gave a stark assessment of the difficulty in writing mobile apps:

  • For wide coverage of different devices, several different programming systems need to be used – apart from those (relatively few) cases where the functionality of the app can be delivered via web technology;
  • Rather than the number of different mobile platforms decreasing, the number is actually increasing: fragmentation is getting worse;
  • Examples of relatively new mobile platforms include Samsung’s bada and Nokia’s Maemo.

One mobile strategy is to focus on just one platform – such as the Apple iPhone.  Another strategy is to prioritise web-based delivery – as followed by another speaker, Mark Curtis, for the highly-successful Flirtomatic app.  But these restrictions may be unacceptable to companies who:

  • Want to reach a larger number of users (who use different devices);
  • Want to include richer functionality in their app than can be delivered via standard mobile browsers.

So what are the alternatives?

If anything, the development situation is even more complex than Charles described it:

  • Mobile web browsing suffers from its own fragmentation – with different versions of web browsers being used on different devices, and with different widget extensions;
  • Individual mobile platforms can have multiple UI families;
  • Different versions of a single mobile platform may be incompatible with each other.

The mobile industry is aware of these problems, and is pursuing solutions on multiple fronts – including improved developer tools, improved intermediate platforms, and improved management of compatibility.  For example, there is considerable hope that HTML 5.0 will be widely adopted as a standard.  However, at the same time as solutions are found, new incompatibilities arise too – typically for new areas of mobile functionality.
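One common coping pattern – a sketch of my own, not any particular vendor’s API – is a thin abstraction layer that probes each platform for a capability and degrades gracefully where it is missing:

    # Sketch of a capability-probing abstraction layer over fragmented
    # platforms.  The platform names and capabilities are illustrative.
    class Platform:
        def __init__(self, name, capabilities):
            self.name = name
            self.capabilities = set(capabilities)

        def supports(self, capability):
            return capability in self.capabilities

    def send_location_update(platform, position):
        """Use the richest location API present; degrade gracefully otherwise."""
        if platform.supports("gps"):
            print(f"[{platform.name}] native GPS fix: {position}")
        elif platform.supports("cell_id"):
            print(f"[{platform.name}] coarse cell-ID position: {position}")
        else:
            print(f"[{platform.name}] location unsupported; feature hidden")

    for p in (Platform("iPhone", {"gps"}),
              Platform("bada", {"cell_id"}),
              Platform("legacy", set())):
        send_location_update(p, (51.5, -0.02))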

The suggestion I raised from the floor during the meeting is that companies ought in general to avoid squaring up to this fragmentation.  Instead, they should engage partners who specialise in handling this fragmentation on behalf of clients.  Fragmentation is a hard problem, which won’t disappear any time soon.  Worse, as I said, the nature of the fragmentation changes fairly rapidly.  So let this problem be handled by expert mobile professional services companies.

This can be viewed as a kind of “mobile apps as a service”.

These professional services companies could provide, not only the technical solutions for a number of platforms, but also up-to-date impartial advice on which platforms ought to be prioritised.  Happily, the number of these mobile-savvy professional services companies (both large and small) is continuing to grow.

My suggestion received broad general support from the panel of speakers, but with one important twist.  Being a big fan of agile development, I fully accept this twist:

  • The specification of successful applications is rarely fixed in advance;
  • Instead, it ought to evolve in the light of users’ experience with early releases;
  • The specification will therefore improve as the project unfolds.

This strongly argues against any hands-off outsourcing of mobile app development to the professional services company.  Instead, the professional services company should operate in close conjunction with the domain experts in the original company.  That’s a mobile application strategy that makes good sense.

28 January 2010

The iPad: more for less?

Filed under: Apple, complacency, iPhone, strategy — David Wood @ 12:36 pm

There are plenty of reasons to be critical about the Apple iPad.  If they feel inclined, Apple’s competitors and detractors can lick their lips.

For example, an article in Gizmodo enumerates “8 Things That Suck About the iPad“:

  1. Big, Ugly Bezel
  2. No Multitasking
  3. No Cameras
  4. Touch Keyboard
  5. No HDMI Out
  6. The Name “iPad”
  7. No Flash
  8. Adapters, Adapters, Adapters (“if you want to plug anything into this, such as a digital camera, you need all sorts of ugly adapters. You need an adapter for USB for god’s sake”)
  9. It’s Not Widescreen
  10. Doesn’t Support T-Mobile 3G (“it uses microSIMs that literally no one else uses”)
  11. A Closed App Ecosystem.

(The last three items on the list were added after the article was first published.)

In similar vein, Robert Scoble reported the view of his 16 year old son: “iFail“:

  1. It isn’t compelling enough for a high school student who already has a Macintosh notebook and an iPhone.
  2. It is missing features that a high school student would like, like handwriting recognition to take notes, a camera to take pictures of the board in class (and girls), and the ability to print out documents for class.
  3. He hasn’t seen his textbooks on it yet, so the usecase of replacing heavy textbooks hasn’t shown up yet.
  4. The gaming features aren’t compelling enough for him to give up either the Xbox or the iPhone. The iPhone wins because it fits in his pocket. The Xbox wins because of Xbox live so he can play against his friends (not to mention engaging HD quality and wide variety of titles).
  5. He doesn’t like the file limitations. His friends send him videos that he can’t play in iTunes and the iPad doesn’t support Flash.
  6. It isn’t game changing like the iPhone was.

However, let’s remember that the iPhone initially received a similar swathe of criticisms.  It, too, omitted lots of features that everyone took for granted would need to be part of a successful smartphone: multi-tasking, 3G, MMS, copy-and-paste…

The iPad shouldn’t be judged against existing markets.  Rather than participating in a “red ocean” that’s already swarming with active competitors, it has the chance to define and participate in an empty “blue ocean”.

  • Here, I’m using the language of W. Chan Kim and Renée Mauborgne of INSEAD.
  • Blue ocean products avoid matching existing products feature-for-feature.
  • They miss out some items completely, but, instead, deliver big time on some other points.

It’s similar to how Palm made the first commercially successful pen-based handheld computer.  In comparison to predecessors – like the Casio Zoomer, the General Magic “Magic Cap”, and (ironically) the Apple Newton – the Palm Pilot delivered much less functionality.  But what it did deliver was a delight to use.  (I made a similar point in an earlier blog posting, reviewing the growth of the iPhone market share: “Market share is no comfort“.)

This is the “less is more” philosophy.  It’s a good philosophy!

Around the world, hundreds of millions of people are saying to themselves: the iPad is not for them.  But a different, large group of potential users is likely to be interested.

It’s early days, but it looks as if the iPad will support excellent browsing of many kinds of content – content that previously would be read in physical books, newspapers, and magazines.  That’s a big market.

What’s more, reports suggest that the iPad packs tremendous speed.  For example, John Gruber reports the following on Daring Fireball:

…the iPad is using a new CPU designed and made by Apple itself: the Apple A4. This is a huge deal. I got about 20 blessed minutes of time using the iPad demo units Apple had at the event today, and if I had to sum up the device with one word, that word would be “fast”.

It is fast, fast, fast…

I expected the screen size to be the biggest differentiating factor in how the iPad feels compared to an iPhone, but I think the speed difference is just as big a factor. Web pages render so fast it was hard to believe. After using the iPhone so much for two and a half years, I’ve become accustomed to web pages rendering (relative to the Mac) slowly. On the iPad, they seem to render nearly instantly. (802.11n Wi-Fi helps too.)

The Maps app is crazy fast. Apps launch fast. Scrolling is fast. The Photos app is fast.

…everyone I spoke to in the press room was raving first and foremost about the speed. None of us could shut up about it. It feels impossibly fast.

Speed, for the iPad, might be the special extra blast of usability that the new touch interface was for the iPhone.

25 January 2010

Registration is open: Humanity+, UK 2010

Filed under: UKH+ — David Wood @ 2:44 pm

The Humanity+, UK 2010 conference, to be held in central London on Saturday 24th April, will be tackling some big questions:

  • How will accelerating technological change affect human mental and physical capabilities?
  • What opportunities and problems will these changes bring?
  • How will these changes impact the environment?
  • How will society, political leaders and institutions react?

In short, Humanity+, UK 2010 is a conference about the future of humanity and the future of technology – and about the forthcoming radical impact of technology on our lives.

Along with some friends and colleagues, I’ve been helping to arrange a first-class line up of speakers for this event.  The agenda is available online:

9:15 – 9:45 REGISTRATION & NETWORKING
9:45 – 10:00 The Humanity+ agenda
David Wood
10:00 – 10:40 Singularity Skepticism: Exposing Exponential Errors
Max More
10:40 – 11:10 Making humans smarter via cognitive enhancers
Anders Sandberg
11:10 – 11:40 The impact of living technology on the future of humanity
Rachel Armstrong
11:40 – 12:00 Panel Discussion: The Future of Humanity – Part 1
12:00 – 13:00 LUNCH & POSTER SESSION
13:00 – 13:40 Human regenerative engineering – theory and practice
Aubrey de Grey
13:40 – 14:10 The Abolitionist Project: Can biotechnology abolish suffering throughout the living world?
David Pearce
14:10 – 14:40 Augmented perception and Transhumanist Art
Amon Twyman
14:40 – 15:00 AFTERNOON BREAK
15:00 – 15:30 DIY Enhancement
Natasha Vita-More
15:30 – 16:00 1: The Singularity University; 2: The Internet of Things
David Orban
16:00 – 16:40 Reducing Existential Risks
Nick Bostrom
16:40 – 17:00 Panel Discussion: The Future of Humanity – Part 2

The speakers include many of the pioneering thinkers of the modern transhumanist movement – also known as “Humanity+“.  It’s a welcome outcome that they will all be in London on the same day.

The conference website is now open for registration.

One reason to register early is that we are planning to organise a dinner in the evening, where it will be possible to continue the discussions from the day, in the company of several of the speakers.  Attendance at the dinner will, necessarily, be limited.  We’ll offer tickets to the people who have already registered, in the order in which they registered 🙂

If you’re not yet sure whether this conference will be of interest to you, then keep an eye on the conference blog in the weeks ahead, where we’ll be discussing more aspects of what the speakers are likely to cover in their talks.

If any member of the press would like to book interview time with one or more of the speakers, please get in touch.

Towards 50 billion connected mobile devices?

Filed under: Connectivity, Internet of Things, M2M — David Wood @ 2:33 am

Some time around December 2008, the number of mobile phone connections worldwide reached – according to an estimate by Informa Telecoms & Media – the staggering total of 4 billion.

The growth of mobile phone usage has been meteoric.  Quoting Wireless Intelligence as its source, an article in Gizmag tracks the rise:

  • The first commercial citywide cellular network was launched in Japan by NTT in 1979
  • The milestone of 1 billion mobile phone connections was reached in 2002
  • The 2 billion mobile phone connections milestone was reached in 2005
  • The 3 billion mobile phone connections milestone was reached in 2007
  • The 4 billion mobile phone connections milestone was reached in February 2009.

How much further can this trend continue?

One line of reasoning says that this growth spurt is bound to slow down, since there are only 6.8 billion people alive on the planet.

However, another line of reasoning points out that:

  • People often have more than one mobile phone connection
  • Mobile phone connections can be assigned to items of equipment (to “machines”) rather than directly to humans.

In this second line of reasoning, there’s no particular reason to expect any imminent cap on the number of mobile phone connections.  For example, senior representatives from Ericsson have on several occasions talked of the possibility of 50 billion connected devices by 2020.

In this line of thinking, the following types of machinery would all benefit from having a wireless network connection:

  • Cars;
  • Energy meters (such as electricity meters);
  • Units used in HVAC (Heat, Ventilating, and Air Conditioning);
  • Mobile point-of-sales terminals;
  • Vending machines;
  • Security alarms;
  • Data storage devices (including electronic book readers);
  • Devices used in navigation;
  • Devices used in healthcare.

These devices are, in general, not mobile phones in any traditional sense.  They are not used for voice communication.  However, they have requirements to share data about their state – including allowing remote access to assess how well they are operating.  The phrase “embedded connectivity” is used to describe them.  Another phrase in common use is “M2M” – meaning “machine to machine”.  As interest in tracking energy usage and resource usage grows, so will the requirement for remote access to meters and monitors of all sorts.  An article in Social Machinery reports:

M2M devices will account for a significant share of new mobile network connections on developed markets in the coming years. The main reason is the high penetration of mobile subscriptions and the proliferation of devices. Today, Europeans have more than 14 devices at home waiting to be connected according to Ericsson consumer research.

If by 2020 there are some 3.5 billion people worldwide with as many devices waiting to be connected as today’s average European, we quickly reach a figure of 50 billion devices with embedded connectivity.
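The arithmetic behind that projection is easy to reproduce:

    # The arithmetic behind Ericsson's 50 billion figure.
    people_by_2020 = 3.5e9      # people with devices "waiting to be connected"
    devices_per_person = 14     # today's average European, per Ericsson
    print(people_by_2020 * devices_per_person)   # 4.9e10, i.e. ~50 billion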

Or do we?

This Tuesday and Wednesday, I’ll be taking part in the Informa “Embedded Connectivity” conference in London.  Informa have assembled a very interesting mix of speakers and panellists, covering all aspects of the emerging embedded connectivity industry.  As well as listening carefully to the presentations and (hopefully) asking some pertinent questions from the floor, I’ll be:

  • Chairing the Day One panel “Building the Business Models for a Connected Future”;
  • Speaking on the Day Two panel “The Future of Connectivity”.

One thing I’ll be keen to do is to understand the context of various predictions about the size of this market.  Indeed, although the figure of 50 billion connected devices is already astonishingly large, it’s by no means the largest figure that has been bandied around:

For example, Amdocs recently spoke in a press release about:

A not-too-distant future, when more than one trillion devices will be connected to the network, an industry phenomenon the company calls “Tera-play.”

(Here, Tera is one thousand times Giga – in other words, shorthand for a trillion.)

Similarly, IBM regularly reference a prediction by IDC:

By 2011, IDC estimates, there will be one trillion Internet-connected devices, up from 500 million in 2006.

IBM repeat this figure in their advance publicity for next month’s Mobile World Congress in Barcelona:

Soon there will be 1 trillion connected devices in the world. A smarter planet will require a smarter communications infrastructure. When things communicate, systems connect. And when systems connect, the world gets smarter. Together let’s build smarter communications.

How do we make sense of these radically different predictions (1 trillion vs. 50 billion)?

Is it just a matter of time?

More likely, it’s a matter of different kinds of connectivity – of which there are probably at least five levels.

Level 1 is the rich and varied connectivity of a regular mobile phone, driven by a human user.  This can have attractive levels of ARPU (average revenue per user) for network operators.

Level 2 involves many of the devices that I mentioned earlier.  These devices will contain a cellular modem, over which data transfer takes place.  This connectivity falls under the label “machine to machine” rather than directly involving a human.  It’s generally thought that the ARPU for M2M will be lower than the ARPU for smartphones: for example, there might only be a data transfer of several kilobytes every week or so.  Network operators will be interested in these devices because of their numbers, rather than their high ARPUs.
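To see why the aggregate numbers could still interest operators, here’s a toy calculation.  Every figure in it is a made-up assumption for illustration only, not industry data:

```python
# Toy calculation: all figures are hypothetical assumptions, not industry data
smartphone_arpu = 30.0   # assumed monthly ARPU for a Level 1 connection
m2m_arpu = 1.0           # assumed monthly ARPU for a Level 2 M2M connection

smartphones = 1e9        # assumed number of smartphone connections
m2m_devices = 20e9       # assumed number of M2M connections

print(f"Smartphone revenue/month: {smartphones * smartphone_arpu / 1e9:.0f} billion")
print(f"M2M revenue/month: {m2m_devices * m2m_arpu / 1e9:.0f} billion")
# Even at a thirtieth of the ARPU, sheer volume makes the M2M total comparable
```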

I’ll skip Level 3 for now, and come back to it afterwards.

Level 4 is where we reach the figure of one trillion connected devices.  However, these devices do not contain a cellular modem.  Nor, in most cases, do they initiate complex data transfers.  Instead, they contain an RFID (Radio Frequency IDentification) tag.  These tags are significantly cheaper than cellular modems.  They can be used to identify animals, items of luggage, retail goods, and so on.  Other sensors keep track of whether items with particular RFID tags are passing nearby.  The local data flow between sensor and RFID tag will not involve any cellular network.

Level 5 takes the idea of “everything connected” one stage further, to the so-called semantic internet, in which clumps of data carry (either explicitly or implicitly) accompanying metadata that identifies and describes the content of that data.  This is an important idea, but there’s no implication here of wireless connectivity.  I include this level in the discussion because the oft-used phrase “The Internet of Things” sometimes applies to Level 4 connectivity, and sometimes to Level 5 connectivity.

So where does the idea of 50 billion connected devices fit in?

An ABIresearch report, “Cellular M2M Connectivity Service Providers”, which is available (excerpted) from the website of Jasper Wireless, makes a good point:

The cost of cellular M2M solutions can be an inhibitor for some applications. Mainstream wireless modules range from approximately $25 to $90. These cost points make them difficult to integrate into some end devices, such as utility meters. A key reason for integration of ZigBee and other SRW (Short-Range Wireless) and PLC (Powerline Carrier) technologies into utility meters for AMI (Advanced Metering Infrastructure) applications is that many utilities do not feel a financially sound business case can be made for the integration of a cellular connection into every meter. Rather, a single meter, or concentrator, receives a cellular connection and is, in turn, connected to a group of local meters through less-expensive SRW or PLC connections.

In other words, there may be many devices whose individual wireless connectivity (Level 3 connectivity):

  • Is more complicated than an individual RFID tag (Level 4), but
  • Is simpler (and less expensive) than cellular modems (Level 2).

As time passes, the falling cost of wireless modules will make it more likely that solution designers deploy them widely.  At the same time, however, the simpler hardware options mentioned above will also keep getting cheaper.

It’s for these reasons that I’m inclined to think that the number of cellular modems in 2020 will be less than the above ballpark figure of 50 billion.  But I’m ready to change my mind!
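To recap, here is the five-level taxonomy gathered in one place, as a small sketch.  The field names and examples are my own summary of the discussion above, not any industry standard:

```python
# My own summary of the five connectivity levels discussed above
from dataclasses import dataclass

@dataclass
class ConnectivityLevel:
    level: int
    description: str
    connectivity: str
    example: str

LEVELS = [
    ConnectivityLevel(1, "human-driven mobile phone", "own cellular modem", "smartphone"),
    ConnectivityLevel(2, "M2M device", "own cellular modem", "vending machine"),
    ConnectivityLevel(3, "device behind a concentrator",
                      "SRW or PLC link to a shared cellular gateway", "utility meter"),
    ConnectivityLevel(4, "passively identified item",
                      "RFID tag read by nearby sensors", "luggage tag"),
    ConnectivityLevel(5, "semantically described data",
                      "metadata, no wireless connection implied", "tagged dataset"),
]

for lvl in LEVELS:
    print(f"Level {lvl.level}: {lvl.description} ({lvl.connectivity}), e.g. {lvl.example}")
```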

Footnote: A useful additional prediction data point has just been issued by Juniper Research:

The number of Mobile Connected M2M and Embedded Devices will rise to almost 412 million globally by 2014 with several distinct markets accounting for the increase in their number.

The markets include: Utility metering, Mobile Connected Buildings, Consumer & Commercial Telematics and Retail & Banking Connections. These areas will all show substantial growth in both device numbers and in the service revenues they represent, while Healthcare monitoring applications will begin to reach the commercial rollout stage from 2012.

“The most widespread category will be connections related to smart metering, driven partly by government initiatives to reduce carbon emissions,” says Anthony Cox, Senior Analyst at Juniper Research. Other areas, such as the healthcare sector, will ultimately see more potential in achieving service revenues, he says.

21 January 2010

Selecting the most exciting mobile startups

Filed under: Barcelona, innovation, Mobile Monday, startups — David Wood @ 3:32 pm

  • Study the online details of each of 50 attractive mobile startup companies;
  • Identify, from this list, the 10 that are “the best of the best”.

That was the challenge posed to me earlier this week by Rudy de Waele, the Simon Cowell of the mobile industry.

As in previous years, the Monday of Mobile World Congress week – when the mobile industry congregates in Barcelona – will feature a Mobile Premier Awards event.  The event consists of a series of quickfire pitches by companies selected by Mobile Monday chapters worldwide.  These companies are competing for a number of awards, including the Mobile Premier Award in Innovation.

By this stage in the contest, there are 50 candidates.  Each has been selected as the result of a process in one of the Mobile Monday chapters.  We’re now at the stage of reducing this list to 20, to avoid the event in Barcelona stretching on too long.  Responsibility for this reduction falls to a group of people described as “an online jury of industry experts”.

I was honoured to be asked to take part in this jury, but at the same time I was apprehensive.  It’s a considerable responsibility to look at the information about each of 50 companies, and to find the most deserving 10 from that list.  (Each jury member picks 10.  The organisers aggregate the votes from all 25 jury members, and the top-scoring 20 companies are invited to make a pitch at the event in Barcelona.)
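For the curious, the aggregation step is simple enough to sketch in a few lines of code.  The company names below are hypothetical placeholders, and the ballots are truncated for brevity:

```python
# Sketch of the jury aggregation: 25 jurors each pick 10 companies;
# the 20 most-picked companies go forward to Barcelona
from collections import Counter

ballots = [
    ["StartupA", "StartupB", "StartupC"],   # juror 1's picks (truncated)
    ["StartupB", "StartupC", "StartupD"],   # juror 2's picks (truncated)
    # ... one list of 10 picks per juror, 25 ballots in all
]

votes = Counter(company for ballot in ballots for company in ballot)
shortlist = [company for company, _ in votes.most_common(20)]
print(shortlist)
```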

The guidelines to jury members asked that we evaluate each candidate based on:

  • originality, creativity and innovation;
  • technical and operational feasibility;
  • economic and financial viability.

I created a spreadsheet for my own use, and started following the links Rudy had provided to each company’s entry on dotopen.com.  In turn, these entries pointed to further information, such as the companies’ own websites.

As I anticipated, the selection process was far from easy!  I quickly found 10 companies that I thought definitely deserved to attend Barcelona – and I had only worked about one third of the way through the list of nominees…

Occasionally I thought that a particular entry looked comparatively uninteresting (for example, that it was a “Me too” offering).  But when I clicked through to the company’s own website and looked in more detail at what they had done, I would think to myself “Mmm… this startup has a strong proposition after all”.

However, by close of play yesterday I had made my selection.  It’s inappropriate for me to publicly mention any companies at this time.  But I will say that I expect the event in Barcelona will give strong evidence of some companies executing well on some very interesting business ideas.

19 January 2010

Mobile phones and sustainability

Filed under: Energy, GreenTouch, Mobile Monday, sensors, sustainability — David Wood @ 1:55 am

What role can mobile phones play in reducing energy usage worldwide and assisting the transformation to a sustainable economy?  More widely, what role can the mobile phone industry play in this whole process?

That topic was addressed at yesterday’s Mobile Monday London event, held (unusually) in Brighton.  One of the organisers, Jo Rabin, commented:

As any Londoner knows, Brighton is one of the further suburbs, and like the rest of South London, not on the tube. That said, a modest 50 minutes and £10 return advance booking gets you there in comfort from London’s convenient Victoria station (and others)

The event was entitled “Mobile Application Sustainability” and featured:

One striking claim, near the beginning of the event, came when Galit Zadok described the mobile phone as “the least sustainable item of consumer electronics, ever” – on account of the very high number of mobile phones that are replaced every year.  To quote from the Green Switch paper (PDF):

an average replacement rate of 18 months, accounting for 500 million handsets replaced last year in Europe alone, … makes the mobile phone the consumer electronic device with the highest replacement rate in history
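Those two quoted figures hang together, as a quick back-of-envelope check shows (the arithmetic is mine, not the paper’s):

```python
# Back-of-envelope check of the quoted replacement figures
replaced_per_year = 500e6        # handsets replaced in Europe last year (quoted)
replacement_cycle_years = 1.5    # "average replacement rate of 18 months" (quoted)

implied_installed_base = replaced_per_year * replacement_cycle_years
print(f"{implied_installed_base / 1e6:.0f} million handsets in active use")
# ~750 million - a plausible ballpark for Europe's installed base at the time
```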

Galit noted some positive developments too, mainly over phone chargers.  Again quoting from the Green Switch paper:

Regulation is encouraging manufacturers to make reductions in no-load energy demands, and handset manufacturers are responding.  By 2008 Sony Ericsson reduced the average no-load power consumption by more than 90%, whilst Nokia has achieved 80% reduction.

To further spur the industry into action, in October 2009, the ITU has given its stamp of approval to an energy-efficient one-charger-fits-all new mobile phone solution. The new Universal Charging Solution (UCS) enables the same charger to be used for all future handsets, regardless of make and model. In addition to dramatically cutting the number of chargers produced, shipped and subsequently discarded as new models become available, the new standard will reduce the energy consumed by the charger. The new UCS standard was based on input from the GSMA, which predicts elimination of 51,000 tonnes of redundant chargers, and a subsequent reduction of 13.6 million tonnes in greenhouse gas emissions each year.

I was less convinced when listening to the claims of the Green Switch speakers that:

  • The power consumption of the handsets themselves amounts to a significant proportion of overall human energy usage;
  • The handset power consumption problem becomes worse as more and more applications are included on the device;
  • Therefore people should be encouraged to use simpler devices – or to run their devices in a “green” mode in which fewer applications are enabled.

To be clear, I’m all in favour of reducing the power used by mobile phone applications, since this will lead to longer periods between battery charges, and will therefore improve the user experience.  Short battery life is a long-standing, deeply difficult issue for manufacturers of smart mobile handsets.  I’ve also long recognised the problems that are posed as the amount of software included on a device increases.  For example, here’s an excerpt from an “Insight” piece that I wrote for the symbian.com website in November 2006 (copy available here):

Standing in opposition to the potential for swift continuing increase in mobile technology, however, we face a series of major challenges.  I call them “horsemen of the apocalypse”.  They include fire, flood, plague, and warfare.

Fire” is the challenge of coping with the heat generated by batteries running ever faster.  Alas, batteries don’t follow Moore’s Law.  As users demand more work from their smartphones, their battery lifetimes will tend to plummet.  The solution involves close inter-working of new hardware technology (including multi-core processors) and highly sophisticated low-level software.  Together, this can reduce the voltage required by the hardware, and the device can avoid catching fire (or otherwise drawing too much power) as it performs its incredible calculations.

Flood” is the challenge of coping with enormous quantities of additional software.  Each individual chunk of new software adds value, but when they coalesce in large quantities, chaos breaks loose: software projects delay almost indefinitely in their integration phase (think of Windows Longhorn), and users struggle to find their favourite functionality in amongst seething masses of menu options.  As summarised in Brooks’ Law (which ought to be as famous as Moore’s), “Adding manpower to a late software project makes it later”.  In other words, too many cooks spoil the broth.  Like the problem of fire, flood requires more than just money or people to solve.  It requires the right core software architecture, which allows add-on software to co-exist harmoniously…

So I care about the problems of power usage on mobile phones, and about the problems arising from an abundance of software on these devices.  However, I think it’s misleading to characterise these problems as problems of sustainability.

Here, my thinking follows the lead of David MacKay, Chief Scientific Advisor to the UK government’s Department of Energy and Climate Change, as spelt out in his book “Sustainable Energy – Without the Hot Air” and in other writing:

Turning phone chargers off when they are not in use is a feeble gesture, like bailing the Titanic with a teaspoon.

The widespread inclusion of “switching off phone chargers” in lists of “10 things you can do” is a bad thing, because it distracts attention from more effective actions that people could be taking.

(For some more details, page 70 of David MacKay’s book compares power consumption for different household items.)
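The teaspoon analogy is easy to verify with rough numbers in the spirit of MacKay’s estimates; the figures below are my paraphrase of his ballpark values, not exact quotations:

```python
# Rough numbers in the spirit of MacKay's estimates (paraphrased, not exact)
charger_idle_watts = 0.5                               # no-load draw of an older charger
charger_kwh_per_day = charger_idle_watts * 24 / 1000   # ~0.012 kWh per day

per_person_kwh_per_day = 125   # MacKay's ballpark for total UK energy use per person

fraction = charger_kwh_per_day / per_person_kwh_per_day
print(f"{fraction:.3%} of daily per-person energy use")   # ~0.01%: the teaspoon
```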

Nevertheless, despite this quibble, I strongly agree that there’s a great deal that the mobile phone industry should be doing, to reduce energy usage worldwide and assist the transformation to a sustainable economy:

  1. As various speakers noted, applications on mobile phones can collect (via onboard sensors) useful information about a person’s overall energy usage, and present this information back to the user.  Here, rather than being part of the problem, the mobile phone can be part of the solution;
  2. Mobile phones can also help communicate ideas about alternative energy solutions to users – solutions that are relevant to what the user is currently doing;
  3. Improved recycling of mobile phones will help too: making more phones software upgradable will be a step forward;
  4. There’s considerable scope for reducing the energy consumption on the server side of mobile phone networks (where it matters most).

A press release from yesterday highlights an example of the final point.  The press release is entitled “M1 looks at 35% reduction in carbon footprint in Singapore”.  Here’s an excerpt:

MobileOne (M1), the leading mobile operator in Singapore, expects to achieve up to 35% reduction of its telecommunications networks carbon footprint by early 2011. This is made possible by Nokia Siemens Networks Flexi Multiradio base stations. The vendor is currently modernizing M1’s 2G network to prepare it for a smooth transition to Long Term Evolution (LTE).

In addition, M1 is set to start an LTE trial in February 2010. Undertaken in collaboration with Nokia Siemens Networks, the trial will last two months and marks another step in M1’s commitment to deliver an energy efficient, high-speed mobile broadband service to its subscribers.

The LTE trial includes Nokia Siemens Networks’ Flexi Multiradio Base Stations that enhance network coverage and capacity, while lowering site power consumption significantly. This forms part of its end to end Energy Solutions portfolio, which is a clear commitment from Nokia Siemens Networks to drive innovative solutions for energy efficiency…

(Thanks to Stefan Constantinescu for drawing attention to this particular press release.)

If a 35% carbon footprint reduction sounds impressive, here’s an even larger figure to consider. The newly formed Green Touch consortium announced a bold vision as part of their launch activities last week:

We aim to reduce energy consumption in worldwide ICT networks by a factor of 1000.

This is reiterated in the Green Touch description of “challenges and opportunities”:

The goal of this new consortium is to create the technologies needed to make communications networks 1000 times more energy efficient than they are today.

A thousand-fold reduction is roughly equivalent to being able to power the world’s communications networks, including the Internet, for three years using the same amount of energy that it currently takes to run them for a single day.

An early goal for this initiative is to deliver, within five years, a reference architecture, specifications, technology development roadmap and demonstrations of key components needed to realize a fundamental re-design of networks (including the introduction of entirely new technologies) that can reduce energy consumption – both by individuals and in aggregate – by 1000 times as compared to current levels.

Through a focused and collaborative cross-industry initiative, we intend to define the challenge, conduct breakthrough research, and deliver innovative new technologies and sustainable solutions that can be applied across ICT and beyond — for a greener and more sustainable communications future and for the benefit of all.

Their webpage “ICT Industry Combats Climate Change” provides more details:

Research from Bell Labs determined that today’s ICT networks have the potential to be 10,000 times (four orders of magnitude) more efficient than they are today. This conclusion comes out of Bell Labs’ fundamental analysis of the underlying components of ICT networks and technologies (optical, wireless, electronics, processing, routing, architecture, etc.) and studying their physical limits by applying established formulas such as Shannon’s Law (Claude Shannon being the ‘father of information theory’).

Achieving even one-tenth of Shannon’s lower limit would cut network energy consumption by a factor of 1,000. A thousand-fold reduction in energy consumption is roughly equivalent to being able to power the world’s communications networks, including the Internet, for three years using the same amount of energy that it currently takes to run them for a single day.

These huge gains can only be achieved by rethinking the way telecom networks are designed in terms of low energy processing. Today’s networks are designed for optimal capacity, not efficient energy use. What is needed is a major breakthrough, a radical re-design of networks, and that can only be achieved through the contributions of all essential participants, from basic and applied researchers and component suppliers to network operators, equipment and system suppliers and governments.

While these re-designed networks would dramatically decrease direct ICT energy consumption, the energy savings would be overshadowed by the indirect effects. Because ICT constitutes what the World Economic Forum describes as “our collective nervous system,” touching nearly every industry sector, a shift in the magnitude of ICT energy usage would reverberate throughout the global economy. By further enabling energy efficiencies across the energy-hungry portions of human enterprise, the ICT sector holds the potential to substantially contribute to the fight against climate change on a global scale…
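The “three years on one day’s energy” framing checks out arithmetically, as does the relationship between the 10,000-fold potential and the 1,000-fold target:

```python
# Checking the "three years on one day's energy" framing
reduction_factor = 1000   # targeted efficiency gain
days_on_one_days_energy = 1 * reduction_factor
print(f"{days_on_one_days_energy / 365.25:.1f} years")   # ~2.7, i.e. roughly three years

# Achieving one-tenth of the 10,000x physical potential yields the 1,000x target
print(10_000 // 10)   # 1000
```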

What kind of people are behind this consortium?  It’s an impressive list:

Service Providers: AT&T, China Mobile, Portugal Telecom, Swisscom, Telefonica

Academic Research Labs: The Massachusetts Institute of Technology’s (MIT) Research Laboratory of Electronics (RLE), Stanford University’s Wireless Systems Lab (WSL), the University of Melbourne’s Institute for a Broadband-Enabled Society (IBES)

Government and Nonprofit Research Institutions: The CEA-LETI Applied Research Institute for Microelectronics (Grenoble, France), The Foundation for Mobile Communications (Portugal), imec (Headquarters: Leuven, Belgium), The French National Institute for Research in Computer Science and Control (INRIA)

Industrial Labs: Bell Labs, Samsung Advanced Institute of Technology (SAIT), Freescale Semiconductor.

The press release also contains endorsements from:

  • Dr. Steven Chu, US Secretary of Energy
  • Ed Miliband, Secretary of State for Energy and Climate Change, UK
  • Christian Estrosi, Minister for Industry, France
  • Jong-Soo Yoon, Director General, Ministry of Environment, South Korea
  • Paulo Campos, Secretary of State for Public Works and Communications, Portugal

Next time MoMo London looks at the topic of mobile sustainability, I hope there will be time to include an update on progress from the Green Touch team!

Footnote: A ten-minute video summary of last week’s press conference launching Green Touch is available online.
