dw2

23 November 2008

Problems with panels

Filed under: communications, passion — David Wood @ 6:56 pm

As an audience member, I’ve been at the receiving end of some less-than-stellar panel discussions at conferences in the last few months. On these occasions, even though there’s good reason to think that the individuals on the panels are often very interesting in their own right, somehow the “talking heads” format of a panel can result in low energy and low interest. The panellists make dull statements in response to generic questions and … interest seeps away.

On the other hand, I’ve also recently seen some outstandingly good panels, where the assembled participants bring real collective insight, and the audience pulse keeps beating. Here are two examples:

The format of this fine RSA panel was in the back of my mind as I prepared, last Monday, to take part in a panel myself: “What’s so smart about Smartphone Operating Systems”, at the Future of Mobile event in London. I shared the stage with some illustrious industry colleagues: Olivier Bartholot of Purple Labs, Andy Bush of the LiMo Foundation, Rich Miner of Android, James McCarthy of Microsoft, and the panel chair, Simon Rockman of Sony Ericsson. I had high hopes of the panel generating and conveying some useful new insights for the audience.

Alas, for at least some members of the audience, this panel fell into the “less-than-stellar” category mentioned above, rather than the better examples:

  • Tomaž Štolfa, writing in his blog “Funky Karaoke”, rated this panel as just 1 out of 5, with the damning comment “a bunch of mobile OS guys, talking about the wrong problems. Where are cross platform standards?!?”; Tomaž gave every other panel or speaker a rating of at least 3 out of 5;
  • Adam Cohen-Rose, in his blog “Expanding horizons”, summed up the panel as follows: “This was a rather boring panel discussion: despite Simon’s best attempts to make the panellists squirm, they stayed very tame and non-committal. The best bits was the thinly veiled spatting between Microsoft and Google — but again, this was nothing new…”;
  • The Twitter back-channel for the event (“#FOM”) had remarks disparaging this panel as “suits” and “monologue” and “big boys”.

It’s true that I can find other links or tweets that were more complimentary about this panel – but none of these comments pick this panel out as being one of the highlights of the day.

As someone who takes communication very seriously, I have to ask myself, “what went wrong?” – and, even more pertinently, “what should I do differently, for future panels?”.

I toyed for a while with the idea that overuse of Twitter diminishes some audience members’ ability to concentrate, and to pick out what’s genuinely interesting in what’s being said. This is akin to Nicholas Carr’s argument that “Google is making us stupid”:

“Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle…”

After all, I do think that I said something interesting when it was my turn to speak – see the script I prepared in advance. But after more reflection, I gave up on the idea of excusing the panel’s poor rating by that kind of self-serving argument (which blames the audience rather than panellists). That was after I remembered my own experience as being on the receiving end of lots of uninspiring panels – as I mentioned earlier. Further, I remembered that, when these panels started to become boring, my own attention would wander … so I would miss anything more interesting that was said later on.

So on reflection, here are my conclusions, for avoiding similar problems with future panels:

  1. Pre-prepared remarks are fine. There’s nothing wrong in itself with having prepared remarks that take several minutes to deliver. These opening comments can and should provide better context for the Q&A part of the panel that follows;
  2. However, high energy is vital; especially with an audience where people might get distracted, I ought to be sure that I speak with passion, as well as with intellectual rigour; this may be hard when we’re all sitting down (that’s why sofa panels are probably the worst of all), but it’s not impossible;
  3. The first requirement is actually to be sure the audience is motivated to listen to the discussion – the panel participants need to ensure that the audience recognise the topic as sufficiently relevant. On reflection, our “mobile operating systems” panel would have been better placed later on in the agenda for the day, rather than right at the beginning. That would have allowed us to create bridges between problems identified in earlier sessions, and the solutions we wanted to talk about;
  4. “Less is more” can apply to interventions in panels as well as to product specs (and to blogs…); instead of trying to convey so much material in my opening remarks, I should have prioritised at most two or three soundbites, and looked to cover the others during later discussion.

These are my thoughts for when I participate as a panellist on someone else’s panel. When I am a chair (as I’ll be at the Symbian Partner Event next month in San Francisco) I’ll have different lessons to bear in mind!

21 November 2008

Emulating the human brain

Filed under: AGI, brain simulation, UKTA — David Wood @ 7:00 pm

Artificial Intelligence (AI) already does a lot to help me in my life:

  • The real-time route calculation (and re-calculation) capabilities of my TomTom satnav system are extremely handy;
  • The automated language translation functionality inside Google web-search, whilst far from perfect, often allows me to understand at least the gist of webpages written in languages other than English;
  • The intelligent recommendation engine of Amazon frequently brings books to my attention that I am glad to investigate further.

On the other hand, the field of general AI has failed to progress as quickly as some of its supporters over the years had hoped. The Wikipedia article on the History of AI lists some striking examples of significant over-optimism among leading AI researchers:

  • 1958, H. A. Simon and Allen Newell: “within ten years a digital computer will be the world’s chess champion” and “within ten years a digital computer will discover and prove an important new mathematical theorem.”
  • 1965, H. A. Simon: “machines will be capable, within twenty years, of doing any work a man can do.”
  • 1967, Marvin Minsky: “Within a generation … the problem of creating ‘artificial intelligence’ will substantially be solved.”
  • 1970, Marvin Minsky (in Life Magazine): “In from three to eight years we will have a machine with the general intelligence of an average human being.”

Prospects for fast progress with general AI remain controversial. As we gather more and more silicon power into smartphones and other computers, will this mean these devices become more and more intelligent? Or will they simply be fast rather than generally intelligent?

In this context, one interesting line of analysis is to consider a separate but related question: to what extent will it be possible to create a silicon emulation of the brain itself (rather than to focus on algorithms for intelligence)?

My friend Anders Sandberg, Neuroethics researcher at the Future of Humanity Institute, Oxford University, will be addressing this question in a presentation tomorrow afternoon (Saturday 22nd November) in Central London. The presentation is entitled “Emulating brains: silicon dreams or the next big thing?”

Anders describes his talk as follows:

The idea of creating a faithful copy of a human brain has been a popular philosophical thought experiment and science fiction plot for decades. How close are we to actually doing it, how could it be done, and what would the consequences be? This talk will trace trends in computing, neuroscience, lab automation and microscopy to show how whole brain emulation could become feasible in the mid term future.

The talk is organised by the UKTA. Last weekend, at the Convergence08 “unconference” in Mountain View, California, Anders gave an earlier version of the same talk. George Dvorsky blogged the result:

Convergence08: Anders Sandberg on Whole Brain Emulation

The term ‘whole brain emulation’ sounds more scientific than it does science fiction like, which may bode well for its credibility as a genuine academic discipline and area for inquiry.

Sandberg presented his whole brain emulation roadmap which had a flowchart like quality to it — which he quipped must be scientific because it was filled with arrows.

Simulating memory could be very complex, possibly involving chemical transference in cells or drilling right down to the molecular level. We may even have to go down to the quantum level, but no neuroscientist that Anders knows takes that possibility seriously…

As Anders himself told me afterwards,

…interest was high but time limited – I got a lot of useful feedback and ideas for making the presentation better.

I’m expecting a fascinating discussion.

19 November 2008

New mobile OSes mean development nightmares

Filed under: collaboration, fragmentation, Future of Mobile, innovation — David Wood @ 11:30 pm

Over on TechRadar, Dan Grabham has commented on one of the themes from Monday’s Future of Mobile event in the Great Hall in High Street Kensington, London:

The increase in mobile platforms caused by the advent of the Apple iPhone and Google’s Android are posing greater challenges for those who develop for mobile. That was one of the main underlying themes of this week’s Future of Mobile conference in London.

Tom Hume, Managing Director of developer Future Platforms, picked up on this theme, saying that from a development point of view things were more fragmented. “It’s clear that it’s an issue for the industry. I think it’s actually got worse in the last year or so.”

Indeed, many of the panellists representing the major OS vendors said that they expected some kind of consolidation over the coming years as competition in the mobile market becomes ever fiercer.

The theme of collaboration vs. competition was one that I covered in my own opening remarks on this panel. Before the conference, the panel chairman, Simon Rockman of Sony Ericsson, had asked the panellists to prepare a five minute intro. I’ll end this posting with a copy of what I prepared.

Before that, however, I have another comment on the event. One thing that struck me was the candid comments from many of the participants about the dreadful user experience that mobile phones deliver. So the mobile industry has no grounds for feeling pleased with itself! This was particularly emphasised during the rapid-fire “bloggers 6×6 panel”, which you can read more about from Helen Keegan’s posting – provocatively entitled “There is no future of mobile”. By the way, Helen was one of the more restrained of that panel!

So, back to my own remarks – where I intended to emphasise that, indeed, we face hard problems within our industry, and need new solutions:

This conference is called the Future of Mobile – not the Present Day of Mobile – so what I want to talk about is developments in mobile operating systems that will allow the mobile devices and mobile services of, say, 5 years time – 2013 – to live up to their full potential.

I believe that the mobile phones of 2013 will make even the most wonderful phones of today look, in comparison, jaded, weak, slow, and clunky. It’s my expectation that the phones used at that time, not just by technology enthusiasts and early adopters, but also by mainstream consumers, will be very considerably more powerful, more functional, more enchanting, more useful, more valuable, and more captivating than today’s smartphones.

To get there is going to require a huge amount of sophisticated and powerful software to be developed. That’s an enormous task. To get there, I offer you three contrasts.

The first contrast is between cooperation and competition.

The press often tries to portray some kind of monster, dramatic battle of mobile operating systems. In this battle, the people sitting around this table are fierce competitors. It’s the kind of thing that might sell newspapers. But rather than competition, I’m more interested in collaboration. The problems that have to be solved, to create the best possible mobile phone experiences of the next few years, will require cooperation between the people in the companies and organisations represented around this table – as well as with people in those companies and organisations that don’t have seats here at this moment, but which also play in our field. Instead of all of us working at odds with each other, spreading our energies thinly, creating incomplete semi-satisfactory solutions that are at odds with each other, it would be far better for us to pool more of our energies and ideas.

I’m not saying that all competition should be stopped – far from it. An element of competition is vital, to prevent a market from becoming stale. But we’ve got too much of it just now. We’ve got too many operating systems that are competing with each other, and we’ve got different companies throughout the value chain competing with each other too strongly.

Where the industry needs to reach is around 3 or 4 major mobile operating systems – whereas today the number is somewhere closer to 20 – or closer to 200, if you count all the variants and value-chain complications. It’s a fragmentation nightmare, and a huge waste of effort.

As the industry consolidates over the next few years, I have no doubt that Symbian OS will be one of the small number of winning platforms. That brings me to my second contrast – the contrast between old and new – between past successes and future successes.

Last year, Symbian was the third most profitable software company in the UK. We earned licensing revenues of over 300 million dollars. We’ve been generating substantial cash for our owners. We’re in that situation because of having already shipped one quarter of a billion mobile phones running our software. There are at present some 159 different phone models, from 7 manufacturers, shipping on over 250 major operator networks worldwide. That’s our past success. It grows out of technology that’s been under development for 14 years, with parts of the design dating back 20 years.

But of course, past success is no guarantee of future success. I sometimes hear it said that Symbian OS is old, and therefore unsuited to the future. My reply is that many parts of Symbian OS are new. We keep on substantially improving it and refactoring it.

For example, we introduced a new kernel with enhanced real-time capabilities in version 8.1b. We introduced a substantial new platform security architecture in v9.0. More recently, there’s a new database architecture, a new Bluetooth implementation, and new architectures for IP networking and multi-surface graphics. We’re also on the point of releasing an important new library of so-called “high level” programming interfaces, to simplify developers’ experience with parts of the Symbian OS structure that sometimes pose difficulty – like text descriptors, active objects, and two-phase object construction and cleanup. So there’s plenty of innovation.

The really big news is that the pace of innovation is about to increase markedly – for three reasons, all tied up with the forthcoming creation of the Symbian Foundation:

  1. The first reason is a deeper and more effective collaboration between the engineering teams in Symbian and S60. This change is happening because of the acquisition of Symbian by Nokia. By working together more closely, innovations will reach the market more quickly.
  2. The second reason is because of a unification of UI systems in the Symbian space. Before, there were three UI systems – MOAP in Japan, UIQ, and S60. Now, given the increased flexibility of the latest S60 versions, the whole Symbian ecosystem will standardise on S60.
  3. The third reason is because of the transition of the Symbian platform – consisting of Symbian OS together with the S60 UI framework and applications – into open source. By adopting the best principles of open source, Symbian expects to attract many more developers than before to participate in reviewing and improving and creating new Symbian platform code. So there will be more innovation than before.

This brings me to the third of the three contrasts: openness vs. maturity.

Uniquely, the Symbian platform has a stable, well-tested, battle-hardened software base and software discipline, that copes well with the hard, hard task of large-scale software integration, handling input from many diverse and powerful customers.

Because of that, we’ll be able to cope with the flood of innovation that open source will send our way. That flood will lead to great progress for us, whereas for some other software systems, it will probably lead to chaos and fragmentation.

In summary, I see the Symbian platform as being not just one of several winners in the mobile operating system space, but actually the leading winner – and being the most widely used software platform on the planet, shipping in literally billions of great mobile devices. We’ll get there, because we’ll be at the heart of a huge community of impassioned and creative developers – the most vibrant developer ecosystem on the planet. Although the first ten years of Symbian’s history has seen many successes, the next ten years will be dramatically better.

Footnote: For other coverage of this event, see eg Tom Hume, Andrew Grill, Vero Pepperrell, Jemima Kiss, Dale Zak, and a very interesting Twitter channel (note to self: it’s time for me to stop resisting Twitter…)

16 November 2008

Schrodinger’s Rabbits

Filed under: books, multiverse, philosophy, quantum mechanics — David Wood @ 9:22 pm

Long before I ever heard of smartphones, or the C++ programming language, or even C, I was intrigued by quantum mechanics. In November 1979, as a sophomore undergraduate, I was fascinated to read an article in the latest edition of the Scientific American: “The Quantum Theory and Reality”, written by French theoretical physicist Bernard d’Espagnat. As recorded in the Wikipedia article on d’Espagnat, this article contains the stunning quote,

The doctrine that the world is made up of objects whose existence is independent of human consciousness turns out to be in conflict with quantum mechanics and with facts established by experiment.

What particularly struck me was the claim that “facts established by experiment” were at odds with common-sense ideas about reality. These experiments involved the now-famous “correlation at a distance” experiments inspired by a paper originally authored in 1935 by Albert Einstein and two co-workers: Boris Podolsky and Nathan Rosen. The initials of the authors – EPR – became synonymous with these experiments. Particularly when viewed through the analysis of John Bell, who devised some surprisingly counter-intuitive inequalities applicable to correlations between results in EPR experiments, these experiments seemed to defy all explanation.

Early in 1980, Professor Mary Hesse of the History and Philosophy of Science department at Cambridge, gave one of the then-frequent lunchtime presentations on mathematical topics, to students (like me) sufficiently interested in such topics to give up their free time in pursuit of greater understanding of mathematics. Prof Hesse chose the philosophical problems of quantum mechanics as her subject for the meeting. I listened carefully, to find out if there were any good rebuttals to the claims made by d’Espagnat. My conclusion was that the whole area was decidedly weird. As months passed, I also asked various maths lecturers about this – but their advice was generally not to think about these questions!

Several years later, I chose Philosophy of Science as the area for my postgraduate studies, with a particular focus on trying to make sense of quantum mechanics. During that time, I even made my first trip to Finland – not to visit Nokia (since I had never heard of them at that time), but to attend a conference in 1985 in picturesque Joensuu. It was a conference to commemorate 50 years since the publication of the EPR paper. Nathan Rosen, then aged 76, was the guest of honour.

The more I studied the philosophical problems of quantum mechanics, the more I came to respect what initially seemed to be the weirdest and most unlikely solution of all. This is the so-called “Many worlds” interpretation (though, as it turns out, the name is misleading):

  • Originally proposed by Hugh Everett III, in 1957;
  • It refuses to introduce some kind of demarcation between the quantum realm, where superposition (“wavelike behaviour”) is allowed, and the classical realm, where things need to be more definite;
  • Instead, it takes very seriously the idea that macroscopically large objects also spread out over a range of diverse states – in a so-called quantum superposition;
  • This includes the shocking and apparently absurd notion that even we humans end up (all the time) in a superposition of different states;
  • For example, although I subjectively feel, as I type these words now, that this is the unique instance of myself, there are countless other instances of myself, spread out in a wider multiverse, all having diverged from this particular instance as a result of cascading quantum interactions;
  • In some of these other instances, I am employed by companies other than Symbian (my employer for the last ten years in this instance); in yet other instances, Symbian was never created, or I remained in academia instead of joining the world of business, or human civilisation was destroyed when the Cuban missile crisis went wrong, or the values of physical constants were not capable of giving rise to complex matter – and so on.

If objections to this idea come to your mind, it’s very likely that the same objections came to my mind during the years I pursued my postgraduate studies. For example, to the objection “why don’t we feel ourselves splitting”, comes the reply given by Hugh Everett himself:

Well, Copernicus made the analysis that the Earth was moving around the sun, undoing thousands of years of belief that the sun was going around the Earth, and people asked him, If the Earth is moving around the sun, then why don’t I feel the Earth move?

In time, I deprioritised my postgraduate studies, to take a series of jobs, first as a part-time university supervisor, then as a maths tutor at a sixth form college, and then (from 1988) as a software engineer. But occasionally, I come across a link that re-awakens my fascination with quantum theory and the many worlds interpretation. Recently, there have been quite a lot of these links:

  • The son of Hugh Everett is a reasonably famous singer and guitarist in his own right – Mark Everett, also sometimes known as “Mr E” or just “E”;
  • Mark Everett has just released an autobiography “Things the Grandchildren Should Know” which addresses his growing awareness of his father’s remarkable thinking (Hugh Everett died, of a heart attack, in 1982, when Mark was just 19);
  • There has also been a PBS documentary on this same topic, “Parallel worlds, parallel lives“, which has generated considerable media interest (such as this piece in the Scientific American);
  • Coincidentally, various conferences have taken place in the last year or so, commemorating the fiftieth anniversary of Everett’s original thesis;
  • For example, several people I remember from my own postgraduate studies days took part in a conference “Everett at 50” at Oxford.

With this growing abundance of material about Everett’s ideas, I’d like to highlight what I believe to be one of the best books on the subject. It’s “Schrodinger’s Rabbits: The Many Worlds of Quantum”, written by Colin Bruce. It deserves to be a lot better known:

  • The author has a pleasant writing style, mixing in detective story writing and references to science fiction stories, with analysis of philosophical ideas;
  • There’s no complex maths to surmount – though the reader will have to think carefully, going through various passages (the effort is worth it!);
  • Unlike many books which seem to repeat the same few themes spread over many chapters, each chapter in this book introduces important new concepts – which is another reason why it’s rewarding to read it;
  • The book highlights some significant difficulties faced by the many worlds theories, but still (in my view) makes it clear that these theories are more likely to be true than false.

Alternatively, for a book that is even wider in its scope (though less convincing in some of its arguments), try “The Fabric of Reality: The Science of Parallel Universes and Its Implications” by David Deutsch – who in addition to breaking new ground in thinking about the philosophy of quantum mechanics, also happens to be a pioneer of the theory of quantum computing.

Finally, for a book that generally leaves readers in no doubt that any “common sense” interpretation of quantum mechanics fails, take a look at the stunningly well-written “Quantum Reality: Beyond the New Physics” by Nick Herbert.

11 November 2008

Symbian Partner Event, San Francisco, 4th Dec

Filed under: Events, partners, Symbian Foundation — David Wood @ 1:59 pm

Historically, admission to Symbian Partner Events has been restricted to signed-up members of Symbian’s Partner Network. However, for our event at the Palace Hotel in San Francisco on Thursday 4th December, we’re going to open up participation.

Some parts of the day will still be restricted to signed partners. However, most of the proceedings on the day will be open to a wider group of attendees – such as mobile developers, journalists, the open source community, and representatives of companies that may be considering partnering with Symbian.

Space will be limited so anyone thinking of attending should register their interest as soon as possible via the event website.

Full details of speakers, panellists, and other sessions at the event will be published on the event website shortly. In the meantime, here are a few highlights:

  • Keynote presentations from a leading member of the open source community, senior representatives from network operators and phone manufacturers, Symbian executives, and the management of the Symbian Foundation;
  • “Fast Forward” technology seminars
  • An open roundtable discussion on “Succeeding in the US: the key factors”
  • “Symbian Foundation Platform Architecture Overview”
  • “Symbian Foundation Q&A”.

There will also be an exhibition of partner products and solutions, as well as ample opportunity to network with movers-and-shakers of the global mobile industry.

Footnote: Here’s the LinkedIn entry for this event.

3 November 2008

Mobile 2.0 keynote

Filed under: developer experience, fragmentation, integration, Open Source, vision — David Wood @ 11:32 pm

Earlier today, I had the privilege to deliver the opening keynote at the Mobile 2.0 event in San Francisco. This posting consists of a copy of the remarks I prepared.

The view from 2013

My topic is Open Source, as a key catalyst for Mobile Innovation 2.0.

Let’s start by fast forwarding five years into the future. Imagine that we are gathered for the 2013 “Mobile 2.0” conference – though the name may have changed by that time, perhaps to Mobile 3.0 or even Mobile 4.0 – and perhaps the conference will be taking place virtually, with much less physical transportation involved.

Five years into the future, we may look back at the wonder devices of today, 2008: the apparently all-conquering iPhone, the Android G1, the Nokia E71, the latest and greatest phones from RIM, Windows Mobile, and so on: all marvellous devices, in their own ways. From the vantage point of 2013, I expect that our thoughts about these devices will be: “How quaint! How clunky! How slow! How did we put up with all the annoyances and limitations of these devices?”

This has happened before. When the first Symbian OS smartphones reached the market in 2002 – the Nokia 7650, and the Sony Ericsson P800 – they received rave reviews in many parts of the world. These Symbian smartphones were celebrated at the time as breakthrough devices with hitherto unheard of capabilities, providing great user experiences. It is only in retrospect that expectations changed and we came to see these early devices as quaint, clunky, and slow. It will be the same with today’s wonder phones.

Super smart phones

That’s because the devices of five years time will (all being well) be so much more capable, so much slicker, so much more usable, and so much more enchanting than today’s devices. If today’s devices are smart phones, the devices of 2013 will be super smart phones.

These devices will be performing all kinds of intelligent analysis of data they are receiving through their sensors: their location and movement sensors, their eyes – that is, their cameras – and their ears – that is, their always-on recording devices.

They’ll also have enormous amounts of memory – both on-board and on-network. Based on what they’re sensing, and on what they know, and on their AI (artificial intelligence) algorithms, they’ll be guiding us and informing us about all the things that are important to us. They’ll become like our trusted best friends, in effect whispering insight into our ears.

We can think of these devices as like our neo neo-cortex. Just as primates and especially humans have benefited from the development of the neo-cortex, as the newest part of our brains in evolutionary terms, so will users of super smartphones benefit from the enhanced memory, calculation powers, and social networking capabilities of this connected neo neo-cortex.

In simple terms, these devices can be seen as adding (say) 20 points to our IQs – perhaps more. If today’s smartphones can make their users smarter, the super smartphones of 2013 can make their users super smart.

Solving hard problems

That’s the potential. But the reality is that it’s going to be tremendously hard to achieve that vision. It’s going to require an enormously sophisticated, enormously capable, mobile operating system.

Not everyone shares my view that operating systems are that important. I sometimes hear the view expressed that improvements in hardware, or the creation of new managed code environments, somehow reduce the value of operating systems, making them into a commodity.

I disagree. I strongly disagree. It’s true that improvements in both hardware and managed code environments have incredibly important roles to play. But there remain many things that need to be addressed at the operating system level.

Here are just some of the hard tasks that a mobile operating system has to solve:

  • Seamless switching between different kinds of wireless network – something that Symbian’s FreeWay technology does particularly well;
  • Real-time services, not only handling downloads of extremely large quantities of data, but also manipulating that data in real time – decompressing it, decrypting it, displaying it on screen, storing it in the file system, etc;
  • All this must happen without any jitter or delay – even though there may be dozens of different applications and services all talking at the same time to several different wireless networks;
  • All this rich functionality must be easily available to third party developers;
  • However, that openness of developer access must coexist with security of the data on the device and the integrity of the wireless networks;
  • And, all this processing must take place without draining the batteries on the device;
  • And without bamboozling the user due to the sheer scale and complexity of what’s actually happening;
  • And all this must be supported, not just for one device, but in a way that can be customised and altered, supporting numerous different form factors and usage models, without fragmenting the platform.

Finally, please note that all this is getting harder and more complex: every year, the amount of software in a top-range phone approximately doubles. So in five years, there's roughly 32 times as much software in the device; in ten years, there could be around 1000 times as much.
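The arithmetic behind that claim is simple compounding; a minimal sketch (the starting figure of 10 million lines is a notional assumption, not a Symbian statistic):

```python
# Hypothetical illustration: if the software in a top-range phone
# doubles every year, the codebase grows exponentially.
def projected_size(base_mloc: float, years: int, doubling_per_year: float = 2.0) -> float:
    """Projected code size (in millions of lines) after `years` of annual doubling."""
    return base_mloc * doubling_per_year ** years

# Starting from a notional 10 million lines of code today:
print(projected_size(10, 5))   # 320.0 -> 32x after five years
print(projected_size(10, 10))  # 10240.0 -> roughly 1000x after ten years
```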

Engaging a huge pool of productive developers

With so many problems that need to be solved, I will say this. The most important three words to determine the long-term success of any mobile operating system are: Developers, Developers, Developers. To repeat: the most important three words for the success of the Symbian platform are: Developers, Developers, Developers.

We need all sorts and shapes and sizes of developers – because, as I said, there are so many deep and complex problems to be solved, as the amount of software in mobile phone platforms grows and grows.

No matter how large and capable any one organisation is, the number of skilled developers inside that organisation is only a small fraction of the number outside. So it comes down to enabling a huge pool of productive and engaged developers, outside the organisation, to work alongside the original developers of the operating system – with speed, creativity, skill, and heartfelt enthusiasm. That’s how we can collectively build the successful super smart phones of the future.

Just two weeks ago, the annual Symbian Smartphone Show put an unprecedented focus on developers. We ran a Mobile DevFest as an integral part of the main event. We announced new developer tools, such as the Symbian Analysis Workbench. We will shortly be releasing new sets of developer APIs (application programming interfaces) in new utility libraries, to simplify interactions with parts of the Symbian programming system that have been found to cause the most difficulty – such as text descriptors, two phase object construction and cleanup, and active objects.

The critical role of open source

But the biggest step to engage and enthuse larger numbers of developers is to move the entire Symbian platform into open source.

This will lower the barriers to entry – in fact, it will remove them. It will allow much easier study of the source code and, critically, much easier participation in the research and creation of new Symbian platform software. We are expecting a rapid increase in collaboration and innovation, because there will be more developers involved, and more types of developers involved.

That’s why the title of my talk this morning is “Open Source: Catalyst for Mobile Innovation 2.0”. The “2.0” part means greater collaboration and participation than ever before: people not just using the code or looking at the code, but recommending changes to it and even contributing very sizeable chunks of new and improved code. The Open Source part is one of the key enablers for this transformation.

Necessary, but not sufficient

However, Open Source is only one of the necessary enablers. Open Source is a necessary but not sufficient ingredient. There are two others:

  1. A stable and mature software base, with reliable processes of integration, which I’ll talk more about in a moment;
  2. Mastery of the methods of large-scale Agile development, allowing rapid response to changing market needs.

Fragmentation inside an operating system

Here’s the problem. Fragmentation is easy, but Integration is hard.

Fragmentation means that different teams or different customers pull the software in different directions. In the heat of development in a fast-moving market, you end up with different branches that are incompatible with each other and can't easily be joined back together. The result is that solutions created by developers for one of these branches fail to work on the others. A great deal of time can be wasted debugging these issues.

Here, I speak from bitter experience. During the rapid growth days of Symbian, we lost control of aspects of compatibility in our own platform – despite our best efforts. For example, we had one version called 7.0s and another called 7.0, but lots of partners reported huge problems moving their solutions between these two versions. Because of resulting project delays, major phones failed to come to the market. It was a very painful period.

Nowadays, in the light of our battle-hardened experience, Symbian OS is a much more mature and stable platform, and it is surrounded and supported by an ecosystem of very capable partners. In my view, we have great disciplines in compatibility management and in codeline management.

The result is that we have much better control over the integration of our platform. That puts us in a better position to handle rapid change and multiple customer input. That means we can take good advantage of the creativity of open source, rather than being pulled apart by the diverse input of open source. Other platforms may find things harder. For them, open source may bring as many problems as it brings solutions.

Fragmentation across operating systems

This addresses the fragmentation inside a single operating system. But the problem remains of fragmentation across different operating systems.

Although competition can be healthy, too many operating systems result in developers spreading their skills too thinly. The mobile industry recognises that it needs to consolidate on a smaller number of mobile operating systems moving forwards. The general view is that there needs to be consolidation on around three or (at the most) four advanced mobile operating systems. Otherwise the whole industry ends up over-stretched.

So, which will be the winning mobile operating systems, over the next five years? In the end, it will come down to which phones are bought in large quantities by end users. In turn, the choices offered to end users are strongly influenced by decisions by phone manufacturers and network operators about which operating systems to prefer. These companies have four kinds of issues in their minds, which they want to see mobile operating systems solve:

  • Technical issues, such as battery life, security, and performance, as well as rich functionality;
  • Commercial issues, such as cost, and the ability to add value by differentiation;
  • Political issues, in which there can be a perception that the future evolution of an operating system might be controlled by a company or organisation with its own divergent motives;
  • Reliability issues, such as a proven track record for incrementally delivering new functionality at high quality levels in accordance with a pre-published roadmap.

A time for operating systems to prove themselves

Again, which will be the winning operating systems, over the next five years? My answer is that it is slightly too early to say for sure. The next 12-18 months will be a time of mobile operating systems proving themselves. Perhaps three or four operating systems will come through the challenge, and will attract greater and greater support, as customers stop hedging their bets. Others will be de-selected (or merged).

For at least some of the winning smartphone operating systems, there will be an even bigger prize, in the subsequent 2-3 years. Provided these operating systems are sufficiently scalable, they will become used in greater and greater portions of all phones (not just smartphones and high-end feature phones).

Proving time for the Symbian Foundation platform

Here’s how I expect the Symbian platform to prove itself in the next 12-18 months. Our move to open source was announced in June this year, and we said it could take up to two years to complete it. Since then, planning has been continuing, at great speed. Lee Williams, current head of the S60 organisation in Nokia, and formerly of Palm Inc and Be Inc, has been announced as the Executive Director of the Symbian Foundation.

The Foundation will make its first software release midway through the first half of 2009. Up till that point, access to internal Symbian OS source code is governed by our historical CustKit Licence and DevKit Licence. There’s a steep entry price, around 30,000 dollars per year, and a long contract to sign to gain access, so the community of platform developers has been relatively small. From the first Symbian Foundation release, that will change.

The source code will be released under two different licenses. Part will be open source, under the Eclipse Public Licence. This part has no licence fee, and is accessible to everyone. The other part will be community source, under an interim Symbian Foundation Licence. This is also royalty free, but there is a small contract that companies have to sign, and a small annual fee of 1,500 dollars. I expect a large community to take advantage of this.

This interim community source part will diminish, in stages, until it vanishes around the middle of 2010. By then, everything will be open source. We can’t get there quicker because there are 40 million lines of source code altogether, and we need to carry out various checks and cleanups and contract renegotiations first. But we’ll get there as quickly as we can.

There’s one other important difference worth highlighting. It goes back to the theme of reducing fragmentation. Historically, there have been three different UIs for Symbian OS: S60, UIQ, and MOAP(S) used in Japan. But going forwards, there will only be one UI system: S60, which is nowadays flexible enough to support the different kinds of user models for which the other UI systems were initially created.

To be clear, developers don’t have to wait until 2010 before experimenting with this software system. Software written to the current S60 SDK will run fine on these later releases. We’ll continue to make incremental compatible releases throughout this time period.

What you should also see over this period is that the number of independent contributors will increase. It won’t just be Nokia and Symbian employees who are making contributions. It will be like the example of the Eclipse Foundation, in which the bulk of contributions initially came from just one company, IBM, but nowadays there’s a much wider participation. So also for the Symbian Foundation, contributions will be welcome based on merit. And the governance of the Foundation will also be open and transparent.

The view from 2013

I’ll close by returning to the vision for 2013. Inside Symbian, we’ve long had the vision that Symbian OS will be the most widely used software platform on the planet. By adopting the best principles of open source, we expect we will fulfil this vision. We expect there will in due course be, not just hundreds of millions, but billions of great devices, all running our software. And we’ll get there because we’ll be at the heart of what will be the most vibrant software ecosystem on the planet – the mobile innovation ecosystem. Thank you very much.

31 October 2008

Watching Google watching the world

Filed under: books, collaboration, Google, infallibility, mobile data — David Wood @ 6:07 pm

If there was a prize for the best presentation at this week’s Informa “Handsets USA” forum in San Diego, it would have to go to Sumit Agarwal, Product Manager for Mobile from Google. Although there were several other very good talks there, Sumit’s was in a class of its own.

In the first place, Sumit had the chutzpah to run his slides directly on a mobile device – an iPhone – with a camera relaying the contents of the mobile screen to the video projector. Second, the presentation included a number of real-time demos – which worked well, and even the ways in which they failed to work perfectly became a source of more insight for the audience (I’ll come back to this point later). The demos were spread among a number of different mobile devices: an Android G1, the iPhone, and a BlackBerry Bold. (Sumit rather cheekily said that the main reason he carried the Bold was for circumstances in which the G1 and the iPhone run out of battery power.)

One reason the talk oozed authority was because Sumit could dig into actual statistics, collected on Google’s servers.

For example, the presentation included a graph showing the rate of Google search enquiries from mobile phones on different (anonymised) North American network operators. In September 2007, one of the lines started showing an astonishing rhythm, with rapid fluctuations in which the rate of mobile search enquiries jumped up sevenfold – before dropping down again a few days later. The pattern kept repeating, on a weekly basis. Google investigated, and found that the network operator in question had started an experiment with “free data weekends”: data usage would be free of charge on Saturday and Sunday. As Sumit pointed out:

  • The sharp usage spikes showed the latent demand of mobile users for carrying out search enquiries – a demand that was previously being inhibited by fear of high data charges;
  • Even more interesting, this line on the graph, whilst continuing to fluctuate drastically at weekends, also showed a gradual overall upwards curve, finishing up with data usage significantly higher than the national average, even away from weekends;
  • The takeaway message here is that “users get hooked on mobile data”: once they discover how valuable it can be to them, they use it more and more – provided (and this is the kicker) the user experience is good enough.

Another interesting statistic involved the requests received by Google’s servers for new “map tiles” to provide to Google maps applications. Sumit said that, every weekend, the demand from mobile devices for map tiles reaches the same level as the demand from fixed devices. Again, this is evidence of strong user interest for mobile services.

As regards the types of textual search queries received: Google classifies all incoming search queries, into categories such as sports, entertainment, news, and so on. Sumit showed spider graphs for the breakdown of search queries into categories. The surprising thing is that the spider graph for mobile-originated search enquiries had a very similar general shape to that for search enquiries from fixed devices. In other words, people seem to want to search for the same sorts of things – in the same proportion of times – regardless of whether they are using fixed devices or mobile ones.
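That "similar shape" observation could be quantified by comparing the two category distributions directly, for instance with cosine similarity. A minimal sketch – the category names and proportions below are invented for illustration, not Google's actual data:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two distributions; values near 1.0 mean similar shapes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical breakdowns of search queries by category
# (sports, entertainment, news, shopping, travel):
fixed  = [0.18, 0.30, 0.22, 0.18, 0.12]   # fixed-device queries (invented)
mobile = [0.20, 0.28, 0.24, 0.16, 0.12]   # mobile-device queries (invented)

print(round(cosine_similarity(fixed, mobile), 3))  # a value close to 1.0
```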

It is by monitoring changes in server traffic that Google can determine the impacts of various changes in their applications – and decide where to prioritise their next efforts. For example, when the My Location feature was added to Google’s Mobile Maps application, it had a “stunning impact” on the usage of mobile maps. Apparently (though this could not be known in advance), many users are fascinated to track how their location updates in near real-time on map displays on their mobile devices. And this leads to greater usage of the Google Maps product.

Interspersed among the demos and the statistics, Sumit described elements of Google’s underlying philosophy for success with mobile services:

  • “Ignore the limitations of today”: don’t allow your thinking to be constrained by the shortcomings of present-day devices and networks;
  • “Navigate to where the puck will be”: have the confidence to prepare services that will flourish once the devices and networks improve;
  • “Arm users with the data to make decisions”: instead of limiting what users are allowed to do on their devices, provide them with information about what various applications and services will do, and leave it to the users to decide whether they will install and use individual applications;
  • “Dare to delight” the user, rather than always seeking to ensure order and predictability at all times;
  • “Accept downside”, when experiments occasionally go wrong.

As an example of this last point, there was an amusing moment during one of the (many) demos in the presentation, when two music-playing applications each played music at the same time. Sumit had just finished demoing the remarkable TuneWiki, which allows users to collaborate in supplying, sharing, and correcting lyrics to songs, for a Karaoke-like mobile experience without users having to endure the pain of incorrect lyrics. He next showed an application that searched on YouTube for videos matching a particular piece of music. But TuneWiki continued to play music through the phone speakers whilst the second application was also playing music. Result: audio overlap. Sumit commented that an alternative design philosophy by Google might have ensured that no such audio overlap could occur. But such a constraint would surely impede the wider flow of innovation in mobile applications.

And there was a piece of advice for application developers: “emphasise simplicity”. Sumit demoed the “AroundMe” application by TweakerSoft, as an illustration of how a single simple idea, well executed, can result in large numbers of downloads. (Sumit commented: “this app was written by a single developer … who has probably quintupled his annual income by doing this”.)

Google clearly have a lot going for them. Part of their success is no doubt down to the technical brilliance of their systems. The “emphasise simplicity” message has helped a great deal too. Perhaps their greatest asset is how they have been able to leverage all the statistics their enormous server farms have collected – not just statistics about links between websites, but also statistics about changes in user activity. By watching the world so closely, and by organising and analysing the information they find in it, Google are perhaps in a unique position to identify and improve new mobile services.

Just as Google has benefited from watching the world, the rest of the industry can benefit from watching Google. Happily, there’s already a great deal of information available about how Google operates. Anyone concerned about whether Google might eat their lunch can become considerably wiser from taking the time to read some of the fascinating books that have been written about both the successes and (yes) the failures of this company:

I finished reading the Stross book a couple of weeks ago. I found it an engrossing easy-to-read account of many up-to-date developments at Google. It confirms that Google remains an utterly intriguing company:

  • For example, one thought-provoking discussion was the one near the beginning of the book, about Google, Facebook, and open vs. closed;
  • I also valued the recurring theme of “algorithm-driven search” vs. “human-improved search”.

It was particularly interesting to read what Stross had to say about some of Google’s failures – eg Google Answers and Google Video (and arguably even YouTube), as a balance to its better-known string of successes. It’s a reminder that no company is infallible.

Throughout most of the first ten years of Symbian’s history, commentators kept suggesting that it was only a matter of time before the mightiest software company of that era – Microsoft – would sweep past Symbian in the mobile phone operating system space (and, indeed, would succeed – perhaps at the third attempt – in every area they targeted). Nowadays, commentators often suggest the same thing about Google’s Android solution.

Let’s wait and see. And in any case, I personally prefer to explore the collaboration route rather than the head-on compete route. Just as Microsoft’s services increasingly run well on Symbian phones, Google’s services can likewise flourish there.

29 October 2008

A market for different degrees of openness

Filed under: openness, regulation, Wireless Influencers — David Wood @ 3:52 am

To encourage participants to speak candidly, the proceedings at the Rutberg “Wireless Influencers” conferences are held away from the prying eyes of journalists. A few interesting ideas popped up during the discussions at the 2008 event over the last two days – but because of the confidentiality rules, I’m not able to name the people who raised these ideas (so I can’t give credit where credit is due).

The common theme of these ideas is the clash of openness and regulation – and (in some cases) the attempt to find creative solutions to this clash.

The first example arose during a talk by a representative from a major operator. The talk described the runaway success one of their products was experiencing in a third world country. This product involves the use of mobile phones to transfer money. The speaker said that the main reason this product could not be deployed in more developed countries (to address use cases like simplifying the payment of money to a teenage babysitter, or transferring cash to your children) is the dead hand of financial regulation: banks aren’t keen to allow operators to take over some of the functions that have traditionally been restricted to banks, so operators are legally barred from deploying these applications.

I found this ironic. Normally operators are the companies that are criticised for setting up regulatory systems that have the effect of maintaining their control over various important business processes (and thereby preserving their profits). But in this case, it was an operator who was criticising another part of industry for self-interestedly sheltering behind regulations.

Later in the day, one of the streams at the event discussed whether operators could ever allow users to install whatever applications they want, on their phones. The analogy was made with the world of the PC: the providers of network services for PCs generally have no veto over the applications which users choose to install. On the other hand, in some enterprise situations, a corporate IS department may well wish to impose that kind of control. In other words, for PCs, there is a range of different degrees of openness, depending on the environment. So, could a similar range of different degrees of openness be set up for mobile phones?

The idea here is that several different networks could form. In some, the network operator would impose restrictions on the applications that can be installed on the phones. In others, the network operators would be more permissive. In the second kind of network, users would be told that it was their own responsibility to deal with any unintended consequences from applications they installed.

Ideally, a kind of market would be formed, for networks that had different degrees of openness. Then we could let normal market dynamics determine which sort of network would flourish.

Could such a market actually be formed? Could closed networks and open networks co-exist? It seems worth thinking about.

And here’s one more twist – from a keynote discussion on the second day of the event. Rather than a network operator (or some other central certification authority) deciding which applications are suitable for installation on users’ phones, how about using the power of community ratings to push bad applications way down the list of available applications?

That’s an intriguing Web 2.0 kind of idea. On a network operating with this principle, most users would only see apps that had already received positive reviews. Apps that had bad consequences would instead receive bad reviews – and would therefore disappear off the bottom of the list of apps displayed in response to search queries. “Just like on YouTube”.
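The mechanics of that idea can be sketched in a few lines. This is a minimal sketch under assumed data (the app names, the rating threshold, and the "too new to judge" rule are all my own illustrative choices, not anything proposed at the event):

```python
# Community-rating visibility: poorly rated apps sink out of the list users
# see, with no central certification authority banning them.
def visible_apps(apps, min_rating=3.0, min_reviews=5):
    """Rank apps by rating; hide badly rated apps once enough reviews exist."""
    ranked = sorted(apps, key=lambda a: a["rating"], reverse=True)
    return [a["name"] for a in ranked
            if a["rating"] >= min_rating or a["num_reviews"] < min_reviews]

catalogue = [
    {"name": "GoodApp", "rating": 4.6, "num_reviews": 120},
    {"name": "NewApp",  "rating": 2.0, "num_reviews": 2},    # too new to judge yet
    {"name": "BadApp",  "rating": 1.4, "num_reviews": 300},  # community has spoken
]
print(visible_apps(catalogue))  # ['GoodApp', 'NewApp']
```

The design choice worth noting is the `min_reviews` escape hatch: a brand-new app isn't punished for having no reviews yet, which keeps the market open to newcomers while still letting bad actors "disappear off the bottom of the list".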

26 October 2008

The Singularity will go mainstream

Filed under: AGI, brain simulation, cryonics, Moore's Law, robots, Singularity — David Wood @ 1:49 pm

The concept of the coming technological singularity is going to enter mainstream discourse, and won’t go away. It will stop being something that can be dismissed as freaky or outlandish – something that is of interest only to marginal types and radical thinkers. Instead, it’s going to become something that every serious discussion of the future is going to have to contemplate. Writing a long-term business plan – or a long-term political agenda – without covering the singularity as one of the key topics, is increasingly going to become a sign of incompetence. We can imagine the responses, just a few years from now: “Your plan lacks a section on how the onset of the singularity is going to affect the take-up of your product. So I can’t take this proposal seriously”. And: “You’ve analysed five trends that will impact the future of our company, but you haven’t included the singularity – so everything else you say is suspect.”

In short, that’s the main realisation I reached by attending the Singularity Summit 2008 yesterday, in the Montgomery Theater in San Jose. As the day progressed, the evidence mounted up that the arguments in favour of the singularity will be increasingly persuasive, to wider and wider groups of people. Whether or not the singularity will actually happen is a slightly different question, but it’s no longer going to be possible to dismiss the concept of the singularity as irrelevant or implausible.

To back up my assertion, here are some of the highlights of what was a very full day:

Intel’s CTO and Corporate VP Justin Rattner spoke about “Countdown to Singularity: accelerating the pace of technological innovation at Intel”. He described a series of technological breakthroughs that would be likely to keep Moore’s Law operational until at least 2020, and he listed ideas for how it could be extended even beyond that. Rattner clearly has a deep understanding of the technology of semiconductors.

Dharmendra Modha, the manager of IBM’s cognitive computing lab at Almaden, explained how his lab had already used IBM supercomputers to simulate an entire rat brain, with the simulation running at one tenth of real-time speed. He explained his reasons for expecting that his lab should be able to simulate an entire human brain, running at full speed, by 2018. This was possible as a result of the confluence of “three hard disruptive trends”:

  1. Neuroscience has matured
  2. Supercomputing meets the brain
  3. Nanotechnology meets the brain.

Cynthia Breazeal, Associate Professor of Media Arts and Sciences, MIT, drew spontaneous applause from the audience part-way through her talk, by showing a video of one of her socially responsive robots, Leonardo. The video showed Leonardo acting on beliefs about what various humans themselves believed (including beliefs that Leonardo could deduce were false). As Breazeal explained:

  • Up till recently, robotics has been about robots interacting with things (such as helping to manufacture cars)
  • In her work, robotics is about robots interacting with people in order to do things. Because humans are profoundly social, these robots will also have to be profoundly social – they are being designed to relate to humans in psychological terms. Hence the expressions of emotion on Leonardo’s face (and the other body language).

Marshall Brain, founder of “How Stuff Works”, also spoke about robots, and the trend for them to take over work tasks previously done by humans: McDonald’s waitresses, Wal-Mart shop assistants, vehicle drivers, construction workers, teachers…

James Miller, Associate Professor of Economics, Smith College, explicitly addressed the topic of how increasing belief in the likelihood of an oncoming singularity would change people’s investment decisions. Once people realise that, within (say) 20-30 years, the world could be transformed into something akin to paradise, with much greater lifespans and abundant opportunities for extremely rich experiences, many will take much greater care to live long enough to reach that event. Interest in cryonics is likely to boom, since people can reason that their bodies will only need to be vitrified for a short period of time, rather than having to trust their descendants to look after them for unknown hundreds of years. People will shun dangerous activities. They’ll avoid locking money into long-term investments. And they’ll abstain from lengthy training courses (for example, to master a foreign language) if they believe that technology will shortly render all the sweat of that arduous learning irrelevant.

Not every speaker was optimistic. Well-known author and science journalist John Horgan gave examples of where the progress of science and technology has been, not exponential, but flat:

  • nuclear fusion
  • ending infectious diseases
  • Richard Nixon’s “war on cancer”
  • gene therapy treatments
  • treating mental illness.

Horgan chided advocates of the singularity for their use of “rhetoric that is more appropriate to religion than science” – thereby risking damaging the standing of science at a time when science needs as much public support as it can get.

Ray Kurzweil, author of “The Singularity is Near”, responded to this by agreeing that not every technology progresses exponentially. However, those that become information sciences do experience significant growth. As medicine and health increasingly become digital information sciences, they are experiencing the same effect. Although in the past I’ve thought that Kurzweil sometimes overstates his case, on this occasion I thought he spoke with clarity and restraint, and with good evidence to back up his claims. He also presented updated versions of the graphs from his book. In the book, these graphs tended to stop around 2002. The slides Kurzweil showed at the summit continued up to 2007. It does appear that the rate of progress with information sciences is continuing to accelerate.

Earlier in the day, science fiction author and former mathematics and computer science professor Vernor Vinge gave his own explanation for this continuing progress:

Around the world, in many fields of industry, there are hundreds of thousands of people who are bringing the singularity closer, through the improvements they’re bringing about in their own fields of research – such as enhanced human-computer interfaces. They mainly don’t realise they are advancing the singularity – they’re not working to an agreed overriding vision for their work. Instead, they’re doing what they’re doing because of the enormous incremental economic benefit of their work.

Under questioning by CNBC editor and reporter Bob Pisani, Vinge said that he sticks with the forecast he made many years ago, that the singularity would (“barring major human disasters”) happen by 2030. Vinge also noted that rapidly improving technology made the future very hard to predict with any certainty. “Classic trendline analysis is seriously doomed.” Planning should therefore focus on scenario evaluation rather than trend lines. Perhaps unsurprisingly, Vinge suggested that more forecasters should read science fiction, where scenarios can be developed and explored. (Since I’m midway through reading and enjoying Vinge’s own most recent novel, “Rainbows End” – set in 2025 – I agree!)

Director of Research at the Singularity Institute, Ben Goertzel, described a staircase of potential applications for the “OpenCog” system of “Artificial General Intelligence” he has been developing with co-workers (partially funded by Google, via the Google Summer of Code):

  • Teaching virtual dogs to dance
  • Teaching virtual parrots to talk
  • Nurturing virtual babies
  • Training virtual scientists that can read vast swathes of academic papers on your behalf
  • And more…

Founder and CSO of Innerspace Foundation, Pete Estep, gave perhaps one of the most thought-provoking presentations. The goal of Innerspace is, in short, to improve brain functioning. In more detail, “To establish bi-directional communication between the mind and external storage devices.” Quoting from the FAQ on the Innerspace site:

The IF [Innerspace Foundation] is dedicated to the improvement of human mind and memory. Even when the brain operates at peak performance learning is slow and arduous, and memory is limited and faulty. Unfortunately, other of the brain’s important functions are similarly challenged in our complex modern world. As we age, these already limited abilities and faculties erode and fail. The IF supports and accelerates basic and applied research and development for improvements in these areas. The long-term goal of the foundation is to establish relatively seamless two-way communication between people and external devices possessing clear data storage and computational advantages over the human brain.

Estep explained that he was a singularity agnostic: “it’s beyond my intellectual powers to decide if a singularity within 20 years is feasible”. However, he emphasised that it is evident to him that “the singularity might be near”. And this changes everything. Throughout history, and extending round the world even today, “there have been too many baseless fantasies and unreasonable rationalisations about the desirability of death”. The probable imminence of the singularity will help people to “escape” from these mind-binds – and to take a more vigorous and proactive stance towards planning and actually building desirable new technology. The singularity that Estep desires is one, not of super-powerful machine intelligence, but one of “AI+BCI: AI combined with a brain-computer interface”. This echoed words from robotics pioneer Hans Moravec that Vernor Vinge had reported earlier in the day:

“It’s not a singularity if you are riding the curve. And I intend to ride the curve.”

On the question of how to proactively improve the chances for beneficial technological development, Peter Diamandis spoke outstandingly well. He’s the founder of the X-Prize Foundation. I confess I hadn’t previously appreciated the scale of this Foundation’s accomplishments. It was an eye-opener – as, indeed, was the whole day.

