
8 May 2011

Future technology: merger or trainwreck?

Filed under: AGI, computer science, futurist, Humanity Plus, Kurzweil, malware, Moore's Law, Singularity — David Wood @ 1:35 pm

Imagine.  You’ve been working for many decades, benefiting from advances in computing.  The near miracles of modern spreadsheets, Internet search engines, collaborative online encyclopaedias, pattern recognition systems, dynamic 3D maps, instant language translation tools, recommendation engines, immersive video communications, and so on, have been steadily making you smarter and increasing your effectiveness.  You  look forward to continuing to “merge” your native biological intelligence with the creations of technology.  But then … bang!

Suddenly, much faster than we expected, a new breed of artificial intelligence is bearing down on us, like a huge intercity train rushing forward at several hundred kilometres per hour.  Is this the kind of thing you can easily hop onto, and incorporate into your own evolution?  Care to stand in front of this train, sticking out your thumb to try to hitch a lift?

This image comes from a profound set of slides used by Jaan Tallinn, one of the programmers behind Kazaa and a founding engineer of Skype.  Jaan was speaking last month at the Humanity+ UK event which reviewed the film “Transcendent Man” – the film made by director Barry Ptolemy about the ideas and projects of serial inventor and radical futurist Ray Kurzweil.  You can find a video of Jaan’s slides on blip.tv, and videos (but with weaker audio) of talks by all five panelists on KoanPhilosopher’s YouTube channel.

Jaan was commenting on a view that was expressed again and again in the Kurzweil film – the view that humans and computers/robots will be able to merge, into some kind of hybrid “post-human”:

This “merger” viewpoint has a lot of attractions:

  • It builds on the observation that we have long co-existed with the products of technology – such as clothing, jewellery, watches, spectacles, heart pacemakers, artificial hips, cochlear implants, and so on
  • It provides a reassuring answer to the view that computers will one day be much smarter than (unmodified) humans, and that robots will be much stronger than (unmodified) humans.

But this kind of merger presupposes that the pace of improvement in AI algorithms will remain slow enough that we humans can remain in charge.  In short, it presupposes what people call a “soft take-off” for super-AI, rather than a sudden “hard take-off”.  In his presentation, Jaan offered three arguments in favour of a possible hard take-off.

The first argument is a counter to a counter.  The counter-argument, made by various critics of the concept of the singularity, is that Kurzweil’s views on the emergence of super-AI depend on the continuation of exponential curves of technological progress.  Since few people believe that these exponential curves really will continue indefinitely, the whole argument is suspect.  The counter to the counter is that the emergence of super-AI makes no assumption about the shape of the curve of progress.  It just depends upon technology eventually reaching a particular point – namely, the point where computers are better than humans at writing software.  Once that happens, all bets are off.

The second argument is that getting the right algorithm can make a tremendous difference.  Computer performance isn’t just dependent on improved hardware.  It can, equally, be critically dependent upon finding the right algorithms.  And sometimes the emergence of the right algorithm takes the world by surprise.  Here, Jaan gave the example of the unforeseen announcement in 1993 by mathematician Andrew Wiles of a proof of the centuries-old Fermat’s Last Theorem.  What Andrew Wiles did for the venerable problem of Fermat’s last theorem, another researcher might do for the even more venerable problem of superhuman AI.

The third argument is that AI researchers are already sitting on what can be called a huge “hardware overhang”:

As Jaan states:

It’s important to note that with every year the AI algorithm remains unsolved, the hardware marches to the beat of Moore’s Law – creating a massive hardware overhang.  The first AI is likely to find itself running on a computer that’s several orders of magnitude faster than needed for human level intelligence.  Not to mention that it will find an Internet worth of computers to take over and retool for its purpose.
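As a rough, purely illustrative calculation of my own (assuming a Moore's-Law-style doubling of hardware capability every 18 months or so, rather than any figure from Jaan's talk), the overhang grows multiplicatively with every year the algorithm problem stays unsolved:

```python
# Illustrative only: how a "hardware overhang" accumulates if hardware capability
# doubles every ~18 months while the AI algorithm problem remains unsolved.

def overhang_factor(years_unsolved, doubling_period_months=18):
    """Factor by which available hardware exceeds what was sufficient at year zero."""
    doublings = years_unsolved * 12 / doubling_period_months
    return 2 ** doublings

for years in (5, 10, 15, 20):
    print(f"after {years:2d} years: roughly {overhang_factor(years):>9,.0f}x more hardware")
```

After fifteen to twenty years of delay, that factor reaches three or four orders of magnitude – which is the sense in which the first AI could find itself running on hardware far faster than it strictly needs.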

Imagine.  The worst set of malware so far created – exploiting a combination of security vulnerabilities, other software defects, and social engineering.  How quickly that can spread around the Internet.  Now imagine the author of that malware being 100 times smarter.  Human users will find themselves almost unable to resist clicking on tempting links and unthinkingly providing passwords to screens that look identical to the ones they were half-expecting to see.  Vast computing resources will quickly become available to the rapidly evolving, intensely self-improving algorithms.  It will be the mother of all botnets, ruthlessly pursuing whatever are the (probably unforeseen) logical conclusions of the software that gave it birth.

OK, so the risk of hard take-off is very difficult to estimate.  At the H+UK meeting, the panelists all expressed significant uncertainty about their predictions for the future.  But that’s not a reason for inaction.  If we thought the risk of super-AI hard take-off in the next 20 years was only 5%, that would still merit deep thought from us.  (Would you get on an airplane if you were told the risk of it plummeting out of the sky was 5%?)

I’ll end with another potential comparison, which I’ve written about before.  It’s another example about underestimating the effects of breakthrough new technology.

On 1st March 1954, the US military performed their first test of a dry fuel hydrogen bomb, at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.
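Written out in symbols (my summary of the passage above), the two reactions involved are the neutron-induced break-up of lithium-7, followed by deuterium–tritium fusion:

```latex
{}^{7}\mathrm{Li} + n \;\rightarrow\; {}^{4}\mathrm{He} + {}^{3}\mathrm{H} + n
\qquad
{}^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He} + n
```

The tritium from the first reaction feeds the second, and the resulting high-energy neutrons go on to fission the uranium tamper – the chain of effects described in the passage above.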

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – perhaps by a factor of thousands rather than by a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  For example, imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as several thousand people.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  The more powerful our technology becomes, the more drastic the unintended consequences become.  Merger or trainwreck?  I believe the outcome is still wide open.

19 May 2010

Chapter finished: A journey with technology

Five more days have passed, and I’ve completed another chapter draft (see snapshot below) of my proposed new book.

This takes me up to 30% of what I hope to write:

  • I’ve drafted three out of ten planned chapters.
  • The wordcount has reached 15,000, out of a planned total of 50,000.

After this, I plan to dig more deeply into specific technology areas.  I’ll be moving further out of my comfort zone.  First will be “Health”.  Fortuitously, I spent today at an openMIC meeting in Bath, entitled “i-Med: Serious apps for mobile healthcare”.  That provided me with some useful revision!

========

3. A journey with technology

<Snapshot of material whose master copy is kept here>

<< Previous chapter <<

Here’s the key question I want to start answering in this chapter: how quickly can technology progress in the next few decades?

This is far from being an academic question. At heart, I want to know whether it’s feasible for that progress to be quick enough to provide technological solutions to the calamitous issues and huge opportunities described in the first chapter of this book. The progress must be quick enough, not only for core technological research, but also for productisation of that technology into the hands of billions of consumers worldwide.

For most of this book, I’ll be writing about technologies from an external perspective. I have limited direct experience with, for example, the healthcare industry and the energy industry. What I have to say about these topics will be as, I hope, an intelligent outside observer. But in this chapter, I’m able to adopt an internal perspective, since the primary subject matter is the industry where I worked for more than twenty years: the smartphone industry.

In June 1988, I started work in London at Psion PLC, the UK-based manufacturer of electronic organisers. I joined a small team working on the software for a new generation of mobile computers. In the years that followed, I spent countless long days, long nights and (often) long weekends architecting, planning, writing, integrating, debugging and testing Psion’s software platforms. In due course, Psion’s software would power more than a million PDAs in the “Series 3” family of devices. However, the term “PDA” was unknown in 1988; likewise for phrases like “smartphone”, “palmtop computer”, and “mobile communicator”. The acronym “PDA”, meaning “personal digital assistant”, was coined by Apple in 1992 in connection with their ambitious but flawed “Newton” project – long before anyone conceived of the name “iPhone”.

I first became familiar with the term “smartphone” in 1996, during early discussions with companies interested in using Psion’s “EPOC32” software system in non-PDA devices. After a faltering start, these discussions gathered pace. In June 1998, ten years after I had joined Psion, a group of Psion senior managers took part in the announcement of the formation of a new entity, Symbian Ltd, which had financial backing from the three main mobile phone manufacturers of the era – Ericsson, Motorola, and Nokia. Symbian would focus on the software needs of smartphones. The initial software, along with 150 employees led by a five-man executive team, was contributed by Psion. In the years that followed, I held Symbian executive responsibility, at different times, for Technical Consulting, Partnering, and Research. In due course, sales of devices based on Symbian OS exceeded 250 million units.

In June 2008 – ten more years later, to the day – another sweeping announcement was made. The source code of Symbian OS, along with that of the S60 UI framework and applications from Nokia, would become open source, and would be overseen by a new independent entity, the Symbian Foundation.

My views on the possibilities for radical improvements in technology as a whole are inevitably coloured by my helter-skelter experiences with Psion and Symbian. During these 20+ years of intense projects following close on each other’s heels, I saw at first hand, not only many issues with developing and productising technology, but also many issues in forecasting the development and productisation of technology.

For example, the initial June 1998 business plans for Symbian are noteworthy both for what we got right, and for what we got wrong.

3.1 Successes and shortcomings in predicting the future of smartphones

In June 1998, along with my colleagues on the founding team at Symbian, I strove to foresee how the market for smartphones would unfold in the years ahead. This forecast was important, as it would:

  • Guide our own investment decisions
  • Influence the investment decisions of our partner companies
  • Set the context for decisions by potential employees about whether or not to join Symbian (and whether or not to remain with Symbian, once they had joined).

Many parts of our vision turned out correct:

  • Interest grew strongly in computers with increased mobility, and in mobile phones with increased computing capability.
  • Sales of Symbian-powered mobile devices would, by the end of the first decade of the next century, be measured in 100s of millions.
  • Our phrase, “Smartphones for all”, which initially struck many observers as ridiculous, became commonplace: interest in smartphones stopped being the preserve of a technologically sophisticated minority, and became a mainstream phenomenon.
  • Companies in numerous industries realised that they needed strong mobile offerings, to retain their relevance.
  • Rather than every company developing its own smartphone platform, there were big advantages for companies to collaborate in creating shared standard platforms.
  • The attraction of smartphones grew with the availability of add-on applications that delivered functionality tailored to the needs of individual users.

Over the next decade, a range of new features became increasingly widespread on mobile phones, despite early scepticism:

  • Colour screens
  • Cameras – and video recorders
  • Messaging: SMS, simple email, rich email…
  • Web browsing: Google, Wikipedia, News…
  • Social networking: Facebook, Twitter, blogs…
  • Games – including multiplayer games
  • Maps and location-based services
  • Buying and selling (tickets, vouchers, cash).

By 2010, extraordinarily powerful mobile devices are in widespread use in almost every corner of the planet. An average bystander transported from 1998 to 2010 might well be astonished at the apparently near-magical capabilities of these ubiquitous devices.

On the other hand, many parts of our 1998 vision proved wrong.

First, we failed to foresee many of the companies that would be the most prominent in the smartphone industry by the end of the next decade. In 1998:

  • Apple seemed to be on a declining trajectory.
  • Google consisted of just a few people working in a garage. (Like Symbian, Google was founded in 1998.)
  • Samsung and LG were known to the Symbian team, but we decided not to include them on our initial list of priority sales targets, in view of their lowly sales figures.

Second, although our predictions of eventual sales figures for Symbian devices were broadly correct – namely 100s of millions – this was the result of two separate mistakes cancelling each other out:

  • We expected to have a higher share of the overall mobile phone market (over 50% – perhaps even approaching 100%).
  • We expected that overall phone market to remain at the level of 100s of millions per annum – we did not imagine it would become as large as a billion per year.

(A smaller-than-expected proportion of a larger-than-expected market worked out at around the same volume of sales.)

Third – and probably most significant for drawing wider lessons – we got the timescales significantly wrong. It took considerably longer than we expected for:

  • The first successful smartphones to become available
  • Next generation networks (supporting high-speed mobile data) to be widely deployed
  • Mobile applications to become widespread.

Associated with this, many pre-existing systems remained in place much longer than anticipated, despite our predictions that they would be unable to adapt to changing market demands:

  • RIM sold more and more BlackBerries, despite repeated concerns that their in-house software system would become antiquated.
  • The in-house software systems of major phone manufacturers, such as Nokia’s Series 40, likewise survived long past predicted “expiry” dates.

To examine what’s going on, it’s useful to look in more detail at three groups of factors:

  1. Factors accelerating growth in the smartphone market
  2. Factors restricting growth in the smartphone market
  3. Factors that can overcome the restrictions and enable faster growth.

Having reviewed these factors in the case of smartphone technology, I’ll then revisit the three groups of factors, with an eye to general technology.

3.2 Factors accelerating growth in the smartphone market

The first smartphone sales accelerator is decreasing price. Smartphones increase in popularity because of price reductions. As the devices become less expensive, more and more people can afford them. Other things being equal, a desirable piece of consumer electronics that has a lower cost will sell more.

The underlying cost of smartphones has been coming down for several reasons. Improvements in underlying silicon technology mean that manufacturers can pack more transistors onto the same area of silicon for the same cost, creating more memory and more processing power. There are also various industry scale effects. Companies who work with a mobile platform over a period of time gain the benefit of “practice makes perfect”, learning how to manage the supply chain, select lower price components, and assemble and manufacture their devices at ever lower cost.

A second sales accelerator is increasing reliability. With some exceptions (that have tended to fall by the wayside), smartphones have become more and more reliable. They start faster, have longer battery life, and need fewer resets. As such, they appeal to ordinary people in terms of speed, performance, and robustness.

A third sales accelerator is increasing stylishness. In the early days of smartphones, people would often say, “These smartphones look quite interesting, but they are a bit too big and bulky for my liking: frankly, they look and feel like a brick.” Over time, smartphones became smaller, lighter, and more stylish. In both their hardware and their software, they became more attractive and more desirable.

A fourth sales accelerator is increasing word of mouth recommendations. The following sets of people have all learned, from their own experience, good reasons why consumers should buy smartphones:

  • Industry analysts – who write reports that end up influencing a much wider network of people
  • Marketing professionals – who create compelling advertisements that appear on film, print, and web
  • Retail assistants – who are able to highlight attractive functionality in devices, at point of sale
  • Friends and acquaintances – who can be seen using various mobile services and applications, and who frequently sing the praises of specific devices.

This extra word of mouth exists, of course, because of a fifth sales accelerator – the increasing number of useful and/or entertaining mobile services that are available. This includes built-in services as well as downloadable add-on services. More and more individuals learn that mobile services exist which address specific problems they experience. This includes convenient mobile access to banking services, navigation, social networking, TV broadcasts, niche areas of news, corporate databases, Internet knowledgebases, tailored educational material, health diagnostics, and much, much more.

A sixth sales accelerator is increasing ecosystem maturity. The ecosystem is the interconnected network of companies, organisations, and individuals who create and improve the various mobile services and enabling technology. It takes time for this ecosystem to form and to learn how to operate effectively. However, in due course, it forms a pool of resources that is much larger than exists just within the first few companies who developed and used the underlying mobile platform. These additional resources provide, not just a greater numerical quantity of mobile software, but a greater variety of different innovative ideas. Some ecosystem members focus on providing lower cost components, others on providing components with higher quality and improved reliability, and yet others on revolutionary new functionality. Others again provide training, documentation, tools, testing, and so on.

In summary, smartphones are at the heart of a powerful virtuous cycle. Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle. Applications and services which prove their value as add-ons for one generation of smartphones become bundled into the next generation. With this extra built-in functionality, the next generation is intrinsically more attractive, and typically is cheaper too. Developers see an even larger market and increase their efforts to supply software for this market.

3.3 Factors restricting growth in the smartphone market

Decreasing price. Increasing reliability. Increasing stylishness. Increasing word of mouth recommendations. Increasingly useful mobile services. Increasing ecosystem maturity. What could stand in the way of these powerful accelerators?

Plenty.

First, there are technical problems with unexpected difficulty. Some problems turn out to be much harder than initially imagined. For example, consider speech recognition, in which a computer can understand spoken input. When Psion planned the Series 5 family of PDAs in the mid 1990s (as successors to the Series 3 family), we had a strong desire to include speech recognition capabilities in the device. Three “dictaphone style” buttons were positioned in a small unit on the outside of the case, so that the device could be used even when the case (a clamshell) was shut. Over-optimistically, we saw speech recognition as a potential great counter to the pen input mechanisms that were receiving lots of press attention at the time, on competing devices like the Apple Newton and the Palm Pilot. We spoke to a number of potential suppliers of voice recognition software, who assured us that suitably high-performing recognition was “just around the corner”. The next versions of their software, expected imminently, would impress us with its accuracy, they said. Alas, we eventually reached the conclusion that the performance was far too unreliable and would remain so for the foreseeable future – even if we went the extra mile on cost, and included the kind of expensive internal microphone that the suppliers recommended. We feared that “normal users” – the target audience for Psion PDAs – would be perplexed by the all-too-frequent inaccuracies in voice recognition. So we took the decision to remove that functionality. In retrospect, it was a good decision. Even ten years later, voice recognition functionality on smartphones generally fell short of user expectations.

Speech recognition is just one example of a deeply hard technical problem where real progress took much longer than expected.  Others include:

  • Preventing smartphone batteries from being drained too quickly by all the processing that takes place on the smartphone
  • Enabling rapid search of all the content on a device, regardless of the application used to create that content
  • Devising a set of application programming interfaces that strike the right balance between power and ease of use, and between openness and security (see the illustrative sketch below).
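As a purely illustrative sketch of that last point – hypothetical names, not any real platform’s API – one common way to balance openness against security is to gate sensitive calls behind capabilities that an application must be explicitly granted:

```python
# Hypothetical sketch of a capability-gated platform API.
# All names here are invented for illustration; they do not correspond to any real platform.

class CapabilityError(Exception):
    pass

class AppContext:
    """Represents an installed application and the capabilities it has been granted."""
    def __init__(self, granted_capabilities):
        self.granted = set(granted_capabilities)

def requires(capability):
    """Decorator: the wrapped call succeeds only if the calling app holds the capability."""
    def wrap(func):
        def inner(ctx, *args, **kwargs):
            if capability not in ctx.granted:
                raise CapabilityError(f"{func.__name__} needs the '{capability}' capability")
            return func(ctx, *args, **kwargs)
        return inner
    return wrap

@requires("Location")
def get_position(ctx):
    """A 'sensitive' API: returns dummy coordinates for this sketch."""
    return (51.5, -0.1)

# Harmless APIs stay open; sensitive ones need an explicit grant:
print(get_position(AppContext(["Location"])))    # allowed

try:
    get_position(AppContext([]))                 # no capability granted
except CapabilityError as err:
    print("blocked:", err)
```

The hard part, of course, is choosing which calls to gate and how coarse the capabilities should be – too strict and the platform loses its openness, too lax and it loses its security.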

Second, there are “chicken-and-egg” coordination problems – sometimes likened to “the prisoner’s dilemma”.  New applications and services in a networked marketplace often depend on related changes being coordinated at several different points in the value chain.  Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make less sense when viewed individually.  For example, successful mobile phones required both networks and handsets.  Successful smartphones required new data-enabled networks, new handsets, and new applications.  And so on.
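A toy payoff table (all numbers invented purely for illustration) captures the dilemma: for, say, a network operator, investing in the new value chain is only the best move if the handset makers invest too.

```python
# Toy payoff table for the coordination problem described above.
# All numbers are invented purely for illustration.

payoff_for_operator = {
    ("invest", "invest"): 10,   # ecosystem forms: best joint outcome
    ("invest", "wait"):   -5,   # lone investor bears the cost, gains no market
    ("wait",   "invest"):  0,
    ("wait",   "wait"):    0,   # status quo
}

def operator_best_response(handset_maker_choice):
    """The operator's best move, given what the handset maker does."""
    return max(("invest", "wait"),
               key=lambda mine: payoff_for_operator[(mine, handset_maker_choice)])

print(operator_best_response("wait"))    # -> 'wait'   (no point moving alone)
print(operator_best_response("invest"))  # -> 'invest' (worth it once others move)
```

Everyone investing is the best collective outcome, but no single player wants to move first – which is exactly why new ecosystems take so long to form.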

Above, I wrote about the potential for “a powerful virtuous cycle”:

Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle.

However, this only works once the various factors are all in place. A new ecosystem needs to be formed. This involves a considerable coordination problem: several different entities need to un-learn old customs, and adopt new ways of operating, appropriate to the new value chain. That can take a lot of time.

Worse – and this brings me to a third problem – many of the key players in a potential new ecosystem have conflicting business models. Perhaps the new ecosystem, once established, will operate with greater overall efficiency, delivering services to customers more reliably than before. However, wherever there are prospects of cost savings, there are companies who potentially lose out – companies who are benefiting from the present high prices. For example, network operators making healthy profits from standard voice services were (understandably) apprehensive about distractions or interference from low-profit data services running over their networks. They were also apprehensive about risks that applications running on their networks would:

  • Enable revenue bypass, with new services such as VoIP and email displacing, respectively, standard voice calls and text messaging
  • Saturate the network with spam
  • Cause unexpected usability problems on handsets, which the user would attribute to the network operator, entailing extra support costs for the operator.

The outcome of these feared revenue losses is that ecosystems might fail to form – or, having formed with a certain level of cooperation, might fail to attain deeper levels of cooperation.  Vested interests get in the way of overall progress.

A fourth problem is platform fragmentation.  The efforts of would-be innovators are spread across numerous different mobile platforms.  Instead of a larger ecosystem all pulling in the same direction, the efforts are diffused, with the risk of confusing and misleading participants.  Participants think they can re-apply skills and solutions from one mobile product in the context of another, but subtle and unexpected differences cause incompatibilities which can take a lot of time to identify and debug.  Instead of collaboration effectively turning 1+1 into 3, confusion turns 1+1 into 0.5.

A fifth problem is poor usability design. Even though a product is powerful, ordinary end users can’t work out how to operate it, or get the best experience from it. They feel alienated by it, and struggle to find their favourite functionality in amongst bewildering masses of layered menu options. A small minority of potential users, known as “technology enthusiasts”, are happy to use the product, despite these usability issues; but they are rare exceptions. As such, the product fails to “cross the chasm” (to use the language of Geoffrey Moore) to the mainstream majority of users.

The sixth problem underlies many of the previous ones: it’s the problem of accelerating complexity. Each individual chunk of new software adds value, but when they coalesce in large quantities, chaos can ensue:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window.
  • Smartphone application development may grow in difficulty, as developers need to juggle different programming interfaces and optimisation methods.
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.

3.4 Factors that can overcome the restrictions and enable faster growth

Technical problems with unexpected difficulty. Chicken-and-egg coordination problems. Conflicting business models. Platform fragmentation. Poor usability design. Accelerating complexity. These are all factors that restrict smartphone progress. Without solving these problems, the latent potential of smartphone technology goes unfulfilled. What can be done about them?

At one level, the answer is: look at the companies who are achieving success with smartphones, despite these problems, and copy what they’re doing right. That’s a good starting point, although it risks being led astray by instances where companies have had a good portion of luck on their side, in addition to progress that they merited through their own deliberate actions. (You can’t jump from the observation that company C1 took action A and subsequently achieved market success, to the conclusion that company C2 should also take action A.) It also risks being led astray by instances where companies are temporarily experiencing significant media adulation, but only as a prelude to an unravelling of their market position. (You can’t jump from the observation that company C3 is currently a media darling, to the conclusion that a continuation of what it is currently doing will achieve ongoing product success.) With these caveats in mind, here is the advice that I offer.

The most important factor to overcome these growth restrictions is expertise – expertise in both design and implementation:

  • Expertise in envisioning and designing products that capture end-user attention and which are enjoyable to use again and again
  • Expertise in implementing an entire end-to-end product solution.

The necessary expertise (both design and implementation) spans eight broad areas:

  1. technology – such as blazing fast performance, network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security
  2. ecosystem design – to solve the “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised
  3. business models – identifying new ways in which groups of companies can profit from adopting new technology
  4. community management – encouraging diverse practitioners to see themselves as part of a larger whole, so that they are keen to contribute
  5. user experience – to ensure that the resulting products will be willingly accepted and embraced by “normal people” (as opposed just to early adopter technology enthusiasts)
  6. agile project management – to avoid excess wasted investment in cases where project goals change part way through (as they inevitably do, due to the uncertain territory being navigated)
  7. lean thinking – including a bias towards practical simplicity, a profound distrust of unnecessary complexity, and a constant desire to identify and deal with bottleneck constraints
  8. system integration – the ability to pull everything together, in a way that honours the core product proposition, and which enables subsequent further evolution.

To be clear, I see these eight areas of expertise as important for all sectors of complex technology development – not just in the smartphone industry.

Expertise isn’t something that just exists in books. It manifests itself:

  • In individual people, whose knowledge spans different domains
  • In teams – where people can help and support each other, playing to everyone’s strengths
  • In tools and processes – which are the smart embodiment of previous generations of expertise, providing a good environment to work out the next generation of expertise.

In all three cases, the expertise needs to be actively nurtured and enhanced. Companies who under-estimate the extent of the expertise they need, or who try to get that expertise on the cheap – or who stifle that expertise under the constraints of mediocre management – are likely to miss out on the key opportunities provided by smartphone technology. (Just because it might appear that a company finds it easy to do various tasks, it does not follow that these tasks are intrinsically easy to carry out. True experts often make hard tasks look simple.)

But even with substantial expertise available and active, it remains essentially impossible to be sure about the timescales for major new product releases:

  • Novel technology problems can take an indeterminate amount of time to solve
  • Even if the underlying technology progresses quickly, the other factors required to create an end-to-end solution can fall foul of numerous unforeseen delays.

In case that sounds like a depressing conclusion, I’ll end this section with three brighter thoughts:

First, if predictability is particularly important for a project, you can increase the chances of hitting its schedule by sticking to incremental evolutions of pre-existing solutions.  That can take you a long way, even though you’ll reduce the chance of more dramatic breakthroughs.

Second, if you can afford it, you should consider running two projects in parallel – one that sticks to incremental evolution, and another that experiments with more disruptive technology. Then see how they both turn out.

Third, the relationship between “speed of technology progress” and “speed of product progress” is more complex than I’ve suggested. I’ve pointed out that the latter can lag the former, especially where there’s a shortage of expertise in fields such as ecosystem management and the creation of business models. However, sometimes the latter can move faster than the former. That occurs once the virtuous cycle is working well. In that case, the underlying technological progress might be exponential, whilst the productisation progress could become super-exponential.

3.5 Successes and shortcomings in predicting the future of technology

We all know that it’s a perilous task to predict the future of technology. The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars. These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary. Indeed, history is littered with curious and amusing examples of flawed predictions of the future. Popular science fiction fares no better:

  • The TV series “Lost in Space”, which debuted in 1965, featured a manned spacecraft leaving Earth en route for a distant star, Alpha Centauri, on 16 October 1997.
  • Arthur C Clarke’s “2001: A Space Odyssey”, made in 1968, featured a manned spacecraft flight to Jupiter.
  • Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”, coincidentally also first published in 1968, described a world set in 1992 in which androids (robots) are extremely hard to distinguish from humans. (Later editions of the novel changed the date to 2021; the film Blade Runner, which was based on the novel, is set in 2019.)

Forecasts often go wrong when they spot a trend, and then extrapolate it. Projecting trends into the future is a dangerous game:

  • Skyscrapers rapidly increased in height in the early decades of the 20th century. But after the Empire State Building was completed in 1931, the rapid increases stopped.
  • Passenger aircraft rapidly increased in speed in the middle decades of the 20th century. But after Concorde, which flew its maiden flight in 1969, there have been no more increases.
  • Manned space exploration went at what might be called “rocket pace” from the jolt of Sputnik in 1957 up to the sets of footprints on the Moon in 1969-1972, but then came to an abrupt halt. At the time of writing, there are still no confirmed plans for a manned trip to Mars.

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects. Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on. Just because a technology improved in the past, it does not mean there will be sufficient societal motivation to keep on improving it in the future. Technology is not enough. Especially for changes that are complex and demanding, up to six additional criteria need to be satisfied as well:

  1. The technological development has to satisfy a strong human need.
  2. The development has to be possible at a sufficiently attractive price to individual end users.
  3. The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle.
  4. There must be a clear implementation path whereby the eventual version of the technology can be attained through a series of steps that are, individually, easier to achieve.
  5. When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be open (to accept new ideas).
  6. Likewise, the development process must be commercially attractive, or provide some other strong incentive, to encourage the generation of new ideas, and, even more important, to encourage people to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation.

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications. For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son. The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century. The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world! As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As mentioned earlier, an important factor that can allow accelerating technological progress is the establishment of an operational virtuous cycle that provides positive feedback. Here are four more examples:

  1. The first computers were designed on paper and built by hand. Later computers benefited from computer-aided design and computer-aided manufacture. Even later computers benefit from even better computer-aided design and manufacture…
  2. Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allows more complex software to be created more quickly – including more powerful tools…
  3. More powerful hardware enables new software which enables new use cases which demand more innovation in improving the hardware further…
  4. Technology reduces prices which allows better technology to be used more widely, resulting in more people improving the technology…

A well-functioning virtuous cycle makes it more likely that technological progress can continue. But the biggest factor determining whether a difficult piece of progress occurs is often the degree of society’s motivation towards that progress. Investment in ever-faster passenger airlines ceased, because people stopped perceiving that ever-faster airlines were that important. Manned flight to Mars was likewise deemed to be insufficiently important: that’s why it didn’t take place. The kinds of radical technological progress that I discuss in this book are, I believe, all feasible, provided sufficient public motivation is generated and displayed in support of that progress. This includes major enhancements in health, education, clean energy, artificial general intelligence, human autonomy, and human fulfilment. The powerful public motivation will cause society to prioritise developing and supporting the types of rich expertise that are needed to make this technological progress a reality.

3.6 Moore’s Law: A recap

When I started work at Psion, I was given a “green-screen” console terminal, connected to a DEC minicomputer running VAX/VMS. That’s how I wrote my first pieces of software for Psion. A short while afterwards, we started using PCs. I remember that the first PC I used had a 20MB hard disk. I also remember being astonished to find that a colleague had a hard disk that was twice as large. What on earth does he do with all that disk space, I wondered. But before long, I had a new PC with a larger hard disk. And then, later, another new one. And so on, throughout my 20+ year career in Psion and Symbian. Each time a new PC arrived, I felt somewhat embarrassed at the apparent excess of computing power it provided – larger disk space, more RAM memory, faster CPU clock speed, etc. On leaving Symbian in October 2009, I bought a new laptop for myself, along with an external USB disk drive. That disk drive was two terabytes in size. For roughly the same amount of money (in real terms) that had purchased 20MB of disk space in 1989, I could now buy a disk that was 100,000 times larger. That’s broadly equivalent to hard disks doubling in size every 15 months over that 20 year period.

This repeated doubling of performance, on a fairly regular schedule, is a hallmark of what is often called “Moore’s Law”, following a paper published in 1965 by Gordon Moore (subsequently one of the founders of Intel). It’s easy to find other examples of this exponential trend within the computing industry. University of London researcher Shane Legg has published a chart of the increasing power of the world’s fastest supercomputers, from 1960 to the present day, along with a plausible extension to 2020. This chart measures the “FLOPS” capability of each supercomputer – the number of floating point (maths) operations it can execute in a second. The values move all the way from kiloFLOPS through megaFLOPS, gigaFLOPS, teraFLOPS, and petaFLOPS, and point towards exaFLOPS by 2020. Over sixty years, the performance improves through twelve and a half orders of magnitude, which is more than 40 doublings. This time, the doubling period works out at around 17 months.

Radical futurist Ray Kurzweil often uses the following example:

When I was an MIT undergraduate in 1965, we all shared a computer that took up half a building and cost tens of millions of dollars. The computer in my pocket today [a smartphone] is a million times cheaper and a thousand times more powerful. That’s a billion-fold increase in the amount of computation per dollar since I was a student.

A billion-fold increase consists of 30 doublings – which, spread out over 44 years from 1965 to 2009, gives a doubling period of around 18 months. And to get the full picture of the progress, we should include one more observation alongside the million-fold price improvement and thousand-fold processing power improvement: the 2009 smartphone is about one hundred thousand times smaller than the 1965 mainframe.

These steady improvements in computer hardware, spread out over six decades so far, are remarkable, but they’re not the only example of this kind of long-term prodigious increase. Martin Cooper, who has a good claim to be considered the inventor of the mobile phone, has pointed out that the amount of information that can be transmitted over useful radio spectrum has roughly doubled every 30 months since 1897, when Guglielmo Marconi first patented the wireless telegraph:

The rate of improvement in use of the radio spectrum for personal communications has been essentially uniform for 104 years. Further, the cumulative improvement in the effectiveness of personal communications total spectrum utilization has been over a trillion times in the last 90 years, and a million times in the last 45 years

Smartphones have benefited mightily from both Moore’s Law and Cooper’s Law. Other industries can benefit in a similar way too, to the extent that their progress can be driven by semiconductor-powered information technology, rather than by older branches of technology. As I’ll review in later chapters, there are good reasons to believe that both medicine and energy are on the point of dramatic improvements along these lines. For example, the so-called Carlson curves (named after biologist Rob Carlson) track exponential decreases in the costs of both sequencing (reading) and synthesising (writing) base pairs of DNA. It cost about $10 to sequence a single base pair in 1990, but this had reduced to just 2 cents by 2003 (the date of the completion of the human genome project). That’s a 500-fold reduction – about 9 halvings in just 13 years, or a halving period of around 17 months.
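As a quick sanity check of my own, the doubling (or, for costs, halving) periods quoted in this section can all be reproduced from the figures already given above:

```python
import math

def months_per_doubling(overall_factor, years):
    """Months per doubling (or halving) implied by an overall growth factor over a period."""
    doublings = math.log2(overall_factor)
    return years * 12 / doublings

examples = {
    "Hard disks, 20MB (1989) to 2TB (2009)":                  (2e12 / 20e6, 20),
    "Supercomputers, ~12.5 orders of magnitude, 1960-2020":   (10 ** 12.5, 60),
    "Kurzweil, billion-fold per dollar, 1965-2009":           (1e9, 44),
    "DNA sequencing cost, $10 to $0.02, 1990-2003 (halvings)": (10 / 0.02, 13),
}

for label, (factor, years) in examples.items():
    print(f"{label}: ~{months_per_doubling(factor, years):.0f} months")
```

The results come out at roughly 14, 17, 18, and 17 months respectively – matching the figures quoted in the text.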

Moore’s Law and Cooper’s Law are far from being mathematically exact. They should not be mistaken for laws of physics, akin to Newton’s Laws or Maxwell’s Laws. Instead, they are empirical observations, with lots of local deviations when progress temporarily goes either faster or slower than the overall average. Furthermore, scientists and researchers need to keep on investing lots of skill, across changing disciplines, to keep the progress occurring. The explanation given on the website of Martin Cooper’s company, ArrayComm, provides useful insight:

How was this improvement in the effectiveness of personal communication achieved? The technological approaches can be loosely categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increase in magnitude of the usable radio frequency spectrum.

How much of the improvement can be attributed to each of these categories? Of the million times improvement in the last 45 years, roughly 25 times were the result of being able to use more spectrum, 5 times can be attributed to the ability to divide the radio spectrum into narrower slices — frequency division. Modulation techniques like FM, SSB, time division multiplexing, and various approaches to spread spectrum can take credit for another 5 times or so. The remaining sixteen hundred times improvement was the result of confining the area used for individual conversations to smaller and smaller areas — what we call spectrum re-use…
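The four contributions quoted above do indeed multiply together to give the overall million-fold figure:

```latex
25 \times 5 \times 5 \times 1600 = 1{,}000{,}000
```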

Cooper suggests that his law can continue to hold until around 2050. Experts at Intel say they can foresee techniques to maintain Moore’s Law for at least another ten years – potentially longer. In assessing the wider implications of these laws, we need to consider three questions:

  1. How much technical runway is left in these laws?
  2. Can the benefits of these laws in principle be applied to transform other industries?
  3. Will wider system effects – as discussed earlier in this chapter – frustrate overall progress in these industries (despite the technical possibilities), or will they in due course even accelerate the underlying technical progress?

My answers to these questions:

  1. Plenty
  2. Definitely
  3. It depends on whether we can educate, motivate, and organise a sufficient critical mass of concerned citizens. The race is on!

>> Next chapter >>

8 April 2010

Video: The case for Artificial General Intelligence

Filed under: AGI, flight, Humanity Plus, Moore's Law, presentation, YouTube — David Wood @ 11:19 am

Here’s another short (<10 minute) video from me, building on one of the topics I’ve listed in the Humanity+ Agenda: the case for artificial general intelligence (AGI).

The discipline of having to fit a set of thoughts into a ten minute video is a good one!

Further reading: I’ve covered some of the same topics, in more depth, in previous blogposts, including:

For anyone who prefers to read the material as text, I append an approximate transcript.

My name is David Wood.  I’m going to cover some reasons for paying more attention to Artificial General Intelligence (AGI) – also known as super-human machine intelligence.  This field deserves significantly more analysis, resourcing, and funding, over the coming decade.

Machines with super-human levels of general intelligence will include hardware and software, as part of a network of connected intelligence.  Their task will be to analyse huge amounts of data, review hypotheses about this data, discern patterns, propose new hypotheses, propose experiments which will provide valuable new data, and in this way, recommend actions to solve problems or take advantage of opportunities.

If that sounds too general, I’ll have some specific examples in a moment, but the point is to create a reasoning system that is, indeed, applicable to a wide range of problems.  That’s why it’s called Artificial General Intelligence.

In this way, these machines will provide a powerful supplement to existing human reasoning.

Here are some of the deep human problems that could benefit from the assistance of enormous silicon super-brains:

  • What uses of nanotechnology can be recommended, to safely boost the creation of healthy food?
  • What are the causes of different diseases – and how can we cure them?
  • Can we predict earthquakes – and even prevent them?
  • Are there safe geo-engineering methods that will head off the threat of global warming, without nasty side effects?
  • What changes, if any, should be made to the systems of regulating the international economy, to prevent dreadful market failures?
  • Which existential risks – risks that could drastically impact human civilisation – deserve the most attention?

You get the idea.  I’m sure you could add some of your own favourite questions to this list.

Some people may say that this is an unrealistic vision.  So, in answer, let me spell out the factors I see as enabling this kind of super-intelligence within the next few decades.  First is the accelerating pace of improvements in computer hardware.

This chart is from University of London researcher Shane Legg.  On a log-axis, it shows the exponentially increasing power of super-computers, all the way from 1960 to the present day and beyond.  It shows FLOPS – the number of floating point operations per second that a computer can do.  It goes all the way from kiloflops through megaflops, gigaflops, teraflops, petaflops, and is pointing towards exaflops.  If this trend continues, we’ll soon have supercomputers with at least as much computational power as a human brain.  Perhaps within less than 20 years.

But will this trend continue?  Of course, there are often slowdowns in technological progress.  Skyscraper heights and the speeds of passenger airlines are two examples.  The slowdown can sometimes be due to intrinsic technical difficulties, but is more often because of a lack of sufficient customer or public interest in even bigger or faster products.  After all, the technical skills that took mankind to the moon in 1969 could have taken us to Mars long before now, if there had been sufficient continuing public interest.

Specifically, in the case of Moore’s Law for exponentially increasing hardware power, industry experts from companies like Intel state that they can foresee at least 10 more years’ continuation of this trend, and there are plenty of ideas for innovative techniques to extend it even further.  It comes down to two things:

  • Is there sufficient public motivation in continuing this work?
  • And can some associated system integration issues be solved?

Mention of system issues brings me back to the list of factors enabling major progress with super-intelligence.  Next is improvement with software.  There’s lots of scope here.  There’s also additional power from networking ever larger numbers of computers together.  Another factor is the ever-increasing number of people with engineering skills, around the world, who are able to contribute to this area.  We have more and more graduates in relevant topics all the time.  Provided they can work together constructively, the rate of progress should increase.  We can also learn more about the structure of intelligence by analysing biological brains at ever finer levels of detail – by scanning and model-building.  Last, but not least, we have the question of motivation.

As an example of the difference that a big surge in motivation can make, consider the example of progress with another grand, historical engineering challenge – powered flight.

This example comes from Artificial Intelligence researcher J Storrs Hall in his book “Beyond AI”.  People who had ideas about powered flight were, for centuries, regarded as cranks and madmen – a bit like people who, in our present day, have ideas about superhuman machine intelligence.  Finally, after many false starts, the Wright brothers made the necessary engineering breakthroughs at the start of the last century.  But even after they first flew, the field of aircraft engineering remained a sleepy backwater for five more years, while the Wright brothers kept quiet about their work and secured patent protection.  They did some sensational public demos in 1908, in Paris and in America.  Overnight, aviation went from a screwball hobby to the rage of the age and kept that status for decades.  Huge public interest drove remarkable developments.  It will be the same with demonstrated breakthroughs with artificial general intelligence.

Indeed, the motivation for studying artificial intelligence is growing all the time.  In addition to the deep human problems I mentioned earlier, we have a range of commercially-significant motivations that will drive business interest in this area.  This includes ongoing improvements in search, language translation, intelligent user interfaces, games design, and spam detection systems – where there’s already a rapid “arms race” between writers of ever more intelligent “bots” and people who seek to detect and neutralise these bots.

AGI is also commercially important to reduce costs from support call systems, and to make robots more appealing in a wide variety of contexts.  Some people will be motivated to study AGI for more philosophical reasons, such as to research ideas about minds and consciousness, to explore the possibility of uploading human consciousness into computer systems, and for the sheer joy of creating new life forms.  Last, there’s also the powerful driver that if you think a competitor may be near to a breakthrough in this area, you’re more likely to redouble your efforts.  That adds up to a lot of motivation.

To put this on a diagram:

  • We have increasing awareness of human-level reasons for developing AGI.
  • We also have maturing sub-components for AGI, including improved algorithms, improved models of the mind, and improved hardware.
  • With the Internet and open collaboration, we have an improved support infrastructure for AGI research.
  • Then, as mentioned before, we have powerful commercial motivations.
  • Adding everything up, we should see more and more people working in this space.
  • And the field should see rapid progress in the coming decade.

An increased focus on Artificial General Intelligence is part of what I’m calling the Humanity+ Agenda.  This is a set of 20 inter-linked priority areas for the next decade, spread over five themes: Health+, Education+, Technology+, Society+, and Humanity+.  Progress in the various areas should reinforce and support progress in other areas.

I’ve listed Artificial General Intelligence as part of the project to substantially improve our ability to reason and learn: Education+.  One factor that strongly feeds into AGI is improvements with ICT – including ongoing improvements in both hardware and software.  If you’re not sure what to study or which field to work in, ICT should be high on your list of fields to consider.  You can also consider the broader topic of helping to publicise information about accelerating technology – so that more and more people become aware of the associated opportunities, risks, context, and options.  To be clear, there are risks as well as opportunities in all these areas.  Artificial General Intelligence could have huge downsides as well as huge upsides, if not managed wisely.  But that’s a topic for another day.

In the meantime, I eagerly look forward to working with AGIs to help address all of the top priorities listed as part of the Humanity+ Agenda.

9 January 2010

Progress with AI

Filed under: AGI, books, m2020, Moore's Law, UKH+ — David Wood @ 9:47 am

Not everyone shares my view that AI is going to become a more and more important field during the coming decade.

I’ve received a wide mix of feedback in response to my recent writing on this topic, and to comments I’ve made in other discussion forums about the growth of AI.

Below, I list some of the questions people have raised – along with my answers.

Note: my answers below are informed by (among other sources) the 2007 book “Beyond AI: creating the conscience of the machine”, by J Storrs Hall, that I’ve just finished reading.

Q1: Doesn’t significant progress with AI presuppose the indefinite continuation of Moore’s Law, which is suspect?

There are three parts to my answer.

First, Moore’s Law for exponential improvements in individual hardware capability seems likely to hold for at least another five years, and there are many ideas for new semiconductor innovations that would extend the trend considerably further.  There’s a good graph of improvements in supercomputer power stretching back to 1960 on Shane Legg’s website, along with associated discussion.

Dylan McGrath, writing in EE Times in June 2009, reported views from iSuppli Corp that “Equipment cost [will] hinder Moore’s Law in 2014”:

Moore’s Law will cease to drive semiconductor manufacturing after 2014, when the high cost of chip manufacturing equipment will make it economically unfeasible to do volume production of devices with feature sizes smaller than 18nm, according to iSuppli Corp.

While further advances in shrinking process geometries can be achieved after the 20- to 18nm nodes, the rising cost of chip making equipment will relegate Moore’s Law to the laboratory and alter the fundamental economics of the semiconductor industry, iSuppli predicted.

“At those nodes, the industry will start getting to the point where semiconductor manufacturing tools are too expensive to depreciate with volume production, i.e., their costs will be so high, that the value of their lifetime productivity can never justify it,” said Len Jelinek, director and chief analyst, semiconductor manufacturing, at iSuppli, in a statement.

In other words, it remains technologically possible for semiconductors to become exponentially denser even after 2014, but it is unclear whether sufficient economic incentives will exist for these additional improvements.

As The Register reported the same story:

Basically, just because chip makers can keep adding cores, it doesn’t mean that the application software and the end user workloads that run on this iron will be able to take advantage of these cores (and their varied counts of processor threads) because of the difficulty of parallelising software.

iSuppli is not talking about these problems, at least not today. But what the analysts at the chip watcher are pondering is the cost of each successive chip-making technology and the desire of chip makers not to go broke just to prove Moore’s Law right.

“The usable limit for semiconductor process technology will be reached when chip process geometries shrink to be smaller than 20 nanometers (nm), to 18nm nodes,” explains Len Jelinek…

At that point, says Jelinek, Moore’s Law becomes academic, and chip makers are going to extend the time they keep their process technologies in the field so they can recoup their substantial investments in process research and semiconductor manufacturing equipment.

However, other analysts took a dim view of this pessimistic forecast, and maintained that Moore’s Law will be longer-lived.  For example, In-Stat’s chief technology strategist, Jim McGregor, offered the following rebuttal:

…every new technology goes over some road-bumps, especially involving start-up costs, but these tend to drop rapidly once moved into regular production. “EUV [extreme ultraviolet] will likely be the next significant technology to go through this cycle,” McGregor told us.

McGregor did concede that the lifecycle of certain technologies is being extended by firms who are in some cases choosing not to migrate to every new process node, but he maintained new process tech is still the key driver of small design geometries, including memory density, logic density, power consumption, etc.

“Moore’s Law also improves the cost per device and per wafer,” added McGregor, who also noted that “the industry has and will continue to go through changes because of some of the cost issues.” These include the formation of process development alliances, like IBM’s alliances, the transition to foundry manufacturing, and design for manufacturing techniques like computational lithography.

“Many people have predicted the end of Moore’s Law and they have all been wrong,” sighed McGregor. The same apparently goes for those foolhardy enough to attempt to predict changes in the dynamics of the semiconductor industry.

“There have always been challenges to the semiconductor technology roadmap, but for every obstacle, the industry has developed a solution and that will continue as long as we are talking about the hundreds of billion of dollars in revenue that are generated every year,” he concluded.

In other words, it is likely that, given sufficient economic motivation, individual hardware performance will continue improving, at a significant rate (if, perhaps, not exponentially) throughout the coming decade.

Second, it remains an open question how much hardware would be needed to host an Artificial (Machine) Intelligence (“AI”) with either human-level or hyperhuman reasoning power.

Marvin Minsky, one of the doyens of AI research, has been quoted as believing that computers commonly available in universities and industry already have sufficient power to manifest human-level AI – if only we could work out how to program them in the right way.

J Storrs Hall provides an explanation:

Let me, somewhat presumptuously, attempt to explain Minsky’s intuition by an analogy: a bird is our natural example of the possibility of heavier-than-air flight. Birds are immensely complex: muscles, bones, feathers, nervous systems. But we can build working airplanes with tremendously fewer moving parts. Similarly, the brain can be greatly simplified, still leaving an engine capable of general conscious thought.

Personally, I’m a big fan of the view that the right algorithm can make a tremendous difference to a computational task.  As I noted in a 2008 blog post:

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that the software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, fixing the hardware will just delay the point where your algorithms fail.

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”

Here’s a related example.  When we think of powerful chess-playing computers, we sometimes assume that massive hardware resources will be required, such as a supercomputer provides.  However, as long ago as 1985, Psion, the UK-based company I used to work for (though not at that time), produced a piece of software that played chess to a standard many people found – at the time, and subsequently – very impressive.  See here for some discussion and some reviews.  Taking things even further, this article from 1983 describes an implementation of chess, for the Sinclair ZX-81, in only 672 bytes – which is hard to believe!  (Thanks to Mark Jacobs for this link.)
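To make the algorithm-versus-hardware point concrete, here’s a rough sketch in Python.  The cost functions and machine speeds are entirely illustrative assumptions – they are not the actual factoring algorithms or machines in Geordie Rose’s comparison – but they show the shape of the argument: a million-fold hardware speedup is swallowed whole by an algorithm that scales exponentially, while a better-scaling algorithm wins even on decades-old hardware.

```python
# Back-of-envelope comparison of algorithm choice versus hardware speed.
# All numbers below are illustrative assumptions, not the real algorithms
# or machines in Geordie Rose's calculation.
def old_algorithm_ops(n):
    return 2 ** n            # an algorithm whose cost grows exponentially

def new_algorithm_ops(n):
    return n ** 3            # an algorithm whose cost grows polynomially

SLOW_MACHINE_OPS_PER_SEC = 1e5    # assumed 1977-class machine
FAST_MACHINE_OPS_PER_SEC = 1e11   # assumed 2007-class machine (a million times faster)

n = 75   # problem size, loosely echoing the 75-digit example above

print(old_algorithm_ops(n) / FAST_MACHINE_OPS_PER_SEC)  # ~3.8e11 seconds: millennia
print(new_algorithm_ops(n) / SLOW_MACHINE_OPS_PER_SEC)  # ~4.2 seconds
# Even a million-fold hardware speedup cannot rescue the exponential algorithm,
# while the better-scaling algorithm finishes quickly on the old machine.
```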

Third, building on this point, progress in AI can be described as a combination of multiple factors:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

Even if the pace slows for improvements in the hardware of individual computers, it’s still very feasible for improvements in AI to take place, on account of the other factors.

Q2: Hasn’t rapid progress with AI often been foretold before, but with disappointing outcomes each time?

It’s true that some of the initial forecasts of the early AI research community, from the 1950s, have turned out to be significantly over-optimistic.

For example, in his famous 1950 paper “Computing machinery and intelligence” – which set out the idea of the test later known as the “Turing test” – Alan Turing made the following prediction:

I believe that in about fifty years’ time it will be possible to programme computers… to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification [between a computer answering, or a human answering] after five minutes of questioning.

Since the publication of that paper, some sixty years have now passed, and computers are still far from being able to consistently provide an interface comparable (in richness, subtlety, and common sense) to that of a human.

For a markedly more optimistic prediction, consider the proposal for the 1956 Dartmouth Summer Research Conference on Artificial Intelligence which is now seen, in retrospect, as the seminal event for AI as a field.  Attendees at the conference included Marvin Minsky, John McCarthy, Ray Solomonoff, and Claude Shannon.  The group came together with the following vision:

We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

The question for us today is: what reason is there to expect rapid progress with AI in (say) the next ten years, given that similar expectations in the past failed – and, indeed, the whole field eventually fell into what is known as an “AI winter”?

J Storrs Hall has some good answers to this question.  They include the following:

First, AI researchers in the 1950s and 60s laboured under a grossly over-simplified view of the complexity of the human mind.  This can be seen, for example, from another quote from Turing’s 1950 paper:

Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child’s? If this were then subjected to an appropriate course of education one would obtain the adult brain. Presumably the child brain is something like a notebook as one buys it from the stationer’s. Rather little mechanism, and lots of blank sheets. (Mechanism and writing are from our point of view almost synonymous.) Our hope is that there is so little mechanism in the child brain that something like it can be easily programmed.

Progress in brain sciences in the intervening years has highlighted very significant innate structure in the child brain.  A child brain is far from being a blank notebook.

Second, early researchers were swept along on a wave of optimism from some apparent early successes.  For example, consider the “ELIZA” application that mimicked the responses of a certain school of psychotherapist, by following a series of simple pattern-matching rules.  Lay people who interacted with this program frequently reported positive experiences, and assumed that the computer really was understanding their issues.  Although the AI researchers knew better, at least some of them may have believed that this effect showed that more significant results were just around the corner.

Third, the willingness of funding authorities to continue supporting general AI research became stretched, due to the delays in producing stronger results, and due to competing options for how those research funds should be allocated.  For example, the Lighthill Report (produced in the UK in 1973 by Professor James Lighthill – whose lectures in Applied Mathematics at Cambridge I enjoyed many years later) gave a damning assessment:

The report criticized the utter failure of AI to achieve its “grandiose objectives.” It concluded that nothing being done in AI couldn’t be done in other sciences. It specifically mentioned the problem of “combinatorial explosion” or “intractability”, which implied that many of AI’s most successful algorithms would grind to a halt on real world problems and were only suitable for solving “toy” versions…

The report led to the dismantling of AI research in Britain. AI research continued in only a few top universities (Edinburgh, Essex and Sussex). This “created a bow-wave effect that led to funding cuts across Europe”

There were similar changes in funding climate in the US, with changes of opinion within DARPA.

Shortly afterwards, the growth of the PC and general IT market provided attractive alternative career targets for many of the bright researchers who might previously have considered devoting themselves to AI research.

To summarise, the field suffered an understandable backlash against its over-inflated early optimism and exaggerated hype.

Nevertheless, there are grounds for believing that considerable progress has taken place over the years.  The middle chapters of J Storrs Hall’s book provide the evidence.  The Wikipedia article on “AI winter” covers (much more briefly) some of the same material:

In the late ’90s and early 21st century, AI technology became widely used as elements of larger systems, but the field is rarely credited for these successes. Nick Bostrom explains “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labeled AI anymore.” Rodney Brooks adds “there’s this stupid myth out there that AI has failed, but AI is around you every second of the day.”

Technologies developed by AI researchers have achieved commercial success in a number of domains, such as machine translation, data mining, industrial robotics, logistics, speech recognition, banking software, medical diagnosis and Google’s search engine…

Many of these domains represent aspects of “narrow” AI rather than “general” AI (sometimes called “AGI”).  However, they can all contribute to overall progress, with results in one field being available for use and recombination in other fields.  That’s an example of point 5 in my previous list of the different factors affecting progress in AI:

  1. Individual hardware power
  2. Compound hardware power (when many different computers are linked together, as on a network)
  3. Software algorithms
  4. Number of developers and researchers who are applying themselves to the problem
  5. The ability to take advantage of previous results (“to stand on the shoulders of giants”).

On that note, let’s turn to the fourth factor in that list.

Q3: Isn’t AI now seen as a relatively uninteresting field, with few incentives for people to enter it?

The question is: what’s going to cause bright researchers to devote sufficient time and energy to progressing AI – given that there are so many other interesting and rewarding fields of study?

Part of the answer is to point out that the potential number of people working in this field is larger today than ever before, simply due to the rapid increase in the number of IT-literate graduates around the world.  Globally, more science and engineering graduates are emerging from universities – including those in China and India – than ever before.

Second, here are some particularly pressing challenges and commercial opportunities which make it likely that further research on AI will actually take place:

  • The “arms race” between spam- and bot-detection systems (including the parts of web forms that essentially say, “prove you are a human, not a bot”) and ever-cleverer systems for evading that detection – see the toy classifier sketch after this list;
  • The need for games to provide ever more realistic “AI” features for the virtual characters in these games (games players and games writers unabashedly talk about the “AI” elements in these games);
  • The opportunity for social networking sites to provide increasingly realistic virtual companions for users to interact with (including immersive social networking sites like “Second Life”);
  • The constant need to improve the user experience of interacting with complex software; arguably the complex UI is the single biggest problem area, today, facing many mobile applications;
  • The constant need to improve the interface to large search databases, so that users can more quickly find material.
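To give a flavour of the kind of “narrow AI” behind the first item on that list, here’s a toy sketch of a naive Bayes spam classifier in Python.  It’s a deliberately minimal illustration – the training messages are invented, and no real spam filter is anywhere near this simple – but it shows how even modest statistical techniques create commercial value in this space.

```python
import math
from collections import Counter

# Toy training data - invented for illustration only.
spam_messages = ["win cash now", "cheap pills online", "win a free prize now"]
ham_messages  = ["meeting moved to friday", "please review the attached report"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, ham_counts = word_counts(spam_messages), word_counts(ham_messages)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocabulary = set(spam_counts) | set(ham_counts)

def log_likelihood(message, counts, total):
    # Laplace smoothing so words unseen in training don't zero out the score.
    return sum(math.log((counts[w] + 1) / (total + len(vocabulary)))
               for w in message.split())

def classify(message):
    spam_score = math.log(len(spam_messages)) + log_likelihood(message, spam_counts, spam_total)
    ham_score  = math.log(len(ham_messages))  + log_likelihood(message, ham_counts, ham_total)
    return "spam" if spam_score > ham_score else "ham"

print(classify("win a prize now"))    # -> spam
print(classify("review the report"))  # -> ham
```

The commercial arms race consists of spammers and bot-writers learning to defeat exactly this kind of model, and defenders responding with progressively cleverer ones.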

Since there is big money to be made from progressing solutions in each of these areas, we can assume that companies will be making some significant investments in the associated technology.

There’s also the prospect of a “tipping point” once some initial results demonstrate the breakthrough nature of some aspects of this field.  As J Storrs Hall puts it (in the “When” chapter of his book):

Once a baby [artificial] brain does advance far enough that it has clearly surpassed the bootstrap fallacy point… it might affect AI like the Wright brothers’ [1908] Paris demonstrations of their flying machines did a century ago.  After ignoring their successful first flight for years, the scientific community finally acknowledged it.  Aviation went from a screwball hobby to the rage of the age and kept that cachet for decades.  In particular, the amount of development took off enormously.  If we can expect a faint echo of that from AI, the early, primitive general learning systems will focus research considerably and will attract a lot of new resources.

Not only are there greater numbers of people potentially working on AI now, than ever before; they each have much more powerful hardware resources available to them.  Experiments with novel algorithms that previously would have tied up expensive and scarce supercomputers can nowadays be done on inexpensive hardware that is widely available.  (And once interesting results are demonstrated on low-powered hardware, there will be increased priority of access for variants of these same ideas to be run on today’s supercomputers.)

What’s more, the feedback mechanisms of general internet connectivity (sharing of results and ideas) and open source computing (sharing of algorithms and other source code) mean that each such researcher can draw upon greater resources than before, and participate in stronger collaborative projects.  For example, people can choose to participate in the “OpenCog” open source AI project.

Appendix: Further comments on the book “Beyond AI”

As well as making the case that progress in AI has been significant, another of the main themes of J Storrs Hall’s book “Beyond AI: Creating the conscience of the machine” is the question of whether hyperhuman AIs would be more moral than humans, as well as more intelligent.

The conclusion of his argument is that, yes, these new brains will probably display a higher quality of ethical behaviour than humans have generally exhibited.  The final third of his book covers that topic, in a generally convincing way: he has a compelling analysis of topics such as free will, self-awareness, conscious introspection, and the role of ethical frameworks in curbing the destructive effects of free-riders.  Critically, however, it all depends on how these great brains are set up with regard to core purpose, and there are no easy answers.

Roko Mijic will be addressing this same topic at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?”, which is being held on Saturday 23rd January.  (If you use Facebook, you can RSVP here to indicate whether you’re coming.  NB it’s entirely optional to RSVP.)

26 October 2008

The Singularity will go mainstream

Filed under: AGI, brain simulation, cryonics, Moore's Law, robots, Singularity — David Wood @ 1:49 pm

The concept of the coming technological singularity is going to enter mainstream discourse, and won’t go away. It will stop being something that can be dismissed as freaky or outlandish – something that is of interest only to marginal types and radical thinkers. Instead, it’s going to become something that every serious discussion of the future will have to contemplate. Writing a long-term business plan – or a long-term political agenda – without covering the singularity as one of the key topics is increasingly going to become a sign of incompetence. We can imagine the responses, just a few years from now: “Your plan lacks a section on how the onset of the singularity is going to affect the take-up of your product. So I can’t take this proposal seriously”. And: “You’ve analysed five trends that will impact the future of our company, but you haven’t included the singularity – so everything else you say is suspect.”

In short, that’s the main realisation I reached by attending the Singularity Summit 2008 yesterday, in the Montgomery Theater in San Jose. As the day progressed, the evidence mounted up that the arguments in favour of the singularity will be increasingly persuasive, to wider and wider groups of people. Whether or not the singularity will actually happen is a slightly different question, but it’s no longer going to be possible to dismiss the concept of the singularity as irrelevant or implausible.

To back up my assertion, here are some of the highlights of what was a very full day:

Intel’s CTO and Corporate VP Justin Rattner spoke about “Countdown to Singularity: accelerating the pace of technological innovation at Intel”. He described a series of technological breakthroughs that would be likely to keep Moore’s Law operational until at least 2020, and he listed ideas for how it could be extended even beyond that. Rattner clearly has a deep understanding of the technology of semiconductors.

Dharmendra Modha, the manager of IBM’s cognitive computing lab at Almaden, explained how his lab had already utilised IBM supercomputers to simulate an entire rat brain, with the simulation running at one tenth of real-time speed. He explained his reasons for expecting that his lab should be able to simulate an entire human brain, running at full speed, by 2018 (a rough scaling sketch follows the list below). This, he argued, would be possible as a result of the confluence of “three hard disruptive trends”:

  1. Neuroscience has matured
  2. Supercomputing meets the brain
  3. Nanotechnology meets the brain.
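To get a rough sense of the scaling involved in going from that rat-scale model to a full-speed human brain, here’s a back-of-envelope sketch of my own in Python.  The brain-size ratio and the doubling period are assumptions chosen purely for illustration, not figures from Modha’s talk; the point is simply that hardware doublings alone are unlikely to close the gap, which is why the confluence of all three trends matters.

```python
import math

# Back-of-envelope scaling estimate. The ratio and doubling period below are
# rough assumptions of my own, not figures from Modha's talk.
RAT_TO_HUMAN_SCALE = 300   # assumed ratio of neurons/synapses, order of magnitude only
REALTIME_GAP       = 10    # the rat-scale simulation runs at 1/10 of real time

required_speedup = RAT_TO_HUMAN_SCALE * REALTIME_GAP   # ~3,000x more effective compute

DOUBLING_PERIOD_YEARS = 2.0   # assumed Moore's-Law-style doubling of capability
years_needed = math.log2(required_speedup) * DOUBLING_PERIOD_YEARS

print(f"required speedup: ~{required_speedup:,}x")
print(f"years of doublings needed: ~{years_needed:.0f}")
# On these assumptions, transistor scaling alone would take roughly two decades,
# so a 2018 target also depends on better algorithms, bigger machines, and
# brain-inspired hardware - hence the emphasis on all three trends converging.
```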

Cynthia Breazeal, Associate Professor of Media Arts and Sciences, MIT, drew spontaneous applause from the audience part-way through her talk, by showing a video of one of her socially responsive robots, Leonardo. The video showed Leonardo acting on beliefs about what various humans themselves believed (including beliefs that Leonardo could deduce were false). As Breazeal explained:

  • Up till recently, robotics has been about robots interacting with things (such as helping to manufacture cars)
  • In her work, robotics is about robots interacting with people in order to do things. Because humans are profoundly social, these robots will also have to be profoundly social – they are being designed to relate to humans in psychological terms. Hence the expressions of emotion on Leonardo’s face (and the other body language).

Marshall Brain, founder of “How Stuff Works”, also spoke about robots, and the trend for them to take over work tasks previously done by humans: McDonald’s waitresses, Wal-Mart shop assistants, vehicle drivers, construction workers, teachers…

James Miller, Associate Professor of Economics, Smith College, explicitly addressed the topic of how increasing belief in the likelihood of an oncoming singularity would change people’s investment decisions. Once people realise that, within (say) 20-30 years, the world could be transformed into something akin to paradise, with much greater lifespans and with abundant opportunities for extremely rich experiences, many will take much greater care than before to seek to live to reach that event. Interest in cryonics is likely to boom – since people can reason their bodies will only need to be vitrified for a short period of time, rather than having to trust their descendants to look after them for unknown hundreds of years. People will shun dangerous activities. They’ll also avoid locking money into long-term investments. And they’ll abstain from lengthy training courses (for example, to master a foreign language) if they believe that technology will shortly render as irrelevant all the sweat of that arduous learning.

Not every speaker was optimistic. Well-known author and science journalist John Horgan gave examples of where the progress of science and technology has been, not exponential, but flat:

  • nuclear fusion
  • ending infectious diseases
  • Richard Nixon’s “war on cancer”
  • gene therapy treatments
  • treating mental illness.

Horgan chided advocates of the singularity for their use of “rhetoric that is more appropriate to religion than science” – thereby risking damaging the standing of science at a time when science needs as much public support as it can get.

Ray Kurzweil, author of “The Singularity is Near”, responded to this by agreeing that not every technology progresses exponentially. However, those that become information sciences do experience significant growth. As medicine and health increasingly become digital information sciences, they are experiencing the same effect. Although in the past I’ve thought that Kurzweil sometimes overstates his case, on this occasion I thought he spoke with clarity and restraint, and with good evidence to back up his claims. He also presented updated versions of the graphs from his book. In the book, these graphs tended to stop around 2002. The slides Kurzweil showed at the summit continued up to 2007. It does appear that the rate of progress with information sciences is continuing to accelerate.

Earlier in the day, science fiction author and former maths and computing science professor Vernor Vinge gave his own explanation for this continuing progress:

Around the world, in many fields of industry, there are hundreds of thousands of people who are bringing the singularity closer, through the improvements they’re bringing about in their own fields of research – such as enhanced human-computer interfaces. They mainly don’t realise they are advancing the singularity – they’re not working to an agreed overriding vision for their work. Instead, they’re doing what they’re doing because of the enormous incremental economic plus of their work.

Under questioning by CNBC editor and reporter Bob Pisani, Vinge said that he sticks with the forecast he made many years ago, that the singularity would (“barring major human disasters”) happen by 2030. Vinge also noted that rapidly improving technology made the future very hard to predict with any certainty. “Classic trendline analysis is seriously doomed.” Planning should therefore focus on scenario evaluation rather than trend lines. Perhaps unsurprisingly, Vinge suggested that more forecasters should read science fiction, where scenarios can be developed and explored. (Since I’m midway through reading and enjoying Vinge’s own most recent novel, “Rainbows End” – set in 2025 – I agree!)

Director of Research at the Singularity Institute, Ben Goertzel, described a staircase of potential applications for the “OpenCog” system of “Artificial General Intelligence” he has been developing with co-workers (partially funded by Google, via the Google Summer of Code):

  • Teaching virtual dogs to dance
  • Teaching virtual parrots to talk
  • Nurturing virtual babies
  • Training virtual scientists that can read vast swathes of academic papers on your behalf
  • And more…

Founder and CSO of Innerspace Foundation, Pete Estep, gave perhaps one of the most thought-provoking presentations. The goal of Innerspace is, in short, to improve brain functioning. In more detail, “To establish bi-directional communication between the mind and external storage devices.” Quoting from the FAQ on the Innerspace site:

The IF [Innerspace Foundation] is dedicated to the improvement of human mind and memory. Even when the brain operates at peak performance learning is slow and arduous, and memory is limited and faulty. Unfortunately, other of the brain’s important functions are similarly challenged in our complex modern world. As we age, these already limited abilities and faculties erode and fail. The IF supports and accelerates basic and applied research and development for improvements in these areas. The long-term goal of the foundation is to establish relatively seamless two-way communication between people and external devices possessing clear data storage and computational advantages over the human brain.

Estep explained that he was a singularity agnostic: “it’s beyond my intellectual powers to decide if a singularity within 20 years is feasible”. However, he emphasised that it is evident to him that “the singularity might be near”. And this changes everything. Throughout history, and extending round the world even today, “there have been too many baseless fantasies and unreasonable rationalisations about the desirability of death”. The probable imminence of the singularity will help people to “escape” from these mind-binds – and to take a more vigorous and proactive stance towards planning and actually building desirable new technology. The singularity that Estep desires is one, not of super-powerful machine intelligence, but one of “AI+BCI: AI combined with a brain-computer interface”. This echoed words from robotics pioneer Hans Moravec that Vernor Vinge had reported earlier in the day:

“It’s not a singularity if you are riding the curve. And I intend to ride the curve.”

On the question of how to proactively improve the chances for beneficial technological development, Peter Diamandis spoke outstandingly well. He’s the founder of the X-Prize Foundation. I confess I hadn’t previously realised anything like the scale and the accomplishment of this Foundation. It was an eye-opener – as, indeed, was the whole day.

30 August 2008

Anticipating the singularity

Filed under: Moore's Law, Singularity — David Wood @ 10:05 am

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

The first time I read these words, a chill went down my spine. They were written in 1965 by IJ Good, a British statistician who had studied mathematics at Cambridge University pre-war, worked with Alan Turing and others in the highly secret code-breaking labs at Bletchley Park, and was involved in the creation of the Colossus computer (“the world’s first programmable, digital, electronic, computing device”).

The point where computers become better than humans at generating new computers – or (not quite the same thing) the point where AI becomes better than humans at generating new AI – is nowadays often called the singularity (or, sometimes, “the Technological Singularity”). To my mind, it’s a hugely important topic.

The name “Singularity” was proposed by maths professor and science fiction author Vernor Vinge, writing in 1993:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended…

“When greater-than-human intelligence drives progress, that progress will be much more rapid. In fact, there seems no reason why progress itself would not involve the creation of still more intelligent entities — on a still-shorter time scale…

“From the human point of view this change will be a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control…

“I think it’s fair to call this event a singularity (“the Singularity” for the purposes of this paper). It is a point where our old models must be discarded and a new reality rules. As we move closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown…”

If Vinge’s prediction is confirmed, the Singularity will happen within 30 years of 1993, namely by 2023. (He actually says, in his paper, “I’ll be surprised if this event occurs before 2005 or after 2030”.)

Of course, it’s notoriously hard to predict timescales for future technology. Some things turn out to take a lot longer than expected. AI is a prime example. Progress with AI has frequently turned out to be disappointing.

But not all technology predictions turn out bad. The best technology prediction of all time is probably that by Intel co-founder Gordon Moore. Coincidentally writing in 1965 (like IJ Good mentioned above), Moore noted:

“The complexity for minimum component costs has increased at a rate of roughly a factor of two per year… Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least 10 years. That means by 1975, the number of components per integrated circuit for minimum cost will be 65,000. I believe that such a large circuit can be built on a single wafer…”

For more than forty years, Moore’s Law has held roughly true – with (as revised by Moore himself) the doubling period taking around 24 months instead of 12 months. And it is this persistent growth in computing power that leads other writers – most famously, Ray Kurzweil – to continue to predict the reasonably imminent onset of the singularity. In his 2005 book “The Singularity Is Near: When Humans Transcend Biology”, Kurzweil picks the date 2045.
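As a quick sanity check of what a 24-month doubling period implies – a rough calculation of my own, using round numbers rather than any official Intel figures – doubling every two years for four decades gives a growth factor of about a million, which matches the order-of-magnitude jump from thousands of transistors per chip in the early 1970s to billions today.

```python
# Rough compound-growth check of Moore's Law, using round, illustrative numbers.
DOUBLING_PERIOD_YEARS = 2.0
YEARS_ELAPSED = 40            # roughly the late 1960s to the late 2000s

doublings = YEARS_ELAPSED / DOUBLING_PERIOD_YEARS
growth_factor = 2 ** doublings

print(f"{doublings:.0f} doublings -> a factor of about {growth_factor:,.0f}")
# 20 doublings give a factor of ~1,048,576: a million-fold increase in
# components per chip, consistent with going from thousands of transistors
# in the early 1970s to billions in the late 2000s.
```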

Intel’s present-day CTO, Justin Rattner, reviewed some of Kurzweil’s ideas in his keynote on the future of technology at the Intel Developer Forum in San Francisco on the 21st of August. The presentation was called “Crossing the chasm between humans and machines”.

To check what Justin said, you can view the official Intel video available here. There’s also a brief slide-by-slide commentary at the Singularity Hub site, as well as lots of other web coverage (eg here and here). Justin said that the singularity “might be only a few decades away”, and his talk includes examples of the technological breakthroughs that will plausibly be involved in this grander breakthrough.

Arguably the biggest unknown in the technology involved in superhuman intelligence is software. Merely improving the hardware doesn’t necessarily mean that the software performance increases to match. As has been remarked, “software gets slower, more rapidly than hardware gets faster”. (This is sometimes called “Wirth’s Law”.) If your algorithms scale badly, fixing the hardware will just delay the point where your algorithms fail.
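To illustrate the scaling point with a toy example of my own (not anything from the sources quoted in this post): suppose the hardware gets 100 times faster, but the workload grows tenfold and the algorithm’s cost grows with the square of the input size.  The entire hardware gain disappears.

```python
import math

# Toy illustration of "fixing the hardware will just delay the point where your
# algorithms fail" - all numbers are invented for illustration.
def quadratic_cost(n):
    return n ** 2                     # e.g. a naive O(n^2) algorithm

def loglinear_cost(n):
    return n * math.log2(n)           # e.g. a better O(n log n) algorithm

HARDWARE_SPEEDUP = 100                # assume the new machine is 100x faster
n_old, n_new = 1_000_000, 10_000_000  # but the input has grown tenfold

# Relative running time of the quadratic algorithm, new setup versus old:
print(quadratic_cost(n_new) / HARDWARE_SPEEDUP / quadratic_cost(n_old))   # 1.0
# The entire 100x hardware gain is cancelled out by the larger input.

# The O(n log n) algorithm absorbs the same growth comfortably:
print(loglinear_cost(n_new) / HARDWARE_SPEEDUP / loglinear_cost(n_old))   # ~0.12
```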

So it’s not just the hardware that matters – it’s how that hardware is organised. After all, the brains of Neanderthals were larger than those of humans, but are thought to have been wired up differently to ours. Brain size itself doesn’t necessarily imply intelligence.

But just because software is an unknown, it doesn’t mean that hardware-driven predictions of the onset of the singularity are bound to be over-optimistic. It’s also possible they could be over-pessimistic. It’s even possible that, with the right breakthroughs in software, superhuman intelligence could be supported by present-day hardware. AI researcher Eliezer Yudkowsky of the Singularity Institute reports the result of an interesting calculation made by Geordie Rose, the CTO of D-Wave Systems, concerning software versus hardware progress:

“Suppose you want to factor a 75-digit number. Would you rather have a 2007 supercomputer, IBM’s Blue Gene/L, running an algorithm from 1977, or a 1977 computer, the Apple II, running a 2007 algorithm? Geordie Rose calculated that Blue Gene/L with 1977’s algorithm would take ten years, and an Apple II with 2007’s algorithm would take three years…

“[For exploring new AI breakthroughs] I will say that on anything except a very easy AI problem, I would much rather have modern theory and an Apple II than a 1970’s theory and a Blue Gene.”

Another researcher who puts more emphasis on the potential breakthrough capabilities of the right kind of software, rather than hardware, is Ben Goertzel. Two years ago, he gave a talk entitled “Ten years to the Singularity if we really try.” One year ago, he gave an updated version, “Nine years to the Singularity if we really really try”. Ben suggests that the best place for new AIs to be developed is inside virtual worlds (such as Second Life). He might be right. It wouldn’t be the first time that significant software breakthroughs happened in arenas that mainstream society regards as peripheral or even objectionable.

Even bigger than the question of the plausible timescale of a future technological singularity, is the question of whether we can influence the outcome, to be positive for humanity rather than a disaster. That will be a key topic of the Singularity Summit 2008, which will be held in San Jose on the last Saturday of October.

The speakers at the summit include five of the people I’ve mentioned above:

(And there are 16 other named speakers – including many that I view as truly fascinating thinkers.)

The publicity material for the Singularity Summit 2008 describes the event as follows:

“The Singularity Summit gathers the smartest people around to explore the biggest ideas of our time. Learn where humanity is headed, meet the people leading the way, and leave inspired to create a better world.”

That’s a big claim, but it might just be right.

6 August 2008

Two fallacies on the value of software

Filed under: free software, Highlife entertainment, Moore's Law — David Wood @ 8:55 am

Software is everywhere. Unfortunately, buggy software is everywhere too.

I’m writing this en route to a family holiday in South America – four countries in 15 days. The holiday starts with a BA flight across the Atlantic. At first sight, the onboard “highlife” entertainment system is impressive. My son asks: do they really have all these music CDs and movies available? “Moore’s Law in action” was my complacent reply.

The first sign of trouble was when the flight attendant welcome announcement, along with the usual stuff about “if you sleep, please ensure your fastened seat belt is visible on top of your blanket”, contained a dire warning that no one should try to interact with the video screens in any way while the system was going through its lengthy startup activity.  Otherwise the system would be prone to freezing, or some other malfunction.

It seems the warning was in vain. From my vantage point in the very back row of seats on the plane, as the flight progressed I could see lots of passengers calling over the flight attendants to point out problems with their individual systems. Films weren’t available, touchscreen interactions were random, etc. The attendants tried resetting individual screens, but then announced that, because so many screens were experiencing problems, the whole system would be restarted. And, by the way, it would take 30 minutes to reboot. All passengers would need to keep their hands off the screens throughout that period, even though many tempting buttons advertising features of the entertainment system would be displayed during that time.

One flight attendant forlornly tried to explain the situation to me: “it’s like when you’re starting up a computer, you have to wait until it’s completely ready before you can start using it”.  Well, no. If software draws a button on the screen, it ought to cope with a user doing what comes naturally and pressing that button. That’s one of the very first rules of GUI architecture. In any case, what on earth is the entire system doing, taking 30 minutes to reboot?
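The fix isn’t sophisticated.  Here’s a minimal sketch in Python – purely illustrative, since I have no knowledge of how the actual in-flight system is built – of the obvious defensive pattern: if the backend isn’t ready, ignore or queue the button press; never let an early press wedge the whole system.

```python
from collections import deque

class EntertainmentUI:
    """Toy sketch: a UI that stays safe while the backend is still starting up."""

    def __init__(self):
        self.ready = False
        self.pending = deque()   # presses received before startup completes

    def on_button_press(self, button):
        if not self.ready:
            # Either ignore the press or remember it - never crash or hang.
            self.pending.append(button)
            self.show_message("Still starting up - your selection is queued.")
            return
        self.handle(button)

    def on_startup_complete(self):
        self.ready = True
        while self.pending:
            self.handle(self.pending.popleft())

    def handle(self, button):
        print(f"handling '{button}'")

    def show_message(self, text):
        print(text)

ui = EntertainmentUI()
ui.on_button_press("play_movie")   # pressed too early: queued, not fatal
ui.on_startup_complete()           # the queued press is now serviced
```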

To be fair, BA’s inflight entertainment system is hardly alone in having this kind of defect. I’ve often seen various bizarre technobollocks messages scrolling on screens on the back of aeroplane seats. I also remember a Lufthansa flight in which the software controlling the reclining chairs (I was flying business class on that occasion) was clearly faulty – it would freeze, and all subsequent attempts to adjust the chair position would be ignored. The flight attendants that day let me into the secret that holding down three of the buttons simultaneously for a couple of seconds would forcibly reboot the system. It was a useful piece of knowledge!

And to be fair, when the system does work, it’s great to have in-flight access to so much entertainment and information.

But I draw the following conclusion: Moore’s Law is not enough. Moore’s Law enables enormous amounts of data – and enormous amounts of software – to be stored on increasingly inexpensive storage media. But you need deep and wide-ranging skills in software creation if the resulting complex systems are actually to meet the expectations of reasonable end users. Software development, when done right, is going to remain a high-value-add activity for the foreseeable future.

“Moore’s Law is enough” is the first fallacy about the value of software. Hot on its heels comes a second idea, equally fallacious:

The value of software is declining towards zero.

This second fallacy is wrapped up with a couple of ideas:

  1. The apparent belief of some people that all software ought to be sold free-of-charge
  2. The observation that the price of a fixed piece of software does tend to decline over time.

However, the second observation misses the important fact that the total amount of software is itself rapidly increasing – both in terms of bulk, and in terms of functionality and performance. Multiply one function which is slowly declining (the average price of a fixed piece of software) by another which is booming (the total amount of all software) and you get an answer that refutes the claim that the value of software itself is declining towards zero.
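To spell out the arithmetic with a toy model (all the numbers are invented purely for illustration): if the average price of a piece of software falls by, say, 10% a year, while the amount of valuable software in circulation grows by 25% a year, the total value still grows by 12.5% a year.

```python
# Toy model: total software value = average price per piece x amount of software.
# The rates below are invented purely for illustration.
PRICE_DECLINE_PER_YEAR   = 0.10   # average price of a given piece falls 10% a year
QUANTITY_GROWTH_PER_YEAR = 0.25   # amount of valuable software grows 25% a year

price, quantity = 100.0, 1_000.0  # arbitrary starting values

for year in range(1, 11):
    price    *= (1 - PRICE_DECLINE_PER_YEAR)
    quantity *= (1 + QUANTITY_GROWTH_PER_YEAR)
    print(f"year {year}: total value = {price * quantity:,.0f}")

# Each year the total is multiplied by (1 - 0.10) * (1 + 0.25) = 1.125,
# i.e. it grows 12.5% annually even though every individual price is falling.
```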

Yes, it’s reasonable to expect that individual pieces of software (especially those that have stopped evolving, or which are evolving slowly) will tend to become sold for free. But as new software is made available, and as software keeps on being improved, there’s huge scope for value to be made, and for a portion of that value to be retained by first-rate developers.

Footnote: Even after the BA entertainment system restarted, there were still plenty of problems. Fast-forwarding through a film to try to get to the previous location was a very hit-and-miss affair: there was far too much latency in the system. The team responsible for this system should be hanging their heads in shame. But, alas, they’re in plenty of company.
