
31 December 2010

Welcome 2011 – what will the future hold?

Filed under: aging, futurist, Humanity Plus, intelligence, rejuveneering — David Wood @ 6:42 pm

As 2010 turns into 2011, let me offer some predictions about topics that will increasingly be on people’s minds as 2011 advances.

(Spoiler: these are all topics that will feature as speaker presentations at the Humanity+ UK 2011 conference that I’m organising in London’s Conway Hall on 29th January.  At time of writing, I’m still waiting to confirm one or two more speakers for this event, but registration is already open.)

Apologies for omitting many other key emerging tech-related trends from this list.  If there’s something you care strongly about – and if you live within striking distance of London – you’ll be more than welcome to join the discussion on 29th January!

19 May 2010

Chapter finished: A journey with technology

Five more days have passed, and I’ve completed another chapter draft (see snapshot below) of my proposed new book.

This takes me up to 30% of what I hope to write:

  • I’ve drafted three out of ten planned chapters.
  • The wordcount has reached 15,000, out of a planned total of 50,000.

After this, I plan to dig more deeply into specific technology areas.  I’ll be moving further out of my comfort zone.  First will be “Health”.  Fortuitously, I spent today at an openMIC meeting in Bath, entitled “i-Med: Serious apps for mobile healthcare”.  That provided me with some useful revision!

========

3. A journey with technology

<Snapshot of material whose master copy is kept here>


Here’s the key question I want to start answering in this chapter: how quickly can technology progress in the next few decades?

This is far from being an academic question. At heart, I want to know whether it’s feasible for that progress to be quick enough to provide technological solutions to the calamitous issues and huge opportunities described in the first chapter of this book. The progress must be quick enough, not only for core technological research, but also for productisation of that technology into the hands of billions of consumers worldwide.

For most of this book, I’ll be writing about technologies from an external perspective. I have limited direct experience with, for example, the healthcare industry and the energy industry. What I have to say about these topics will be as, I hope, an intelligent outside observer. But in this chapter, I’m able to adopt an internal perspective, since the primary subject matter is the industry where I worked for more than twenty years: the smartphone industry.

In June 1988, I started work in London at Psion PLC, the UK-based manufacturer of electronic organisers. I joined a small team working on the software for a new generation of mobile computers. In the years that followed, I spent countless long days, long nights and (often) long weekends architecting, planning, writing, integrating, debugging and testing Psion’s software platforms. In due course, Psion’s software would power more than a million PDAs in the “Series 3” family of devices. However, the term “PDA” was unknown in 1988; likewise for phrases like “smartphone”, “palmtop computer”, and “mobile communicator”. The acronym “PDA”, meaning “personal digital assistant”, was coined by Apple in 1992 in connection with their ambitious but flawed “Newton” project – long before anyone conceived of the name “iPhone”.

I first became familiar with the term “smartphone” in 1996, during early discussions with companies interested in using Psion’s “EPOC32” software system in non-PDA devices. After a faltering start, these discussions gathered pace. In June 1998, ten years after I had joined Psion, a group of Psion senior managers took part in the announcement of the formation of a new entity, Symbian Ltd, which had financial backing from the three main mobile phone manufacturers of the era – Ericsson, Motorola, and Nokia. Symbian would focus on the software needs of smartphones. The initial software, along with 150 employees led by a five-man executive team, was contributed by Psion. In the years that followed, I held Symbian executive responsibility, at different times, for Technical Consulting, Partnering, and Research. In due course, sales of devices based on Symbian OS exceeded 250 million units.

In June 2008 – ten more years later, to the day – another sweeping announcement was made. The source code of Symbian OS, along with that of the S60 UI framework and applications from Nokia, would become open source, and would be overseen by a new independent entity, the Symbian Foundation.

My views on the possibilities for radical improvements in technology as a whole are inevitably coloured by my helter-skelter experiences with Psion and Symbian. During these 20+ years of intense projects following close on each other’s heels, I saw at first hand, not only many issues with developing and productising technology, but also many issues in forecasting the development and productisation of technology.

For example, the initial June 1998 business plans for Symbian are noteworthy both for what we got right, and for what we got wrong.

3.1 Successes and shortcomings in predicting the future of smartphones

In June 1998, along with my colleagues on the founding team at Symbian, I strove to foresee how the market for smartphones would unfold in the years ahead. This forecast was important, as it would:

  • Guide our own investment decisions
  • Influence the investment decisions of our partner companies
  • Set the context for decisions by potential employees whether or not to join Symbian (and whether or not to remain with Symbian, once they had joined).

Many parts of our vision turned out correct:

  • Interest grew strongly, both in computers with increased mobility and in mobile phones with increased computing capability.
  • Sales of Symbian-powered mobile devices would, by the end of the first decade of the next century, be measured in 100s of millions.
  • Our phrase, “Smartphones for all”, which initially struck many observers as ridiculous, became commonplace: interest in smartphones stopped being the preserve of a technologically sophisticated minority, and became a mainstream phenomenon.
  • Companies in numerous industries realised that they needed strong mobile offerings, to retain their relevance.
  • Rather than every company developing its own smartphone platform, there were big advantages for companies to collaborate in creating shared standard platforms.
  • The attraction of smartphones grew with the availability of add-on applications that delivered functionality tailored to the needs of individual users.

Over the next decade, a range of new features became increasingly widespread on mobile phones, despite early scepticism:

  • Colour screens
  • Cameras – and video recorders
  • Messaging: SMS, simple email, rich email…
  • Web browsing: Google, Wikipedia, News…
  • Social networking: Facebook, Twitter, blogs…
  • Games – including multiplayer games
  • Maps and location-based services
  • Buying and selling (tickets, vouchers, cash).

By 2010, extraordinarily powerful mobile devices are in widespread use in almost every corner of the planet. An average bystander transported from 1998 to 2010 might well be astonished at the apparently near-magical capabilities of these ubiquitous devices.

On the other hand, many parts of our 1998 vision proved wrong.

First, we failed to foresee many of the companies that would be the most prominent in the smartphone industry by the end of the next decade. In 1998:

  • Apple seemed to be on a declining trajectory.
  • Google consisted of just a few people working in a garage. (Like Symbian, Google was founded in 1998.)
  • Samsung and LG were known to the Symbian team, but we decided not to include them on our initial list of priority sales targets, in view of their low sales figures at the time.

Second, although our predictions of eventual sales figures for Symbian devices were broadly correct – namely 100s of millions – this was the result of two separate mistakes cancelling each other out:

  • We expected to have a higher share of the overall mobile phone market (over 50% – perhaps even approaching 100%).
  • We expected that overall phone market to remain at the level of 100s of millions per annum – we did not imagine it would become as large as a billion per year.

(A smaller-than-expected proportion of a larger-than-expected market worked out at around the same volume of sales.)
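The cancellation is simple arithmetic. Here is a minimal sketch using purely hypothetical figures (these are illustrative assumptions, not Symbian’s actual forecasts or results), showing how a smaller share of a larger market can land on the same unit volume:

```python
# Purely hypothetical figures, for illustration only --
# not Symbian's actual forecasts or sales numbers.

# 1998-style forecast: a dominant share of a mid-sized market.
forecast_share = 0.75                 # assumed platform share
forecast_market = 400_000_000         # assumed handsets sold per year
forecast_units = forecast_share * forecast_market

# Outcome: a smaller share of a much larger market.
actual_share = 0.25
actual_market = 1_200_000_000         # market grew past a billion per year
actual_units = actual_share * actual_market

# The two errors cancel: both products equal 300 million units per year.
print(int(forecast_units), int(actual_units))
```

Any pair of share and market figures whose products coincide would illustrate the same cancellation.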

Third – and probably most significant for drawing wider lessons – we got the timescales significantly wrong. It took considerably longer than we expected for:

  • The first successful smartphones to become available
  • Next generation networks (supporting high-speed mobile data) to be widely deployed
  • Mobile applications to become widespread.

Associated with this, many pre-existing systems remained in place much longer than anticipated, despite our predictions that they would fail to adapt to changing market demands:

  • RIM sold more and more BlackBerries, despite repeated concerns that their in-house software system would become antiquated.
  • The in-house software systems of major phone manufacturers, such as Nokia’s Series 40, likewise survived long past predicted “expiry” dates.

To examine what’s going on, it’s useful to look in more detail at three groups of factors:

  1. Factors accelerating growth in the smartphone market
  2. Factors restricting growth in the smartphone market
  3. Factors that can overcome the restrictions and enable faster growth.

Having reviewed these factors in the case of smartphone technology, I’ll then revisit the three groups of factors, with an eye to general technology.

3.2 Factors accelerating growth in the smartphone market

The first smartphone sales accelerator is decreasing price. As the devices become less expensive, more and more people can afford them. Other things being equal, a desirable piece of consumer electronics with a lower price will sell in greater numbers.

The underlying cost of smartphones has been coming down for several reasons. Improvements in underlying silicon technology mean that manufacturers can pack more transistors into the same space for the same cost, creating more memory and more processing power. There are also various industry scale effects. Companies who work with a mobile platform over a period of time gain the benefit of “practice makes perfect”, learning how to manage the supply chain, select lower price components, and assemble and manufacture their devices at ever lower cost.

A second sales accelerator is increasing reliability. With some exceptions (that have tended to fall by the wayside), smartphones have become more and more reliable. They start faster, have longer battery life, and need fewer resets. As such, they appeal to ordinary people in terms of speed, performance, and robustness.

A third sales accelerator is increasing stylishness. In the early days of smartphones, people would often say, “These smartphones look quite interesting, but they are a bit too big and bulky for my liking: frankly, they look and feel like a brick.” Over time, smartphones became smaller, lighter, and more stylish. In both their hardware and their software, they became more attractive and more desirable.

A fourth sales accelerator is increasing word-of-mouth recommendations. The following sets of people have all learned, from their own experience, good reasons why consumers should buy smartphones:

  • Industry analysts – who write reports that end up influencing a much wider network of people
  • Marketing professionals – who create compelling advertisements that appear on film, print, and web
  • Retail assistants – who are able to highlight attractive functionality in devices, at point of sale
  • Friends and acquaintances – who can be seen using various mobile services and applications, and who frequently sing the praises of specific devices.

This extra word of mouth exists, of course, because of a fifth sales accelerator – the increasing number of useful and/or entertaining mobile services that are available. This includes built-in services as well as downloadable add-on services. More and more individuals learn that mobile services exist which address specific problems they experience. This includes convenient mobile access to banking services, navigation, social networking, TV broadcasts, niche areas of news, corporate databases, Internet knowledgebases, tailored educational material, health diagnostics, and much, much more.

A sixth sales accelerator is increasing ecosystem maturity. The ecosystem is the interconnected network of companies, organisations, and individuals who create and improve the various mobile services and enabling technology. It takes time for this ecosystem to form and to learn how to operate effectively. However, in due course, it forms a pool of resources that is much larger than exists just within the first few companies who developed and used the underlying mobile platform. These additional resources provide, not just a greater numerical quantity of mobile software, but a greater variety of different innovative ideas. Some ecosystem members focus on providing lower cost components, others on providing components with higher quality and improved reliability, and yet others on revolutionary new functionality. Others again provide training, documentation, tools, testing, and so on.

In summary, smartphones are at the heart of a powerful virtuous cycle. Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle. Applications and services which prove their value as add-ons for one generation of smartphones become bundled into the next generation. With this extra built-in functionality, the next generation is intrinsically more attractive, and typically is cheaper too. Developers see an even larger market and increase their efforts to supply software for this market.

3.3 Factors restricting growth in the smartphone market

Decreasing price. Increasing reliability. Increasing stylishness. Increasing word-of-mouth recommendations. Increasingly useful mobile services. Increasing ecosystem maturity. What could stand in the way of these powerful accelerators?

Plenty.

First, there are technical problems with unexpected difficulty. Some problems turn out to be much harder than initially imagined. For example, consider speech recognition, in which a computer can understand spoken input. When Psion planned the Series 5 family of PDAs in the mid-1990s (as successors to the Series 3 family), we had a strong desire to include speech recognition capabilities in the device. Three “dictaphone style” buttons were positioned in a small unit on the outside of the case, so that the device could be used even when the case (a clamshell) was shut. Over-optimistically, we saw speech recognition as a great potential counter to the pen input mechanisms that were receiving lots of press attention at the time, on competing devices like the Apple Newton and the Palm Pilot. We spoke to a number of potential suppliers of voice recognition software, who assured us that suitably high-performing recognition was “just around the corner”. The next versions of their software, expected imminently, would impress us with their accuracy, they said. Alas, we eventually reached the conclusion that the performance was far too unreliable and would remain so for the foreseeable future – even if we went the extra mile on cost, and included the kind of expensive internal microphone that the suppliers recommended. We feared that “normal users” – the target audience for Psion PDAs – would be perplexed by the all-too-frequent inaccuracies in voice recognition. So we took the decision to remove that functionality. In retrospect, it was a good decision. Even ten years later, voice recognition functionality on smartphones generally fell short of user expectations.

Speech recognition is just one example of a deeply hard technical problem on which real progress took much longer than expected. Others include:

  • Preventing smartphone batteries from draining too quickly, given all the processing that takes place on the smartphone
  • Enabling rapid search of all the content on a device, regardless of the application used to create that content
  • Devising a set of application programming interfaces that strike the right balance between power and ease of use, and between openness and security.

Second, there are “chicken-and-egg” coordination problems – sometimes likened to “the prisoner’s dilemma”. New applications and services in a networked marketplace often depend on related changes being coordinated at several different points in the value chain. Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make less sense when viewed individually. For example, successful mobile phones required both networks and handsets. Successful smartphones required new data-enabled networks, new handsets, and new applications. And so on.

Above, I wrote about the potential for “a powerful virtuous cycle”:

Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle.

However, this only works once the various factors are all in place. A new ecosystem needs to be formed. This involves a considerable coordination problem: several different entities need to un-learn old customs, and adopt new ways of operating, appropriate to the new value chain. That can take a lot of time.

Worse – and this brings me to a third problem – many of the key players in a potential new ecosystem have conflicting business models. Perhaps the new ecosystem, once established, will operate with greater overall efficiency, delivering services to customers more reliably than before. However, wherever there are prospects of cost savings, there are companies who potentially lose out – companies who are benefiting from the present high prices. For example, network operators making healthy profits from standard voice services were (understandably) apprehensive about distractions or interference from low-profit data services running over their networks. They were also apprehensive about risks that applications running on their networks would:

  • Enable revenue bypass, with new services such as VoIP and email displacing, respectively, standard voice calls and text messaging
  • Saturate the network with spam
  • Cause unexpected usability problems on handsets, which the user would attribute to the network operator, entailing extra support costs for the operator.

The outcome of these risks of loss of revenue is that ecosystems might fail to form – or, having formed with a certain level of cooperation, might fail to attain deeper levels of cooperation. Vested interests get in the way of overall progress.

A fourth problem is platform fragmentation. The efforts of would-be innovators are spread across numerous different mobile platforms. Instead of a larger ecosystem all pulling in the same direction, the efforts are diffused, with the risk of confusing and misleading participants. Participants think they can re-apply skills and solutions from one mobile product in the context of another, but subtle and unexpected differences cause incompatibilities which can take a lot of time to identify and debug. Instead of collaboration effectively turning 1+1 into 3, confusion turns 1+1 into 0.5.

A fifth problem is poor usability design. Even though a product is powerful, ordinary end users can’t work out how to operate it, or get the best experience from it. They feel alienated by it, and struggle to find their favourite functionality in amongst bewildering masses of layered menu options. A small minority of potential users, known as “technology enthusiasts”, are happy to use the product, despite these usability issues; but they are rare exceptions. As such, the product fails to “cross the chasm” (to use the language of Geoffrey Moore) to the mainstream majority of users.

The sixth problem underlies many of the previous ones: it’s the problem of accelerating complexity. Each individual chunk of new software adds value, but when chunks coalesce in large quantities, chaos can ensue:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window.
  • Smartphone application development may grow in difficulty, as developers need to juggle different programming interfaces and optimisation methods.
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.

3.4 Factors that can overcome the restrictions and enable faster growth

Technical problems with unexpected difficulty. Chicken-and-egg coordination problems. Conflicting business models. Platform fragmentation. Poor usability design. Accelerating complexity. These are all factors that restrict smartphone progress. Without solving these problems, the latent potential of smartphone technology goes unfulfilled. What can be done about them?

At one level, the answer is: look at the companies who are achieving success with smartphones, despite these problems, and copy what they’re doing right. That’s a good starting point, although it risks being led astray by instances where companies have had a good portion of luck on their side, in addition to progress that they merited through their own deliberate actions. (You can’t jump from the observation that company C1 took action A and subsequently achieved market success, to the conclusion that company C2 should also take action A.) It also risks being led astray by instances where companies are temporarily experiencing significant media adulation, but only as a prelude to an unravelling of their market position. (You can’t jump from the observation that company C3 is currently a media darling, to the conclusion that a continuation of what it is currently doing will achieve ongoing product success.) With these caveats in mind, here is the advice that I offer.

The most important factor to overcome these growth restrictions is expertise – expertise in both design and implementation:

  • Expertise in envisioning and designing products that capture end-user attention and which are enjoyable to use again and again
  • Expertise in implementing an entire end-to-end product solution.

The necessary expertise (both design and implementation) spans eight broad areas:

  1. technology – such as blazing fast performance, network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security
  2. ecosystem design – to solve the “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised
  3. business models – identifying new ways in which groups of companies can profit from adopting new technology
  4. community management – encouraging diverse practitioners to see themselves as part of a larger whole, so that they are keen to contribute
  5. user experience – to ensure that the resulting products will be willingly accepted and embraced by “normal people” (as opposed to just early-adopter technology enthusiasts)
  6. agile project management – to avoid excess wasted investment in cases where project goals change part way through (as they inevitably do, due to the uncertain territory being navigated)
  7. lean thinking – including a bias towards practical simplicity, a profound distrust of unnecessary complexity, and a constant desire to identify and deal with bottleneck constraints
  8. system integration – the ability to pull everything together, in a way that honours the core product proposition, and which enables subsequent further evolution.

To be clear, I see these eight areas of expertise as important for all sectors of complex technology development – not just in the smartphone industry.

Expertise isn’t something that just exists in books. It manifests itself:

  • In individual people, whose knowledge spans different domains
  • In teams – where people can help and support each other, playing to everyone’s strengths
  • In tools and processes – which are the smart embodiment of previous generations of expertise, providing a good environment to work out the next generation of expertise.

In all three cases, the expertise needs to be actively nurtured and enhanced. Companies who under-estimate the extent of the expertise they need, or who try to get that expertise on the cheap – or who stifle that expertise under the constraints of mediocre management – are likely to miss out on the key opportunities provided by smartphone technology. (Just because it might appear that a company finds it easy to do various tasks, it does not follow that these tasks are intrinsically easy to carry out. True experts often make hard tasks look simple.)

But even with substantial expertise available and active, it remains essentially impossible to be sure about the timescales for major new product releases:

  • Novel technology problems can take an indeterminate amount of time to solve
  • Even if the underlying technology progresses quickly, the other factors required to create an end-to-end solution can fall foul of numerous unforeseen delays.

In case that sounds like a depressing conclusion, I’ll end this section with three brighter thoughts:

First, if predictability is particularly important for a project, you can increase the chances of your project hitting its schedule by sticking to incremental evolutions of pre-existing solutions. That can take you a long way, even though you’ll reduce the chance of more dramatic breakthroughs.

Second, if you can afford it, you should consider running two projects in parallel – one that sticks to incremental evolution, and another that experiments with more disruptive technology. Then see how they both turn out.

Third, the relationship between “speed of technology progress” and “speed of product progress” is more complex than I’ve suggested. I’ve pointed out that the latter can lag the former, especially where there’s a shortage of expertise in fields such as ecosystem management and the creation of business models. However, sometimes the latter can move faster than the former. That occurs once the virtuous cycle is working well. In that case, the underlying technological progress might be exponential, whilst the productisation progress could become super-exponential.

3.5 Successes and shortcomings in predicting the future of technology

We all know that it’s a perilous task to predict the future of technology. The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars. These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary. Indeed, history is littered with curious and amusing examples of flawed predictions of the future. Popular science fiction fares no better:

  • The TV series “Lost in Space”, which debuted in 1965, featured a manned spacecraft leaving Earth en route for a distant star, Alpha Centauri, on 16 October 1997.
  • Arthur C Clarke’s “2001: A Space Odyssey”, made in 1968, featured a manned spacecraft flight to Jupiter.
  • Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”, coincidentally also first published in 1968, described a world set in 1992 in which androids (robots) are extremely hard to distinguish from humans. (Later editions of the novel moved the date to 2021; the film Blade Runner, which was based on the novel, is set in 2019.)

Forecasts often go wrong when they spot a trend, and then extrapolate it. Projecting trends into the future is a dangerous game:

  • Skyscrapers rapidly increased in height in the early decades of the 20th century. But after the Empire State Building was completed in 1931, the rapid increases stopped.
  • Passenger aircraft rapidly increased in speed in the middle decades of the 20th century. But after Concorde, which flew its maiden flight in 1969, there have been no more increases.
  • Manned space exploration went at what might be called “rocket pace” from the jolt of Sputnik in 1957 up to the sets of footprints on the Moon in 1969-1972, but then came to an abrupt halt. At the time of writing, there are still no confirmed plans for a manned trip to Mars.

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects. Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on. Just because a technology improved in the past, it does not mean there will be sufficient societal motivation to keep on improving it in the future. Technology is not enough. Especially for changes that are complex and demanding, up to six additional criteria need to be satisfied as well:

  1. The technological development has to satisfy a strong human need.
  2. The development has to be possible at a sufficiently attractive price to individual end users.
  3. The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle.
  4. There must be a clear implementation path whereby the eventual version of the technology can be attained through a series of steps that are, individually, easier to achieve.
  5. When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be open (to accept new ideas).
  6. Likewise, the development process must be commercially attractive, or provide some other strong incentive, to encourage the generation of new ideas, and, even more important, to encourage people to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation.

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications. For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son. The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century. The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world! As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As mentioned earlier, an important factor that can allow accelerating technological progress is the establishment of an operational virtuous cycle that provides positive feedback. Here are four more examples:

  1. The first computers were designed on paper and built by hand. Later computers benefited from computer-aided design and computer-aided manufacture. Even later computers benefit from even better computer-aided design and manufacture…
  2. Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allows more complex software to be created more quickly – including more powerful tools…
  3. More powerful hardware enables new software which enables new use cases which demand more innovation in improving the hardware further…
  4. Technology reduces prices which allows better technology to be used more widely, resulting in more people improving the technology…

A well-functioning virtuous cycle makes it more likely that technological progress can continue. But the biggest factor determining whether a difficult piece of progress occurs is often the degree of society’s motivation towards that progress. Investment in ever-faster passenger airlines ceased, because people stopped perceiving that ever-faster airlines were that important. Manned flight to Mars was likewise deemed to be insufficiently important: that’s why it didn’t take place. The kinds of radical technological progress that I discuss in this book are, I believe, all feasible, provided sufficient public motivation is generated and displayed in support of that progress. This includes major enhancements in health, education, clean energy, artificial general intelligence, human autonomy, and human fulfilment. The powerful public motivation will cause society to prioritise developing and supporting the types of rich expertise that are needed to make this technological progress a reality.

3.6 Moore’s Law: A recap

When I started work at Psion, I was given a “green-screen” console terminal, connected to a VAX minicomputer running VMS. That’s how I wrote my first pieces of software for Psion. A short while afterwards, we started using PCs. I remember that the first PC I used had a 20MB hard disk. I also remember being astonished to find that a colleague had a hard disk that was twice as large. What on earth does he do with all that disk space, I wondered. But before long, I had a new PC with a larger hard disk. And then, later, another new one. And so on, throughout my 20+ year career at Psion and Symbian. Each time a new PC arrived, I felt somewhat embarrassed at the apparent excess of computing power it provided – larger disk space, more RAM, faster CPU clock speed, and so on. On leaving Symbian in October 2009, I bought a new laptop for myself, along with an external USB disk drive. That disk drive was two terabytes in size. For roughly the same amount of money (in real terms) that had purchased 20MB of disk storage in 1989, I could now buy a disk that was 100,000 times larger. That’s broadly equivalent to hard disks doubling in size every 15 months over that 20 year period.
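
The arithmetic behind that figure is easy to check. Here’s a minimal sketch in Python (the helper name `doubling_period_months` is mine, not from the text or any library):

```python
import math

def doubling_period_months(growth_ratio, years):
    """Average doubling period, in months, for a quantity that
    grew by growth_ratio over the given number of years."""
    doublings = math.log2(growth_ratio)  # how many doublings the overall ratio implies
    return years * 12 / doublings

# Hard disk example from the text: 20MB (1989) to 2TB (2009) is a 100,000x increase
print(round(doubling_period_months(100_000, 20), 1))  # prints 14.4
```

A 100,000-fold increase is about 16.6 doublings, so the average doubling period comes out at just under the “broadly 15 months” quoted above.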

This repeated doubling of performance, on a fairly regular schedule, is a hallmark of what is often called “Moore’s Law”, following a paper published in 1965 by Gordon Moore (subsequently one of the founders of Intel). It’s easy to find other examples of this exponential trend within the computing industry. University of London researcher Shane Legg has published a chart of the increasing power of the world’s fastest supercomputers, from 1960 to the present day, along with a plausible extension to 2020. This chart measures the “FLOPS” capability of each supercomputer – the number of floating point (maths) operations it can execute in a second. The values move all the way from kiloFLOPS through megaFLOPS, gigaFLOPS, teraFLOPS, and petaFLOPS, and point towards exaFLOPS by 2020. Over sixty years, the performance improves through twelve and a half orders of magnitude, which is more than 40 doublings. This time, the doubling period works out at around 17 months.

Radical futurist Ray Kurzweil often uses the following example:

When I was an MIT undergraduate in 1965, we all shared a computer that took up half a building and cost tens of millions of dollars. The computer in my pocket today [a smartphone] is a million times cheaper and a thousand times more powerful. That’s a billion-fold increase in the amount of computation per dollar since I was a student.

A billion-fold increase consists of 30 doublings – which, spread out over 44 years from 1965 to 2009, gives a doubling period of around 18 months. And to get the full picture of the progress, we should include one more observation alongside the million-fold price improvement and thousand-fold processing power improvement: the 2009 smartphone is about one hundred thousand times smaller than the 1965 mainframe.
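
Kurzweil’s numbers can be checked the same way; this short Python sketch simply redoes his arithmetic:

```python
import math

# Kurzweil's comparison: a 2009 smartphone vs a 1965 mainframe
price_improvement = 1_000_000  # a million times cheaper
power_improvement = 1_000      # a thousand times more powerful

per_dollar = price_improvement * power_improvement  # computation per dollar
doublings = math.log2(per_dollar)
period_months = (2009 - 1965) * 12 / doublings

print(per_dollar)            # 1000000000 - the billion-fold increase
print(round(doublings))      # 30
print(round(period_months))  # 18
```

The two improvement factors multiply to give the billion-fold figure, which is almost exactly 30 doublings; spread over 44 years, that is one doubling roughly every 18 months.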

These steady improvements in computer hardware, spread out over six decades so far, are remarkable, but they’re not the only example of this kind of long-term prodigious increase. Martin Cooper, who has a good claim to be considered the inventor of the mobile phone, has pointed out that the amount of information that can be transmitted over useful radio spectrum has roughly doubled every 30 months since 1897, when Guglielmo Marconi first patented the wireless telegraph:

The rate of improvement in use of the radio spectrum for personal communications has been essentially uniform for 104 years. Further, the cumulative improvement in the effectiveness of personal communications total spectrum utilization has been over a trillion times in the last 90 years, and a million times in the last 45 years.

Smartphones have benefited mightily from both Moore’s Law and Cooper’s Law. Other industries can benefit in a similar way too, to the extent that their progress can be driven by semiconductor-powered information technology, rather than by older branches of technology. As I’ll review in later chapters, there are good reasons to believe that both medicine and energy are on the point of dramatic improvements along these lines. For example, the so-called Carlson curves (named after biologist Rob Carlson) track exponential decreases in the costs of both sequencing (reading) and synthesising (writing) base pairs of DNA. It cost about $10 to sequence a single base pair in 1990, but this had reduced to just 2 cents by 2003 (the date of the completion of the human genome project). That’s 9 doublings in just 13 years – making a doubling period of around 17 months.
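
The Carlson curve figures can be verified in the same spirit; here the “doublings” are halvings of cost:

```python
import math

# Sequencing cost per DNA base pair, as quoted in the text
cost_1990 = 10.0   # dollars
cost_2003 = 0.02   # dollars (2 cents)
years = 2003 - 1990

halvings = math.log2(cost_1990 / cost_2003)  # a 500-fold drop in cost
period_months = years * 12 / halvings

print(round(halvings))       # 9
print(round(period_months))  # 17
```

A 500-fold cost reduction is about 2^9, so nine halvings in thirteen years gives the stated period of roughly 17 months.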

Moore’s Law and Cooper’s Law are far from being mathematically exact. They should not be mistaken for laws of physics, akin to Newton’s Laws or Maxwell’s Laws. Instead, they are empirical observations, with lots of local deviations when progress temporarily goes either faster or slower than the overall average. Furthermore, scientists and researchers need to keep on investing lots of skill, across changing disciplines, to keep the progress occurring. The explanation given on the website of Martin Cooper’s company, ArrayComm, provides useful insight:

How was this improvement in the effectiveness of personal communication achieved? The technological approaches can be loosely categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increase in magnitude of the usable radio frequency spectrum.

How much of the improvement can be attributed to each of these categories? Of the million times improvement in the last 45 years, roughly 25 times were the result of being able to use more spectrum, 5 times can be attributed to the ability to divide the radio spectrum into narrower slices — frequency division. Modulation techniques like FM, SSB, time division multiplexing, and various approaches to spread spectrum can take credit for another 5 times or so. The remaining sixteen hundred times improvement was the result of confining the area used for individual conversations to smaller and smaller areas — what we call spectrum re-use…
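
As a quick sanity check, the four factors in the quoted breakdown really do multiply out to the stated million-fold improvement:

```python
# Improvement factors quoted from the ArrayComm explanation above
more_spectrum  = 25    # able to use more spectrum
freq_division  = 5     # narrower frequency slices
modulation     = 5     # FM, SSB, time division, spread spectrum
spectrum_reuse = 1600  # ever-smaller areas per conversation

total = more_spectrum * freq_division * modulation * spectrum_reuse
print(total)  # 1000000
```

Note that the improvements are multiplicative, not additive: spectrum re-use alone contributes a factor of 1,600, dwarfing the other three combined.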

Cooper suggests that his law can continue to hold until around 2050. Experts at Intel say they can foresee techniques to maintain Moore’s Law for at least another ten years – potentially longer. In assessing the wider implications of these laws, we need to consider three questions:

  1. How much technical runway is left in these laws?
  2. Can the benefits of these laws in principle be applied to transform other industries?
  3. Will wider system effects – as discussed earlier in this chapter – frustrate overall progress in these industries (despite the technical possibilities), or will they in due course even accelerate the underlying technical progress?

My answers to these questions:

  1. Plenty
  2. Definitely
  3. It depends on whether we can educate, motivate, and organise a sufficient critical mass of concerned citizens. The race is on!

>> Next chapter >>

16 April 2010

Mobile Developer TV: riffs on the future of technology

Filed under: Barcelona, futurist, Humanity Plus, YouTube — David Wood @ 3:03 pm

On the last day of  the Mobile World Congress (MWC) industry tradeshow in Barcelona a few weeks ago, Ewan MacLeod of Mobile Industry Review and Rafe Blandford of AllAboutSymbian caught up with me.  They explained:

We’re asking people what they see as the highlights of Mobile World Congress.  Would you mind saying a few words to camera?

I have lots of respect for both Ewan and Rafe, so I was happy to respond.  I expressed a few top-of-mind thoughts about Microsoft Windows Phone 7, the networking opportunities at the event itself, and about the growing interest in embedded connectivity (also known as “machine to machine” communications).  The result is here, as Episode 148 of MobileDeveloperTV.com: “David Wood’s take on MWC“:

As you can see, I had the opportunity to say a few words at the end of the clip about the Humanity+ UK2010 event I’ve been organising.  Once the filming stopped, the three of us continued chatting informally about this topic – which is (of course) a big and fascinating one.  Never one to miss an opportunity, Ewan started filming again. The first question this time was “What films about the future do you like?”  One answer led on to “just one more question” and then to “a final question” and even “a really final question”…

This became episode 149 of  MobileDeveloperTV.com: “David Wood speculates on the future of (mobile) technology“.  Ewan explains:

I grabbed the opportunity to ask David what his top 3 sci-fi movies were. What follows is an absolutely fascinating ‘real-time’ riff from David on where he sees the future going — in terms of technology augmentation — and what to do about the human race becoming far too reliant on technology that may well turn against us. Or that we simply couldn’t do without.

Many thanks to Ewan and Rafe for taking the time to edit and publish this second video, even though it’s some way outside their normal field of coverage!

15 April 2010

Accelerating automation and the future of work

Filed under: AGI, Economics, futurist, Google, politics, regulation, robots — David Wood @ 2:45 am

London is full of pleasant surprises.

Yesterday evening, I travelled to The Book Club in Shoreditch, EC2A, and made my way to the social area downstairs.  “What’s your name?” asked the person at the door.  I gave my name, and in return received a stick-on badge saying

Hi, I’m David.

Talk to me about the future of humanity!

I was impressed.  How do they know I like to talk to people about the future of humanity?

Then I remembered that the whole event I was attending was under the aegis of a newly formed group calling itself “Future Human“.  It was their third meeting, over the course of just a few weeks – but the first I had heard about (and decided to attend).  Everyone’s badge had the same message.  About 120 people crammed into the downstairs room – making it standing room only (since there were only around 60 seats).  Apart from the shortage of seats, the event was well run, with good use of roaming mikes from the floor.

The event started with a quick-fire entertaining presentation by author and sci-fi expert Sam Jordison.  His opening question was blunt:

What can you do that a computer can’t do?

He then listed lots of occupations from the past which technology had rendered obsolete.  Since one of my grandfathers was the village blacksmith, I found a personal resonance with this point.  It will soon be the same for many existing professions, Sam said: computers are becoming better and better at all sorts of tasks which previously would have required creative human input.  Journalism is particularly under threat.  Likewise accountancy.  And so on, and so on.

In general terms, that’s a thesis I agree with.  For example, I anticipate a time before long when human drivers will be replaced by safer robot alternatives.

I quibble with the implication that, as existing jobs are automated, there will be no jobs left for humans to do.  Instead, I see that lots of new occupations will become important.  “Shape of Jobs to Come”, a report (PDF) by Fast Future Research, describes 20 jobs that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

(See the original report for explanations of some of these unusual occupation names!)

In other words, as technology improves to remove existing occupations, new occupations will become significant – occupations that build in unpredictable ways on top of new technology.

But only up to a point.  In the larger picture, I agree with Sam’s point that even these new jobs will quickly come under the scope of rapidly improving automation.  The lifetime of occupations will shorten and shorten.  And people will typically spend fewer hours working each week (on paid tasks).

Is this a worry? Yes, if we assume that we need to work long hours, to justify our existence, or to earn sufficient income to look after our families.  But I disagree with these assumptions. Improved technology, wisely managed, should be able to result, not just in less labour left over for humans to do, but also in great material abundance – plenty of energy, food, and other resources for everyone.  We’ll become able – at last – to spend more of our time on activities that we deeply enjoy.

The panel discussion that followed touched on many of these points. The panellists – Peter Kirwan from Wired, Victor Henning from Mendeley, and Carsten Sorensen and Jannis Kallinikos from the London School of Economics – struck many notes of optimism:

  • We shouldn’t create unnecessary distinctions between “human” and “machine”.  After all, humans are kinds of machines too (“meat machines“);
  • The best kind of intelligence combines human elements and machine elements – in what Google have called “hybrid intelligence“;
  • Rather than worrying about computers displacing humans, we can envisage computers augmenting humans;
  • In case computers become troublesome, we should be able to regulate them, or even to switch them off.

Again, in general terms, these are points I agree with.  However, I believe these tasks will be much harder to accomplish than the panel implied. To that extent, I believe that the panel were too optimistic.

After all, if we can barely regulate rapidly changing financial systems, we’ll surely find it even harder to regulate rapidly changing AI systems.  Before we’ve been able to work out if such-and-such an automated system is an improvement on its predecessors, that system may have caused too many rapid irreversible changes.

Worse, there could be a hard-to-estimate “critical mass” effect.  Rapidly accumulating intelligent automation is potentially akin to accumulating nuclear material until it unexpectedly reaches an irreversible critical mass.  The resulting “super cloud” system will presumably state very convincing arguments to us, for why such and such changes in regulations make great sense.  The result could be outstandingly good – but equally, it could be outstandingly bad.

Moreover, it’s likely to prove very hard to “switch off the Internet” (or “switch off Google”).  We’ll be so dependent on the Internet that we’ll be unable to disconnect it, even though we recognise there are bad consequences.

If all of this happened in slow motion, we would be OK.  We’d be able to review it and debug it in real time.  However, the lesson from the recent economic crisis is that these changes can take place almost too quickly for human governments to intervene.  That’s why we need to ensure, ahead of time, that we have a good understanding of what’s happening.  And that’s why there should be lots more discussions of the sort that took place at Future Human last night.

The final question from the floor raised a great point: why isn’t this whole subject receiving prominence in the current UK general election debates?  My answer: It’s down to those of us who do see the coming problems to ensure that the issues get escalated appropriately.

Footnote: Regular readers will not be surprised if I point out, at this stage, that many of these same topics will be covered in the Humanity+ UK2010 event happening in Conway Hall, Holborn, London, on Saturday 24 April.  The panellists at the Future Human event were good, but I believe that the H+UK speakers will be even better!

31 March 2010

Shorter and sharper: improved video on priorities

Filed under: communications, futurist, Humanity Plus, presentation, YouTube — David Wood @ 1:06 pm

The above video provides context for the Humanity+ UK2010 event happening on 24th April.

It’s the second version of this video.  In the spirit of continuous improvement, this version:

  • Has better audio (I found out how to get my laptop to accept input from a jack mic);
  • Is shorter (it needs to be under 10 minutes in length to be accepted onto YouTube);
  • Has some improved layout and logic.

As a video, it’s still far from perfect!  As you can see, my video creation skills are still rudimentary.  But hopefully people will find the contents interesting.

It’s probably foolhardy of me to try to cover so much material in just 10 minutes.  I’m considering creating a short book on this topic, in order to do fuller justice to these ideas.

Video transcript

In case anyone would prefer a written version of what I said, I append a transcript.  Everyone else can stop reading now.

(Note: this transcript doesn’t match the video exactly, since I ad-libbed here and there.)

My name is David Wood.  I’m going to briefly describe the Humanity+ UK2010 event that will be taking place in London on Saturday 24th April.

As context, let me outline what I’m calling “The Humanity+ Agenda”:

  • This is a proposed set of 20 priorities – 20 items that in my view deserve significantly more attention, analysis, resourcing, and funding, over the coming decade.
  • These priorities are proposed responses to an interlinked set of major challenges that confront society.

The first of these challenges is the threat of environmental catastrophe – lack of clean, sustainable energy and other critical resources.  Second is the threat of economic collapse.  We’re still in the midst of the most serious economic crisis of the last 60 years.  Third is the risk of some fundamentalist terrorists getting their hands on fearsome weapons of mass destruction.  Fourth is a more subtle point: the growing sense of alienation and discontent as individuals all over the world increasingly realise that their own share of possible peak experiences is very limited and transitory.  All this adds up to a radically uncertain future, made all the more challenging due to the need to drastically cut back activities to pay for the ongoing economic crisis.

The single thing that will make the biggest difference to whether we overcome these deep challenges is technology.  Accelerating technology can supply many far-reaching solutions.  But technology cannot stand alone.  Improved technology depends on improved education and improved rationality.  The relationship goes both ways.  There’s another two-way relation with improved health and improved vitality.  Likewise for improved social structure; and for the full expression of human potential.

The 20 priorities fall into these five themes.  These are five areas where there’s already a lot of expenditure – from both government and industry.  But we have to raise our game in each of these areas.  We need to become smarter and more effective in each area.  Rather than “health” I’d like to talk about “super health”, or “health plus”.  Similarly, we need substantially improved education and reasoning ability, substantially improved technology, and substantially improved social structure.  All this will take human experience and capability to a significantly higher level – “Humanity plus”.

So let’s start listing the 20 priorities.  You’ll notice many interconnections.

In the field of Health+, we need to accelerate the progress of preventive medicine.  Fixing medical problems at early stages can be a much more cost effective way of spending a limited health budget.  Healthy individuals contribute to society more, rather than being a drain on its resources.  Going further, the slogan “better than well” should also become a priority.  People with exceptional levels of fitness, strength, perseverance, and vitality, can contribute even more to society.

Anti-aging treatments are an important special case of the previous priorities.  Many diseases are exacerbated because our bodies have accumulated different kinds of damage over the years – which we call “aging”.  Systematically removing or repairing this damage will have many benefits.

Education+ refers to people improving their skillsets and reasoning ability, all throughout their lives.  Behavioural psychology is pointing out many kinds of irrational bias in how all of us reach decisions.  We all need help in identifying and overcoming these biases.

One example is the undue influence that fundamentalist thinking can hold over people – when dogma from “scripture” or “tradition” or a “prophet” overrides the conclusions of rational debate.  The world is, today, too dangerous a place to allow dogma-driven people to hold positions of great power.

An important part of freeing people from limited thinking is to boost education about the status of accelerating technology – covering the opportunities, risks, context, and options.

Another way we can become smarter – and more sociable – is via cognitive enhancement and intelligence augmentation.  This includes drugs that improve our thinking and/or our mood, and silicon accompaniments to our biological brains.  Being connected to the Internet, via the likes of Google and Wikipedia, already boosts our knowledge significantly.

Before long, we could have at our fingertips access to Artificial General Intelligence, whereby computers can provide first class answers to tough questions that previously eluded even the smartest teams of people.  For example, I expect that many cures for diseases will be developed in collaboration with increasingly intelligent silicon super-brains.

That takes us to Technology+, the set of technologies underpinning the other changes I am describing.  Improved robots could provide unmatched precision and manual dexterity, as well as great diligence and power.

Nanotechnology could enable the creation of highly useful new materials, compounds, and tools.  Synthetic biology, in turn, could apply techniques from manufacturing and software to create new biological forms, with huge benefits for health, food, energy, and more.  Research into large-scale clean energy could finally solve our energy sustainability issues.  And underpinning all these technologies should be new generations of ICT – information and communications technologies, especially improvements in software.

But technology requires support from society in order to advance quickly and wisely.  Under the heading “Society+” I identify four priority areas: patent system reform, smart market regulation, the expansion of the domain of collaborative voluntary enterprise, and vibrant democratic involvement and oversight, which enables an inclusive open discussion on the best way to manage the future.

Finally, under the heading “Humanity+” we have three priorities: expansion of human choice and autonomy, developing new ways of measuring human accomplishment – that avoid the well-known drawbacks of purely economic measurements – and “geo-engineering capability”.  I’m reminded of the recent statement by veteran ecologist Stewart Brand: “We are as gods, and HAVE to get good at it”.  It’s a frightening responsibility, but there is no alternative.

In summary, 20 interlinked priority areas in five themes: health+, education+, technology+, society+, and humanity+.  In each case, we must reach new levels of achievement.  Happily, we have in our hands the means to do so.  But let’s not imagine that things will be easy.  The next 10-20 years will probably be the most critical in the history of humanity.

In the midst of great difficulties, we’ll no doubt be sorely tempted by six dangerous distractions.

First is the idea that human progress is somehow inevitable, as if governed by some kind of cosmic law.  Alas, I see nothing pre-determined.  We need to become activists, rather than passive bystanders.

Second is the idea that the free market economy, if set up properly and then left to its own devices, will automatically generate the kinds of improvement in technology and product that I am talking about.  Sorry, although markets have been a powerful force for development over history, they’re far from perfect.

Nature – and evolution by natural selection – is another force which has accomplished a great deal, but which is far from optimal.  Nature is full of horrors as well as beauty.  Humans have been augmenting nature with enhancements from technology from before the beginning of recorded history.  This process absolutely needs to continue.

Risk aversion is another dangerous temptation.  Yet if we do nothing, we’re going to be in significant trouble anyway.  Either way, we can’t avoid risk – we just have to become better at evaluating it and managing it.

Next on this list is religion – any view that all the important answers have already been revealed.  I see religion as akin to several of the other temptations on this list: it has achieved a great deal in the past, but is far from being the sole guide to what we must do next.

Last on this list is humanism – the idea that humans, with our present set of attributes and skills, will be sufficient to build the best possible future environment.  However, present-day humans are no more the end point of progress than were simians – monkeys – or mammals.  In my view, it is only the significantly enhanced humans of the near future who will, collectively, be able to guide society and civilisation to reach our true potential.

We can succeed by progress, not by standing still.  We can succeed by transcending nature with enhanced technology, and by restructuring society in ways more favourable to innovation, collaboration, choice, and participation.

If these ideas strike you as interesting, one way you can continue the discussion is at the Humanity+ UK2010 event, on the 24th of April.  This will be held in Conway Hall, in Holborn, London.  You can register for the event at the website humanityplus dash uk dot com.  There will be 10 speakers, including many of the pioneering thinkers of the modern transhumanist or Humanity+ movement.

  • In the morning, the key speakers are Max More, Anders Sandberg, and Rachel Armstrong.
  • After lunch, the speakers will be Aubrey de Grey, David Pearce, and Amon Twyman.
  • Later in the afternoon, we’ll hear from Natasha Vita-More, David Orban, and Nick Bostrom.

You can find more details on the conference website.  If you’re quick, you may also be able to book one of the few remaining places at the post-event dinner, where all the speakers will be attending.  I hope to see you there.

I look forward to continuing this important discussion!

28 March 2010

A video experiment: 20 priorities

Filed under: communications, futurist, Humanity Plus, presentation, UKH+ — David Wood @ 9:38 am


Video: 20 priorities for the coming decade

The video linked above is my attempt to address several different requirements:

  1. To follow up some ideas about the list of priorities I mentioned previously, tentatively named “The Humanity+ Agenda”;
  2. To find an interesting new way to help publicise the forthcoming (April 24th) “Humanity+ UK2010” event;
  3. To experiment with creating videos, to use for communications purposes, as a complement to textual blog posts.

As you can see, it’s based on Powerpoint – a tool I know well.

What I didn’t appreciate about Powerpoint, before, is the fact that you can embed an audio narrative, to playback automatically as the slides and animations progress.  So that’s what I decided to do.

First time round, I tried to ad lib remarks, as I progressed through the slides, but that didn’t work well.  Next, I wrote down an entire script, and read from that.  The result is a bit flat and jaded in places, and there are a few too many verbal fluffs for my liking.  When I try this again, I’ll set aside more time, and make myself re-do the narration for a slide each time I fluff a few words.

I also hit some bugs (and quirks) when using the “Record narration” features of PowerPoint.  Some of these seem to be known features, but not all:

  • A few seconds of the narration often gets truncated from the end of each slide.  The workaround is to wait three seconds after finishing speaking, before advancing to the next slide;
  • The audio quality for the first slide was very crackly every time, no matter what I tried.  The workaround is to insert an extra “dummy” slide at the beginning, and to discard that slide before publishing;
  • There’s a pair of loud cracks at the start of each slide.  I don’t know any workaround for that;
  • Some of the timing, during playback, is slightly out of synch with what I recorded: animations on screen sometimes happen a few seconds before the accompanying audio stream is ready for them.

I used authorSTREAM as the site to store the presentation.  They offer the following features:

  • Support for playback of presentations containing audio narration;
  • Support for converting the presentation into video format.

The authorSTREAM service looks promising – I expect to use it again!

Footnote: I’ll update this posting shortly, with a copy of the video embedded, rather than linked.  (I still find video embedding to be a bit of a hit-or-miss process…)

15 March 2010

Imagining a world without money

Filed under: Economics, futurist, motivation, politics, Singularity, vision, Zeitgeist — David Wood @ 11:48 am

On Saturday, I attended “London Z Day 2010” – described as

presentations about futurism and technology, the singularity and the current economic landscape, activism and how to get involved…

Around 300 people were present in the Oliver Thompson Lecture Theatre of London’s City University.  That’s testimony to good work by the organisers – the UK chapter of the worldwide “Zeitgeist Movement“.

I liked a lot of what I heard – a vision that advocates greater adoption of:

  • Automation: “Using technology to automate repetitive and tedious tasks leads to efficiency and productivity. It is also socially responsible as people are freed from labor that undermines their intelligence”
  • Artificial intelligence: “machines can take into account more information”
  • The scientific method: “a proven method that has stood the test of time and leads to discovery. Scientific method involves testing, getting feedback from natural world and physical law, evaluation of results, sharing data openly and requirement to replicate the test results”
  • Technological unification: “Monitoring planetary resources is needed in order to create an efficient system, and thus technology should be shared globally”.

I also liked the sense of urgency and activism, to move swiftly from the current unsustainable social and economic frameworks, into a more rational framework.  Frequent references to the work of radical futurists like Ray Kurzweil emphasised the plausibility of rapid change, driven by accelerating technological innovation.  That makes good sense.

I was less convinced by other parts of the Zeitgeist worldview – in particular, its strong “no money” and “no property” messages.

Could a society operate without money?  Speakers from the floor seemed to think that, in a rationally organised society, everyone would be able to freely access all the goods and services they need, rather than having to pay for them.  The earth has plenty of resources, and we just need to look after them in a sensible way.  Money has lots of drawbacks, so we should do without it – so the argument went.

One of the arguments made by a speaker, against a monetary basis of society, was the analysis from the recent book “The Spirit Level: Why More Equal Societies Almost Always Do Better” by Richard Wilkinson and Kate Pickett.  Here’s an excerpt of a review of this book from the Guardian:

We are rich enough. Economic growth has done as much as it can to improve material conditions in the developed countries, and in some cases appears to be damaging health. If Britain were instead to concentrate on making its citizens’ incomes as equal as those of people in Japan and Scandinavia, we could each have seven extra weeks’ holiday a year, we would be thinner, we would each live a year or so longer, and we’d trust each other more.

Epidemiologists Richard Wilkinson and Kate Pickett don’t soft-soap their message. It is brave to write a book arguing that economies should stop growing when millions of jobs are being lost, though they may be pushing at an open door in public consciousness. We know there is something wrong, and this book goes a long way towards explaining what and why.

The authors point out that the life-diminishing results of valuing growth above equality in rich societies can be seen all around us. Inequality causes shorter, unhealthier and unhappier lives; it increases the rate of teenage pregnancy, violence, obesity, imprisonment and addiction; it destroys relationships between individuals born in the same society but into different classes; and its function as a driver of consumption depletes the planet’s resources.

Wilkinson, a public health researcher of 30 years’ standing, has written numerous books and articles on the physical and mental effects of social differentiation. He and Pickett have compiled information from around 200 different sets of data, using reputable sources such as the United Nations, the World Bank, the World Health Organisation and the US Census, to form a bank of evidence against inequality that is impossible to deny.

They use the information to create a series of scatter-graphs whose patterns look nearly identical, yet which document the prevalence of a vast range of social ills. On almost every index of quality of life, or wellness, or deprivation, there is a gradient showing a strong correlation between a country’s level of economic inequality and its social outcomes. Almost always, Japan and the Scandinavian countries are at the favourable “low” end, and almost always, the UK, the US and Portugal are at the unfavourable “high” end, with Canada, Australasia and continental European countries in between.

This has nothing to do with total wealth or even the average per-capita income. America is one of the world’s richest nations, with among the highest figures for income per person, but has the lowest longevity of the developed nations, and a level of violence – murder, in particular – that is off the scale. Of all crimes, those involving violence are most closely related to high levels of inequality – within a country, within states and even within cities. For some, mainly young, men with no economic or educational route to achieving the high status and earnings required for full citizenship, the experience of daily life at the bottom of a steep social hierarchy is enraging…

The anxiety in this book about our current economic system was reflected in anxiety expressed by all the Zeitgeist Movement speakers.  However, the Zeitgeist speakers drew a more radical conclusion.  It’s not just that economic inequalities have lots of bad side effects.  They say it’s money-based economics itself that causes these problems.  And that’s a hard conclusion to swallow.

They don’t argue for reforming the existing economic system.  Rather, they argue for replacing it completely.  Money itself, they say, is the root problem.

The same dichotomy arose time and again during the day.  Speakers highlighted many problems with the way the world currently operates.  But instead of advocating incremental reforms – say, for greater equality, or for oversight of the market – they advocated a more radical transformation: no money, and no property.  What’s more, the audience seemed to lap it all up.

Of course, money has sprung up in countless societies throughout history, as something that allows for a more efficient exchange of resources than simple bartering.  Money provides a handy intermediate currency, enabling more complex transactions of goods and services.

In answer, the Zeitgeist speakers argue that use of technology and artificial intelligence would allow for more sensible planning of these goods and services.  However, this brings to mind the many failures of previous centrally planned economies, such as in Soviet times.  In answer again, the Zeitgeist speakers seem to argue that better artificial intelligence will, this time, make a big difference.  Personally, I’m all in favour of gradually increased application of improved automatic decision systems.  But I remain deeply unconvinced about removing money:

  1. Consumer desires can be very varied.  Some people particularly value musical instruments, others foreign travel, others sports equipment, others specialist medical treatment, and so on.  What’s more, the choices are changing all the time.  Money is a very useful means for people to make their own, individual choices;
  2. A speaker from the floor suggested that everyone would have access to all the medical treatment they needed.  That strikes me as naive: the amount of medical treatment potentially available (and potentially “needed” in different cases) is unbounded;
  3. Money-based systems enable the creation of loans, in which banks lend out more money than they have in their assets; this has downsides but also has been an important spur to growth and development;
  4. What’s more, without the incentive of being able to earn more money, it’s likely that a great deal of technological progress would slow down; many people would cease to work in such a focused and determined way to improve the products their company sells.

For example, the Kurzweil curves showing the projected future improvements in technology – such as increased semiconductor density and computational capacity – will very likely screech to a halt, or dramatically slow down, if money is removed as an incentive.
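To make concrete what such exponential curves imply, here is a minimal sketch; the two-year doubling time is a hypothetical figure for illustration, not a claim about any specific technology:

```python
# Illustrative only: project capacity under an assumed doubling time,
# in the spirit of the Kurzweil curves mentioned above.
def projected_capacity(base: float, years: float, doubling_time: float) -> float:
    """Exponential projection: capacity doubles every `doubling_time` years."""
    return base * 2 ** (years / doubling_time)

# With a hypothetical 2-year doubling time, capacity grows 32x in a decade,
# and roughly a million-fold over 40 years - which is why any slowdown in
# the underlying incentives matters so much to these projections.
print(projected_capacity(1.0, 10, 2.0))   # 32.0
print(projected_capacity(1.0, 40, 2.0))   # 1048576.0
```

Even stretching the doubling time from two years to three cuts the 40-year projection by a factor of more than a thousand, which is the force of the point above.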

So whilst the criticism offered by the Zeitgeist movement is strong, the positive solution they advocate lacks many details.

As Alan Feuer put it, in his New York Times article reviewing last year’s ZDay, “They’ve Seen the Future and Dislike the Present“:

The evening, which began at 7 with a two-hour critique of monetary economics, became by midnight a utopian presentation of a money-free and computer-driven vision of the future, a wholesale reimagination of civilization, as if Karl Marx and Carl Sagan had hired John Lennon from his “Imagine” days to do no less than redesign the underlying structures of planetary life.

Idealism can be a powerful force for positive social change, but can be deeply counterproductive if it’s based on a misunderstanding of what’s possible.  I’ll need a lot more convincing about the details of the zero-money “resource based economy” advocated by Zeitgeist before I could give it any significant support.

I’m a big fan of debating ideas about the future – especially radical and counter-intuitive ideas.  There’s no doubt that, if we are to survive, the future will need to be significantly different from the past.  However, I believe we need to beware the kind of certainty that some of the Zeitgeist speakers showed.  The Humanity+ UK 2010 conference, to be held in London on 24th April, will be an opportunity to review many different ideas about the best actions needed to create a social environment more conducive to enabling the full human potential.

Footnote: an official 86 page PDF “THE ZEITGEIST MOVEMENT – OBSERVATIONS AND RESPONSES: Activist Orientation Guide” is available online.

The rapid growth of the Zeitgeist Movement has clearly benefited from popular response to two movies, “Zeitgeist, the Movie” (released in 2007) and “Zeitgeist: Addendum” (released in 2008).  Both these movies have gone viral.  There’s a great deal in each of these movies that makes me personally uncomfortable.  However, one lesson is simply that well-made movies can do a great deal to spread a message.

For an interesting online criticism of some of the Zeitgeist Movement’s ideas, see “Zeitgeist Addendum: The Review” by Stefan Molyneux from Freedomain Radio.

10 March 2010

Speaking in Oxford: Far beyond smartphones

Filed under: disruption, futurist, Oxford — David Wood @ 9:32 am

Tomorrow evening (Thursday 11th March) I’ll be speaking in the Saskatchewan Room of Exeter College, Oxford, starting at 7pm.

I’ll be helping to lead a discussion at the recently formed “Oxford Transhumanists” group.

The event is described as follows on Facebook:

Far beyond smartphones

A transhumanist view of where the accelerating pace of technology is taking us

Technological improvements in fields such as semiconductors, software, AI, nanotech, and synthetic biology, over the next 20 years, open opportunities for radical changes in the human condition – at both the personal and societal levels.

Are these prospects a fantasy, or something to be feared, or something to be embraced?

This talk provides an introduction to disruptive but deeply important concepts such as artificial general intelligence, human rejuvenation engineering, intelligence augmentation, exponentially accelerated change, and the technological singularity.

These concepts involve large potential downsides as well as large potential upsides. It’s critical that we anticipate these issues ahead of time.

During the meeting, there will be plenty of opportunity to raise questions and to contribute to the debate.

In case you happen to be near Oxford that evening, and the above topics interest you, feel free to join the meeting!

2 March 2010

Major new challenges to receive X PRIZE backing

Filed under: catalysts, challenge, futurist, Genetic Engineering, Google, grants, innovation, medicine, space — David Wood @ 7:16 pm

The X PRIZE Foundation has an audacious vision.

On its website, it describes itself as follows:

The X PRIZE Foundation is an educational nonprofit organization whose mission is to create radical breakthroughs for the benefit of humanity thereby inspiring the formation of new industries, jobs and the revitalization of markets that are currently stuck

The foundation can point to the success of its initial prize, the Ansari X PRIZE.  This was a $10M prize to be awarded to the first non-government organization to launch a reusable manned spacecraft into space twice within two weeks.  This prize was announced in May 1996 and was won in October 2004, by the Tier One project using the experimental spaceplane SpaceShipOne.

Other announced prizes are driving research and development in a number of breakthrough areas:


The Archon X PRIZE will award $10 million to the first privately funded team to accurately sequence 100 human genomes in just 10 days.  Renowned physicist Stephen Hawking explains his support for this prize:

You may know that I am suffering from what is known as Amyotrophic Lateral Sclerosis (ALS), or Lou Gehrig’s Disease, which is thought to have a genetic component to its origin. It is for this reason that I am a supporter of the $10M Archon X PRIZE for Genomics to drive rapid human genome sequencing. This prize and the resulting technology can help bring about an era of personalized medicine. It is my sincere hope that the Archon X PRIZE for Genomics can help drive breakthroughs in diseases like ALS at the same time that future X PRIZEs for space travel help humanity to become a galactic species.

The Google Lunar X PRIZE is a $30 million competition for the first privately funded team to send a robot to the moon, travel 500 meters and transmit video, images and data back to the Earth.  Peter Diamandis, Chairman and CEO of the X PRIZE Foundation, provided some context in a recent Wall Street Journal article:

Government agencies have dominated space exploration for three decades. But in a new plan unveiled in President Barack Obama’s 2011 budget earlier this month, a new player has taken center stage: American capitalism and entrepreneurship. The plan lays the foundation for the future Google, Cisco and Apple of space to be born, drive job creation and open the cosmos for the rest of us.

Two fundamental realities now exist that will drive space exploration forward. First, private capital is seeing space as a good investment, willing to fund individuals who are passionate about exploring space, for adventure as well as profit. What was once affordable only by nations can now be lucrative, public-private partnerships.

Second, companies and investors are realizing that everything we hold of value—metals, minerals, energy and real estate—are in near-infinite quantities in space. As space transportation and operations become more affordable, what was once seen as a wasteland will become the next gold rush. Alaska serves as an excellent analogy. Once thought of as “Seward’s Folly” (Secretary of State William Seward was criticized for overpaying the sum of $7.2 million to the Russians for the territory in 1867), Alaska has since become a billion-dollar economy.

The same will hold true for space. For example, there are millions of asteroids of different sizes and composition flying throughout space. One category, known as S-type, is composed of iron, magnesium silicates and a variety of other metals, including cobalt and platinum. An average half-kilometer S-type asteroid is worth more than $20 trillion.

Technology is reaching a critical point. Moore’s Law has given us exponential growth in computing technology, which has led to exponential growth in nearly every other technological industry. Breakthroughs in rocket propulsion will allow us to go farther, faster and more safely into space…

The Progressive Automotive X PRIZE seeks “to inspire a new generation of viable, safe, affordable and super fuel efficient vehicles that people want to buy“.  $10 million in prizes will be awarded in September 2010 to the teams that win a rigorous stage competition for clean, production-capable vehicles that exceed 100 MPG energy equivalent (MPGe).  Over 40 teams from 11 countries are currently entered in the competition.

Forthcoming new X PRIZEs

The best may still be to come.

It now appears that a series of new X PRIZEs is about to be announced.  CNET News writer Daniel Terdiman reports a fascinating interview with Peter Diamandis, in his article “X Prize group sets sights on next challenges (Q&A)“.

The article is well worth reading in its entirety.  Here are just a few highlights:

On May 15, at a gala fundraising event to be held at George Lucas’ Letterman Digital Arts Center in San Francisco, X Prize Foundation Chairman and CEO Peter Diamandis, along with Google founders Larry Page and Sergey Brin, and “Avatar” director James Cameron, will unveil their five-year vision for the famous awards…

The foundation …  is focusing on several potential new prizes that could change the world of medicine, oceanic exploration, and human transport.

The first is the so-called AI Physician X Prize, which will go to a team that designs an artificial intelligence system capable of providing a diagnosis equal to or better than 10 board-certified doctors.

The second is the Autonomous Automobile X Prize, which will go to the first team to design a car that can beat a top-seeded driver in a Gran Prix race.

The third would go to a team that can generate an organ from a terminal patient’s stem cells, transplant the organ [a lung, liver, or heart] into the patient, and have them live for a year.

And the fourth would reward a team that can design a deep-sea submersible capable of allowing scientists to gather complex data on the ocean floor

Diamandis explains the potential outcome of the AI Physician Prize:

The implications of that are that by the end of 2013, 80 percent of the world’s populace will have a cell phone, and anyone with a cell phone can call this AI and the AI can speak Mandarin, Spanish, Swahili, any language, and anyone with a cell phone then has medical advice at the level of a board certified doctor, and it’s a game change.

Even more new X PRIZEs

Details of the process of developing new X PRIZEs are described on the foundation’s website.  New X PRIZEs are guided by the following principles:

  • We create prizes that result in innovation that makes a lasting impact. Although a technological breakthrough can meet this criterion, so do prizes which inspire teams to use existing technologies, knowledge or systems in more effective ways.
  • Prizes are designed to generate popular interest through the prize lifecycle: enrollment, competition, attempts (both successful and unsuccessful) and post-completion…
  • Prizes result in financial leverage. For a prize to be successful, it should generate outside investment from competitors at least 5-10 times the prize purse size. The greater the leverage, the better return on investment for our prize donors and partners.
  • Prizes incorporate both elements of technological innovation as well as successful “real world” deployment. An invention which is too costly or too inconvenient to deploy widely will not win a prize.
  • Prizes engage multidisciplinary innovators which would otherwise be unlikely to tackle the problems that the prize is designed to address.

The backing provided to the foundation by the Google founders and by James Cameron provides added momentum to what is already an inspirational initiative and a great catalyst for innovation.

31 January 2010

In praise of hybrid AI

Filed under: AGI, brain simulation, futurist, IA, Singularity, UKH+, uploading — David Wood @ 1:28 am

In his presentation last week at the UKH+ meeting “The Friendly AI Problem: how can we ensure that superintelligent AI doesn’t terminate us?“, Roko Mijic referred to the plot of the classic 1956 science fiction film “Forbidden Planet“.

The film presents a mystery about events at a planet, Altair IV, situated 16 light years from Earth:

  • What force had destroyed nearly every member of a previous spacecraft visiting that planet?
  • And what force had caused the Krell – the original inhabitants of Altair IV – to be killed overnight, whilst at the peak of their technological powers?

A 1950s film might be expected to point a finger of blame at nuclear weapons, or other weapons of mass destruction.  However, the problem turned out to be more subtle.  The Krell had created a machine that magnified the power of their own thinking, and acted on that thinking.  So the Krell all became even more intelligent and more effective than before.  You may wonder, what’s the problem with that?

A 2002 Steven B. Harris article in the Skeptic magazine, “The return of the Krell Machine: Nanotechnology, the Singularity, and the Empty Planet Syndrome“, takes up the explanation, quoting from the film.  The Krell had created:

a big machine, 8000 cubic miles of klystron relays, enough power for a whole population of creative geniuses, operated by remote control – operated by the electromagnetic impulses of individual Krell brains… In return, that machine would instantaneously project solid matter to any point on the planet. In any shape or color they might imagine. For any purpose…! Creation by pure thought!

But … the Krell forgot one deadly danger – their own subconscious hate and lust for destruction!

And so, those mindless beasts of the subconscious had access to a machine that could never be shut down! The secret devil of every soul on the planet, all set free at once, to loot and maim! And take revenge… and kill!

Researchers at the Singularity Institute for Artificial Intelligence (SIAI) – including Roko – give a lot of thought to the general issue of unintended consequences of amplifying human intelligence.  Here are two ways in which this amplification could go disastrously wrong:

  1. As in the Forbidden Planet scenario, this amplification could unexpectedly magnify feelings of ill-will and negativity – feelings which humans sometimes manage to suppress, but which can still exert strong influence from time to time;
  2. The amplification could magnify principles that generally work well in the usual context of human thought, but which can have bad consequences when taken to extremes.

As an example of the second kind, consider the general principle that a free market economy of individuals and companies who pursue an enlightened self-interest frequently produces goods that improve overall quality of life (in addition to generating income and profits).  However, magnifying this principle is likely to result in occasional disastrous economic crashes.  A system of computers that were programmed to maximise income and profits for their owners could, therefore, end up destroying the economy.  (This example is taken from the book “Beyond AI: Creating the Conscience of the Machine” by J. Storrs Hall.  See here for my comments on other ideas from that book.)

Another example of the second kind: a young, fast-rising leader within an organisation may be given more and more responsibility, on account of his or her brilliance, only for that brilliance to subsequently push the organisation towards failure if the general “corporate wisdom” is increasingly neglected.  Likewise, there is the risk of a new supercomputer impressing human observers (politicians, scientists, and philosophers alike, amongst others) by the brilliance of its initial recommendations for changes in the structure of human society.  But if operating safeguards are removed (or disabled – perhaps at the instigation of the supercomputer itself) we could find that the machine’s apparent brilliance results in disastrously bad decisions in unforeseen circumstances.  (Hmm, I can imagine various writers calling for the “deregulation of the supercomputer”, in order to increase the income and profit it generates – similar to the way that many people nowadays are still resisting any regulation of the global financial system.)

That’s an argument for being very careful to avoid abdicating human responsibility for the oversight and operation of computers.  Even if we think we have programmed these systems to observe and apply human values, we can’t be sure of the consequences when these systems gain more and more power.

However, as our computer systems increase their speed and sophistication, it’s likely to prove harder and harder for comparatively slow-brained humans to be able to continue meaningfully cross-checking and monitoring the arguments raised by the computer systems in favour of specific actions.  It’s akin to humans trying to teach apes calculus, in order to gain approval from apes for how much thrust to apply in a rocket missile system targeting a rapidly approaching earth-threatening meteorite.  The computers may well decide that there’s no time to try to teach us humans the deeply complex theory that justifies whatever urgent decision they want to take.

And that’s a statement of the deep difficulty facing any “Friendly AI” program.

There are, roughly speaking, five possible ways people can react to this kind of argument.

The first response is denial – people say that there’s no way that computers will reach the level of general human intelligence within the foreseeable future.  In other words, this whole discussion is seen as being a fantasy.  However, it comes down to a question of probability.  Suppose you’re told that there’s a 10% chance that the airplane you’re about to board will explode high in the sky, with you in it.  10% isn’t a high probability, but since the outcome is so drastic, you would probably decide this is a risk you need to avoid.  Even if there’s only a 1% chance of the emergence of computers with human-level intelligence in (say) the next 20 years, it’s something that deserves serious further analysis.
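The reasoning here is just expected-value arithmetic; a toy sketch, with all numbers hypothetical:

```python
# Toy expected-loss arithmetic: a low-probability, high-impact risk can still
# dominate a decision. All numbers here are hypothetical illustrations.
def expected_loss(probability: float, impact: float) -> float:
    """Probability-weighted cost of an outcome."""
    return probability * impact

# A 1% chance of a catastrophic outcome (1,000,000 arbitrary loss units)
# carries far more expected loss than a 50% chance of a minor one (100 units).
rare_catastrophe = expected_loss(0.01, 1_000_000)  # 10000.0
common_nuisance = expected_loss(0.50, 100)         # 50.0
print(rare_catastrophe > common_nuisance)  # True
```

On this logic, even a small probability of human-level AI emerging deserves analysis effort proportional to the size of the outcome, not to the size of the probability.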

The second response is to seek to stop all research into AI, by appeal to a general “precautionary principle” or similar.  This response is driven by fear.  However, any such ban would need to apply worldwide, and would surely be difficult to police.  It’s too hard to draw the boundary between “safe computer science” and “potentially unsafe computer science” (the latter being research that could increase the probability of the emergence of computers with human-level intelligence).

The third response is to try harder to design the right “human values” into advanced computer systems.  However, as Roko argued in his presentation, there is enormous scope for debating what these right values are.  After all, society has been arguing over human values since the beginning of recorded history.  Existing moral codes probably all have greater or lesser degrees of internal tension or contradiction.  In this context, the idea of “Coherent Extrapolated Volition” has been proposed:

Our coherent extrapolated volition is our choices and the actions we would collectively take if we knew more, thought faster, were more the people we wished we were, and had grown up closer together.

As noted in the Wikipedia article on Friendly Artificial Intelligence,

Eliezer Yudkowsky believes a Friendly AI should initially seek to determine the coherent extrapolated volition of humanity, with which it can then alter its goals accordingly. Many other researchers believe, however, that the collective will of humanity will not converge to a single coherent set of goals even if “we knew more, thought faster, were more the people we wished we were, and had grown up closer together.”

A fourth response is to adopt emulation rather than design as the key principle for obtaining computers with human-level intelligence.  This involves the idea of “whole brain emulation” (WBE), with a low-level copy of a human brain.  The idea is sometimes also called “uploads” since the consciousness of the human brain may end up being uploaded onto the silicon emulation.

Oxford philosopher Anders Sandberg reports on his blog how a group of Singularity researchers reached a joint conclusion, at a workshop in October following the Singularity Summit, that WBE was a safer route to follow than designing AGI (Artificial General Intelligence):

During the workshop afterwards we discussed a wide range of topics. Some of the major issues were: what are the limiting factors of intelligence explosions? What are the factual grounds for disagreeing about whether the singularity may be local (self-improving AI program in a cellar) or global (self-improving global economy)? Will uploads or AGI come first? Can we do anything to influence this?

One surprising discovery was that we largely agreed that a singularity due to emulated people… has a better chance given current knowledge than AGI of being human-friendly. After all, it is based on emulated humans and is likely to be a broad institutional and economic transition. So until we think we have a perfect friendliness theory we should support WBE – because we could not reach any useful consensus on whether AGI or WBE would come first. WBE has a somewhat measurable timescale, while AGI might crop up at any time. There are feedbacks between them, making it likely that if both happens it will be closely together, but no drivers seem to be strong enough to really push one further into the future. This means that we ought to push for WBE, but work hard on friendly AGI just in case…

However, it seems to me that the above “Forbidden Planet” argument identifies a worry with this kind of approach.  Even an apparently mild and deeply humane person might be playing host to “secret devils” – “their own subconscious hate and lust for destruction”.  Once the emulated brain starts running on more powerful hardware, goodness knows what these “secret devils” might do.

In view of the drawbacks of each of these four responses, I end by suggesting a fifth.  Rather than pursuing an artificial intelligence which would run separately from a human intelligence, we should explore the creation of hybrid intelligence.  Such a system involves making humans smarter at the same time as the computer systems become smarter.  The primary source for this increased human smartness is closer links with the ever-improving computer systems.

In other words, rather than just talking about AI – Artificial Intelligence – we should be pursuing IA – Intelligence Augmentation.

For a fascinating hint about the benefits of hybrid AI, consider the following extract from a recent article by former world chess champion Garry Kasparov:

In chess, as in so many things, what computers are good at is where humans are weak, and vice versa. This gave me an idea for an experiment. What if instead of human versus machine we played as partners? My brainchild saw the light of day in a match in 1998 in León, Spain, and we called it “Advanced Chess.” Each player had a PC at hand running the chess software of his choice during the game. The idea was to create the highest level of chess ever played, a synthesis of the best of man and machine.

Although I had prepared for the unusual format, my match against the Bulgarian Veselin Topalov, until recently the world’s number one ranked player, was full of strange sensations. Having a computer program available during play was as disturbing as it was exciting. And being able to access a database of a few million games meant that we didn’t have to strain our memories nearly as much in the opening, whose possibilities have been thoroughly catalogued over the years. But since we both had equal access to the same database, the advantage still came down to creating a new idea at some point…

Even more notable was how the advanced chess experiment continued. In 2005, the online chess-playing site Playchess.com hosted what it called a “freestyle” chess tournament in which anyone could compete in teams with other players or computers. Normally, “anti-cheating” algorithms are employed by online sites to prevent, or at least discourage, players from cheating with computer assistance. (I wonder if these detection algorithms, which employ diagnostic analysis of moves and calculate probabilities, are any less “intelligent” than the playing programs they detect.)

Lured by the substantial prize money, several groups of strong grandmasters working with several computers at the same time entered the competition. At first, the results seemed predictable. The teams of human plus machine dominated even the strongest computers. The chess machine Hydra, which is a chess-specific supercomputer like Deep Blue, was no match for a strong human player using a relatively weak laptop. Human strategic guidance combined with the tactical acuity of a computer was overwhelming.

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

The terminology “Hybrid Intelligence” was used in a recent presentation at the University of Washington by Google’s VP of Research & Special Initiatives, Alfred Z. Spector.  My thanks to John Pagonis for sending me a link to a blog post by Greg Linden which in turn provided commentary on Al Spector’s talk:

What was unusual about Al’s talk was his focus on cooperation between computers and humans to allow both to solve harder problems than they might be able to otherwise.

Starting at 8:30 in the talk, Al describes this as a “virtuous cycle” of improvement using people’s interactions with an application, allowing optimizations and features like learning to rank, personalization, and recommendations that might not be possible otherwise.

Later, around 33:20, he elaborates, saying we need “hybrid, not artificial, intelligence.” Al explains, “It sure seems a lot easier … when computers aren’t trying to replace people but to help us in what we do. Seems like an easier problem …. [to] extend the capabilities of people.”

Al goes on to say the most progress on very challenging problems (e.g. image recognition, voice-to-text, personalized education) will come from combining several independent, massive data sets with a feedback loop from people interacting with the system. It is an “increasingly fluid partnership between people and computation” that will help both solve problems neither could solve on their own.
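To make the “virtuous cycle” idea concrete, here is a minimal, hypothetical sketch of a ranker whose ordering improves as people interact with it: each click feeds back into the statistics used by the next ranking.  This is an illustrative toy of my own, not Google’s actual learning-to-rank machinery; the class and scoring scheme are invented for the example.

```python
# Toy illustration of the feedback loop Spector describes: user interactions
# (clicks) flow back into the ranking, so the system improves through use.
from collections import defaultdict

class FeedbackRanker:
    def __init__(self, items):
        # Static base score (standing in for, say, text relevance); equal here.
        self.base = {item: 1.0 for item in items}
        self.clicks = defaultdict(int)
        self.impressions = defaultdict(int)

    def rank(self):
        # Blend the base score with the observed click-through rate.
        def score(item):
            shown = self.impressions[item]
            ctr = self.clicks[item] / shown if shown else 0.0
            return self.base[item] + ctr
        return sorted(self.base, key=score, reverse=True)

    def record(self, shown_items, clicked_item):
        # Each human interaction updates the data the next ranking uses.
        for item in shown_items:
            self.impressions[item] += 1
        self.clicks[clicked_item] += 1

ranker = FeedbackRanker(["a", "b", "c"])
for _ in range(5):
    results = ranker.rank()
    ranker.record(results, "c")   # users consistently prefer "c"
print(ranker.rank()[0])           # "c" rises to the top
```

Neither the people nor the program does this alone: the humans supply judgement (which result is actually useful) and the computation supplies scale (aggregating those judgements over every interaction), which is the partnership the talk describes.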

I’ve got more to say about Al Spector’s talk – but I’ll save that for another day.

Footnote: Anders Sandberg is one of the confirmed speakers for the Humanity+ UK 2010 event happening in London on 24th April.  His chosen topic has several overlaps with what I’ve discussed above.

