dw2

19 May 2010

Chapter finished: A journey with technology

Five more days have passed, and I’ve completed another chapter draft (see snapshot below) of my proposed new book.

This takes me up to 30% of what I hope to write:

  • I’ve drafted three out of ten planned chapters.
  • The word count has reached 15,000, out of a planned total of 50,000.

After this, I plan to dig more deeply into specific technology areas.  I’ll be moving further out of my comfort zone.  First will be “Health”.  Fortunately, I spent today at an openMIC meeting in Bath, entitled “i-Med: Serious apps for mobile healthcare”.  That provided me with some useful revision!

========

3. A journey with technology

<Snapshot of material whose master copy is kept here>

Here’s the key question I want to start answering in this chapter: how quickly can technology progress in the next few decades?

This is far from being an academic question. At heart, I want to know whether it’s feasible for that progress to be quick enough to provide technological solutions to the calamitous issues and huge opportunities described in the first chapter of this book. The progress must be quick enough, not only for core technological research, but also for productisation of that technology into the hands of billions of consumers worldwide.

For most of this book, I’ll be writing about technologies from an external perspective. I have limited direct experience with, for example, the healthcare industry and the energy industry. What I have to say about these topics will be as, I hope, an intelligent outside observer. But in this chapter, I’m able to adopt an internal perspective, since the primary subject matter is the industry where I worked for more than twenty years: the smartphone industry.

In June 1988, I started work in London at Psion PLC, the UK-based manufacturer of electronic organisers. I joined a small team working on the software for a new generation of mobile computers. In the years that followed, I spent countless long days, long nights and (often) long weekends architecting, planning, writing, integrating, debugging and testing Psion’s software platforms. In due course, Psion’s software would power more than a million PDAs in the “Series 3” family of devices. However, the term “PDA” was unknown in 1988; likewise for phrases like “smartphone”, “palmtop computer”, and “mobile communicator”. The acronym “PDA”, meaning “personal digital assistant”, was coined by Apple in 1992 in connection with their ambitious but flawed “Newton” project – long before anyone conceived of the name “iPhone”.

I first became familiar with the term “smartphone” in 1996, during early discussions with companies interested in using Psion’s “EPOC32” software system in non-PDA devices. After a faltering start, these discussions gathered pace. In June 1998, ten years after I had joined Psion, a group of Psion senior managers took part in the announcement of the formation of a new entity, Symbian Ltd, which had financial backing from the three main mobile phone manufacturers of the era – Ericsson, Motorola, and Nokia. Symbian would focus on the software needs of smartphones. The initial software, along with 150 employees led by a five-person executive team, was contributed by Psion. In the years that followed, I held Symbian executive responsibility, at different times, for Technical Consulting, Partnering, and Research. In due course, sales of devices based on Symbian OS exceeded 250 million.

In June 2008 – ten more years later, to the day – another sweeping announcement was made. The source code of Symbian OS, along with that of the S60 UI framework and applications from Nokia, would become open source, and would be overseen by a new independent entity, the Symbian Foundation.

My views on the possibilities for radical improvements in technology as a whole are inevitably coloured by my helter-skelter experiences with Psion and Symbian. During these 20+ years of intense projects following close on each other’s heels, I saw at first hand, not only many issues with developing and productising technology, but also many issues in forecasting the development and productisation of technology.

For example, the initial June 1998 business plans for Symbian are noteworthy both for what we got right, and for what we got wrong.

3.1 Successes and shortcomings in predicting the future of smartphones

In June 1998, along with my colleagues on the founding team at Symbian, I strove to foresee how the market for smartphones would unfold in the years ahead. This forecast was important, as it would:

  • Guide our own investment decisions
  • Influence the investment decisions of our partner companies
  • Set the context for decisions by potential employees about whether or not to join Symbian (and whether or not to remain with Symbian, once they had joined).

Many parts of our vision turned out correct:

  • Interest grew strongly in computers with increased mobility, and in mobile phones with increased computing capability.
  • Sales of Symbian-powered mobile devices would, by the end of the first decade of the next century, be measured in 100s of millions.
  • Our phrase, “Smartphones for all”, which initially struck many observers as ridiculous, became commonplace: interest in smartphones stopped being the preserve of a technologically sophisticated minority, and became a mainstream phenomenon.
  • Companies in numerous industries realised that they needed strong mobile offerings, to retain their relevance.
  • Rather than every company developing its own smartphone platform, there were big advantages for companies to collaborate in creating shared standard platforms.
  • The attraction of smartphones grew with the availability of add-on applications that delivered functionality tailored to the needs of individual users.

Over the next decade, a range of new features became increasingly widespread on mobile phones, despite early scepticism:

  • Colour screens
  • Cameras – and video recorders
  • Messaging: SMS, simple email, rich email…
  • Web browsing: Google, Wikipedia, News…
  • Social networking: Facebook, Twitter, blogs…
  • Games – including multiplayer games
  • Maps and location-based services
  • Buying and selling (tickets, vouchers, cash).

By 2010, extraordinarily powerful mobile devices are in widespread use in almost every corner of the planet. An average bystander transported from 1998 to 2010 might well be astonished at the apparently near-magical capabilities of these ubiquitous devices.

On the other hand, many parts of our 1998 vision proved wrong.

First, we failed to foresee many of the companies that would be the most prominent in the smartphone industry by the end of the next decade. In 1998:

  • Apple seemed to be on a declining trajectory.
  • Google consisted of just a few people working in a garage. (Like Symbian, Google was founded in 1998.)
  • Samsung and LG were known to the Symbian team, but we decided not to include them on our initial list of priority sales targets, in view of their low sales figures at the time.

Second, although our predictions of eventual sales figures for Symbian devices were broadly correct – namely 100s of millions – this was the result of two separate mistakes cancelling each other out:

  • We expected to have a higher share of the overall mobile phone market (over 50% – perhaps even approaching 100%).
  • We expected that overall phone market to remain at the level of 100s of millions per annum – we did not imagine it would become as large as a billion per year.

(A smaller-than-expected proportion of a larger-than-expected market worked out at around the same volume of sales.)
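The arithmetic of the two cancelling mistakes can be sketched in a few lines. The specific percentages and market sizes below are my own illustrative assumptions (the chapter only gives rough ranges), chosen to show how a halved market share and a doubled market size net out to the same sales volume:

```python
# Hypothetical figures for illustration only: the 1998 plan is modelled as
# expecting a ~50% share of a ~500m/year phone market; the actual outcome
# is modelled as a ~25% share of a ~1bn/year market.
expected_share, expected_market = 0.50, 500_000_000
actual_share, actual_market = 0.25, 1_000_000_000

expected_sales = expected_share * expected_market
actual_sales = actual_share * actual_market

# Both work out to the same volume: 250 million devices per year.
print(int(expected_sales), int(actual_sales))
```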

Third – and probably most significant for drawing wider lessons – we got the timescales significantly wrong. It took considerably longer than we expected for:

  • The first successful smartphones to become available
  • Next generation networks (supporting high-speed mobile data) to be widely deployed
  • Mobile applications to become widespread.

Associated with this, many pre-existing systems remained in place much longer than anticipated, despite our predictions that they would be unable to adapt to changing market demands:

  • RIM sold more and more BlackBerries, despite repeated concerns that their in-house software system would become antiquated.
  • The in-house software systems of major phone manufacturers, such as Nokia’s Series 40, likewise survived long past predicted “expiry” dates.

To examine what’s going on, it’s useful to look in more detail at three groups of factors:

  1. Factors accelerating growth in the smartphone market
  2. Factors restricting growth in the smartphone market
  3. Factors that can overcome the restrictions and enable faster growth.

Having reviewed these factors in the case of smartphone technology, I’ll then revisit the three groups of factors, with an eye to general technology.

3.2 Factors accelerating growth in the smartphone market

The first smartphone sales accelerator is decreasing price. As the devices become less expensive, more and more people can afford them. Other things being equal, a desirable piece of consumer electronics that costs less will sell more.

The underlying cost of smartphones has been coming down for several reasons. Improvements in silicon technology mean that manufacturers can pack more transistors into the same area for the same cost, creating more memory and more processing power. There are also various industry scale effects. Companies who work with a mobile platform over a period of time gain the benefit of “practice makes perfect”, learning how to manage the supply chain, select lower price components, and assemble and manufacture their devices at ever lower cost.

A second sales accelerator is increasing reliability. With some exceptions (that have tended to fall by the wayside), smartphones have become more and more reliable. They start faster, have longer battery life, and need fewer resets. As such, they appeal to ordinary people in terms of speed, performance, and robustness.

A third sales accelerator is increasing stylishness. In the early days of smartphones, people would often say, “These smartphones look quite interesting, but they are a bit too big and bulky for my liking: frankly, they look and feel like a brick.” Over time, smartphones became smaller, lighter, and more stylish. In both their hardware and their software, they became more attractive and more desirable.

A fourth sales accelerator is increasing word of mouth recommendations. The following sets of people have all learned, from their own experience, good reasons why consumers should buy smartphones:

  • Industry analysts – who write reports that end up influencing a much wider network of people
  • Marketing professionals – who create compelling advertisements that appear on film, print, and web
  • Retail assistants – who are able to highlight attractive functionality in devices, at point of sale
  • Friends and acquaintances – who can be seen using various mobile services and applications, and who frequently sing the praises of specific devices.

This extra word of mouth exists, of course, because of a fifth sales accelerator – the increasing number of useful and/or entertaining mobile services that are available. This includes built-in services as well as downloadable add-on services. More and more individuals learn that mobile services exist which address specific problems they experience. This includes convenient mobile access to banking services, navigation, social networking, TV broadcasts, niche areas of news, corporate databases, Internet knowledgebases, tailored educational material, health diagnostics, and much, much more.

A sixth sales accelerator is increasing ecosystem maturity. The ecosystem is the interconnected network of companies, organisations, and individuals who create and improve the various mobile services and enabling technology. It takes time for this ecosystem to form and to learn how to operate effectively. However, in due course, it forms a pool of resources that is much larger than exists just within the first few companies who developed and used the underlying mobile platform. These additional resources provide, not just a greater numerical quantity of mobile software, but a greater variety of different innovative ideas. Some ecosystem members focus on providing lower cost components, others on providing components with higher quality and improved reliability, and yet others on revolutionary new functionality. Others again provide training, documentation, tools, testing, and so on.

In summary, smartphones are at the heart of a powerful virtuous cycle. Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle. Applications and services which prove their value as add-ons for one generation of smartphones become bundled into the next generation. With this extra built-in functionality, the next generation is intrinsically more attractive, and typically is cheaper too. Developers see an even larger market and increase their efforts to supply software for this market.

3.3 Factors restricting growth in the smartphone market

Decreasing price. Increasing reliability. Increasing stylishness. Increasing word of mouth recommendations. Increasingly useful mobile services. Increasing ecosystem maturity. What could stand in the way of these powerful accelerators?

Plenty.

First, there are technical problems with unexpected difficulty. Some problems turn out to be much harder than initially imagined. For example, consider speech recognition, in which a computer can understand spoken input. When Psion planned the Series 5 family of PDAs in the mid 1990s (as successors to the Series 3 family), we had a strong desire to include speech recognition capabilities in the device. Three “dictaphone style” buttons were positioned in a small unit on the outside of the case, so that the device could be used even when the case (a clamshell) was shut. Over-optimistically, we saw speech recognition as a potential great counter to the pen input mechanisms that were receiving lots of press attention at the time, on competing devices like the Apple Newton and the Palm Pilot. We spoke to a number of potential suppliers of voice recognition software, who assured us that suitably high-performing recognition was “just around the corner”. The next versions of their software, expected imminently, would impress us with its accuracy, they said. Alas, we eventually reached the conclusion that the performance was far too unreliable and would remain so for the foreseeable future – even if we went the extra mile on cost, and included the kind of expensive internal microphone that the suppliers recommended. We feared that “normal users” – the target audience for Psion PDAs – would be perplexed by the all-too-frequent inaccuracies in voice recognition. So we took the decision to remove that functionality. In retrospect, it was a good decision. Even ten years later, voice recognition functionality on smartphones generally fell short of user expectations.

Speech recognition is just one example of a deeply hard technical problem, that turned out to take much longer than expected to make real progress. Others include:

  • Avoiding smartphone batteries being drained too quickly by all the processing that takes place on the smartphone
  • Enabling rapid search of all the content on a device, regardless of the application used to create that content
  • Devising a set of application programming interfaces which strike the right balance between power and ease of use, and between openness and security.

Second, there are “chicken-and-egg” coordination problems – situations sometimes likened to “the prisoner’s dilemma”. New applications and services in a networked marketplace often depend on related changes being coordinated at several different points in the value chain. Although the outcome would be good for everyone if all players kept on investing in making the required changes, these changes make less sense when viewed individually. For example, successful mobile phones required both networks and handsets. Successful smartphones required new data-enabled networks, new handsets, and new applications. And so on.

Above, I wrote about the potential for “a powerful virtuous cycle”:

Improved phones, enhanced networks, novel applications and services, increasingly savvy users, excited press coverage – all these factors drive yet more progress elsewhere in the cycle.

However, this only works once the various factors are all in place. A new ecosystem needs to be formed. This involves a considerable coordination problem: several different entities need to un-learn old customs, and adopt new ways of operating, appropriate to the new value chain. That can take a lot of time.

Worse – and this brings me to a third problem – many of the key players in a potential new ecosystem have conflicting business models. Perhaps the new ecosystem, once established, will operate with greater overall efficiency, delivering services to customers more reliably than before. However, wherever there are prospects of cost savings, there are companies who potentially lose out – companies who are benefiting from the present high prices. For example, network operators making healthy profits from standard voice services were (understandably) apprehensive about distractions or interference from low-profit data services running over their networks. They were also apprehensive about risks that applications running on their networks would:

  • Enable revenue bypass, with new services such as VoIP and email displacing, respectively, standard voice calls and text messaging
  • Saturate the network with spam
  • Cause unexpected usability problems on handsets, which the user would attribute to the network operator, entailing extra support costs for the operator.

The outcome of these risks of loss of revenue is that ecosystems might fail to form – or, having formed with a certain level of cooperation, might fail to attain deeper levels of cooperation. Vested interests get in the way of overall progress.

A fourth problem is platform fragmentation. The efforts of would-be innovators are spread across numerous different mobile platforms. Instead of a larger ecosystem all pulling in the same direction, the efforts are diffused, with the risk of confusing and misleading participants. Participants think they can re-apply skills and solutions from one mobile product in the context of another, but subtle and unexpected differences cause incompatibilities which can take a lot of time to identify and debug. Instead of collaboration effectively turning 1+1 into 3, confusion turns 1+1 into 0.5.

A fifth problem is poor usability design. Even though a product is powerful, ordinary end users can’t work out how to operate it, or get the best experience from it. They feel alienated by it, and struggle to find their favourite functionality in amongst bewildering masses of layered menu options. A small minority of potential users, known as “technology enthusiasts”, are happy to use the product, despite these usability issues; but they are rare exceptions. As such, the product fails to “cross the chasm” (to use the language of Geoffrey Moore) to the mainstream majority of users.

The sixth problem underlies many of the previous ones: it’s the problem of accelerating complexity. Each individual chunk of new software adds value, but when chunks coalesce in large quantities, chaos can ensue:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window.
  • Smartphone application development may grow in difficulty, as developers need to juggle different programming interfaces and optimisation methods.
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.

3.4 Factors that can overcome the restrictions and enable faster growth

Technical problems with unexpected difficulty. Chicken-and-egg coordination problems. Conflicting business models. Platform fragmentation. Poor usability design. Accelerating complexity. These are all factors that restrict smartphone progress. Without solving these problems, the latent potential of smartphone technology goes unfulfilled. What can be done about them?

At one level, the answer is: look at the companies who are achieving success with smartphones, despite these problems, and copy what they’re doing right. That’s a good starting point, although it risks being led astray by instances where companies have had a good portion of luck on their side, in addition to progress that they merited through their own deliberate actions. (You can’t jump from the observation that company C1 took action A and subsequently achieved market success, to the conclusion that company C2 should also take action A.) It also risks being led astray by instances where companies are temporarily experiencing significant media adulation, but only as a prelude to an unravelling of their market position. (You can’t jump from the observation that company C3 is currently a media darling, to the conclusion that a continuation of what it is currently doing will achieve ongoing product success.) With these caveats in mind, here is the advice that I offer.

The most important factor to overcome these growth restrictions is expertise – expertise in both design and implementation:

  • Expertise in envisioning and designing products that capture end-user attention and which are enjoyable to use again and again
  • Expertise in implementing an entire end-to-end product solution.

The necessary expertise (both design and implementation) spans eight broad areas:

  1. technology – such as blazing fast performance, network interoperability, smart distribution of tasks across multiple processors, power management, power harvesting, and security
  2. ecosystem design – to solve the “chicken and egg” scenarios where multiple parts of a compound solution all need to be in place, before the full benefits can be realised
  3. business models – identifying new ways in which groups of companies can profit from adopting new technology
  4. community management – encouraging diverse practitioners to see themselves as part of a larger whole, so that they are keen to contribute
  5. user experience – to ensure that the resulting products will be willingly accepted and embraced by “normal people” (as opposed just to early adopter technology enthusiasts)
  6. agile project management – to avoid excess wasted investment in cases where project goals change part way through (as they inevitably do, due to the uncertain territory being navigated)
  7. lean thinking – including a bias towards practical simplicity, a profound distrust of unnecessary complexity, and a constant desire to identify and deal with bottleneck constraints
  8. system integration – the ability to pull everything together, in a way that honours the core product proposition, and which enables subsequent further evolution.

To be clear, I see these eight areas of expertise as important for all sectors of complex technology development – not just in the smartphone industry.

Expertise isn’t something that just exists in books. It manifests itself:

  • In individual people, whose knowledge spans different domains
  • In teams – where people can help and support each other, playing to everyone’s strengths
  • In tools and processes – which are the smart embodiment of previous generations of expertise, providing a good environment to work out the next generation of expertise.

In all three cases, the expertise needs to be actively nurtured and enhanced. Companies who under-estimate the extent of the expertise they need, or who try to get that expertise on the cheap – or who stifle that expertise under the constraints of mediocre management – are likely to miss out on the key opportunities provided by smartphone technology. (Just because it might appear that a company finds it easy to do various tasks, it does not follow that these tasks are intrinsically easy to carry out. True experts often make hard tasks look simple.)

But even with substantial expertise available and active, it remains essentially impossible to be sure about the timescales for major new product releases:

  • Novel technology problems can take an indeterminate amount of time to solve
  • Even if the underlying technology progresses quickly, the other factors required to create an end-to-end solution can fall foul of numerous unforeseen delays.

In case that sounds like a depressing conclusion, I’ll end this section with three brighter thoughts:

First, if predictability is particularly important for a project, you can increase your chances of your project hitting its schedule, by sticking to incremental evolutions of pre-existing solutions. That can take you a long way, even though you’ll reduce the chance of more dramatic breakthroughs.

Second, if you can afford it, you should consider running two projects in parallel – one that sticks to incremental evolution, and another that experiments with more disruptive technology. Then see how they both turn out.

Third, the relationship between “speed of technology progress” and “speed of product progress” is more complex than I’ve suggested. I’ve pointed out that the latter can lag the former, especially where there’s a shortage of expertise in fields such as ecosystem management and the creation of business models. However, sometimes the latter can move faster than the former. That occurs once the virtuous cycle is working well. In that case, the underlying technological progress might be exponential, whilst the productisation progress could become super-exponential.
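The distinction between exponential technology progress and super-exponential product progress can be made concrete with a toy model of my own devising (none of the parameters below come from the chapter): let technology capability grow by a fixed percentage each period, and let product progress in each period be boosted in proportion to the current technology level. The product’s period-on-period growth factor then rises over time – the signature of super-exponential growth.

```python
# Toy model (illustrative assumptions, not data from the chapter):
# - technology improves 10% per period, i.e. plain exponential growth
# - the virtuous cycle couples product progress to the technology level,
#   so the product's growth factor itself keeps increasing
tech, product = 1.0, 1.0
tech_rate = 0.10   # assumed per-period technology improvement
coupling = 0.05    # assumed strength of the virtuous-cycle feedback

growth_factors = []
for _ in range(30):
    tech *= 1 + tech_rate            # exponential: constant growth factor
    factor = 1 + coupling * tech     # product growth factor rises with tech
    product *= factor
    growth_factors.append(factor)

# Technology's growth factor is constant; the product's keeps climbing.
print(growth_factors[0], growth_factors[-1])
```

The point of the sketch is qualitative: once the feedback is in place, the product curve bends upward faster than the underlying technology curve.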

3.5 Successes and shortcomings in predicting the future of technology

We all know that it’s a perilous task to predict the future of technology. The mere fact that a technology can be conceived is no guarantee that it will happen.

If I think back thirty-something years to my days as a teenager, I remember being excited to read heady forecasts about a near-future world featuring hypersonic jet airliners, nuclear fusion reactors, manned colonies on the Moon and Mars, extended human lifespans, control over the weather and climate, and widespread usage of environmentally friendly electric cars. These technology forecasts all turned out, in retrospect, to be embarrassing rather than visionary. Indeed, history is littered with curious and amusing examples of flawed predictions of the future. Popular science fiction fares no better:

  • The TV series “Lost in space”, which debuted in 1965, featured a manned spacecraft leaving Earth en route for a distant star, Alpha Centauri, on 16 October 1997.
  • Arthur C Clarke’s “2001: A Space Odyssey”, released in 1968, featured a manned spacecraft flight to Jupiter.
  • Philip K Dick’s novel “Do Androids Dream of Electric Sheep?”, coincidentally also first published in 1968, described a world set in 1992 in which androids (robots) are extremely hard to distinguish from humans. (Later editions of the novel changed the date to 2021 – the date adopted by the film Blade Runner, which was based on the novel.)

Forecasts often go wrong when they spot a trend, and then extrapolate it. Projecting trends into the future is a dangerous game:

  • Skyscrapers rapidly increased in height in the early decades of the 20th century. But after the Empire State Building was completed in 1931, the rapid increases stopped.
  • Passenger aircraft rapidly increased in speed in the middle decades of the 20th century. But after Concorde, which flew its maiden flight in 1969, there have been no more increases.
  • Manned space exploration went at what might be called “rocket pace” from the jolt of Sputnik in 1957 up to the sets of footprints on the Moon in 1969-1972, but then came to an abrupt halt. At the time of writing, there are still no confirmed plans for a manned trip to Mars.

With the advantage of hindsight, it’s clear that many technology forecasts have over-emphasised technological possibility and under-estimated the complications of wider system effects. Just because something is technically possible, it does not mean it will happen, even though technology enthusiasts earnestly cheer it on. Just because a technology improved in the past, it does not mean there will be sufficient societal motivation to keep on improving it in the future. Technology is not enough. Especially for changes that are complex and demanding, up to six additional criteria need to be satisfied as well:

  1. The technological development has to satisfy a strong human need.
  2. The development has to be possible at a sufficiently attractive price to individual end users.
  3. The outcome of the development has to be sufficiently usable, that is, not requiring prolonged learning or disruptive changes in lifestyle.
  4. There must be a clear implementation path whereby the eventual version of the technology can be attained through a series of steps that are, individually, easier to achieve.
  5. When bottlenecks arise in the development process, sufficient amounts of fresh new thinking must be brought to bear on the central problems – that is, the development process must be open (to accept new ideas).
  6. Likewise, the development process must be commercially attractive, or provide some other strong incentive, to encourage the generation of new ideas, and, even more important, to encourage people to continue to search for ways to successfully execute their ideas; after all, execution is the greater part of innovation.

Interestingly, whereas past forecasts of the future have often over-estimated the development of technology as a whole, they have frequently under-estimated the progress of two trends: computer miniaturisation and mobile communications. For example, some time around 1997 I was watching a repeat of the 1960s “Thunderbirds” TV puppet show with my son. The show, about a family of brothers devoted to “international rescue” using high-tech machinery, was set around the turn of the century. The plot denouement of this particular episode was the shocking existence of a computer so small that it could (wait for it) be packed into a suitcase and transported around the world! As I watched the show, I took from my pocket my Psion Series 5 PDA and marvelled at it – a real-life example of a widely available computer more powerful yet more miniature than that foreseen in the programme.

As mentioned earlier, an important factor that can allow accelerating technological progress is the establishment of an operational virtuous cycle that provides positive feedback. Here are four more examples:

  1. The first computers were designed on paper and built by hand. Later computers benefited from computer-aided design and computer-aided manufacture. Even later computers benefit from even better computer-aided design and manufacture…
  2. Software creates and improves tools (including compilers, debuggers, profilers, high-level languages…) which in turn allows more complex software to be created more quickly – including more powerful tools…
  3. More powerful hardware enables new software which enables new use cases which demand more innovation in improving the hardware further…
  4. Technology reduces prices which allows better technology to be used more widely, resulting in more people improving the technology…

A well-functioning virtuous cycle makes it more likely that technological progress can continue. But the biggest factor determining whether a difficult piece of progress occurs is often the degree of society’s motivation towards that progress. Investment in ever-faster passenger airliners ceased because people stopped perceiving that ever-faster airliners were that important. Manned flight to Mars was likewise deemed insufficiently important: that’s why it hasn’t taken place. The kinds of radical technological progress that I discuss in this book are, I believe, all feasible, provided sufficient public motivation is generated and displayed in support of that progress. This includes major enhancements in health, education, clean energy, artificial general intelligence, human autonomy, and human fulfilment. That powerful public motivation will cause society to prioritise developing and supporting the kinds of rich expertise that are needed to make this technological progress a reality.

3.6 Moore’s Law: A recap

When I started work at Psion, I was given a “green-screen” console terminal, connected to a VAX minicomputer running VMS. That’s how I wrote my first pieces of software for Psion. A short while afterwards, we started using PCs. I remember that the first PC I used had a 20MB hard disk. I also remember being astonished to find that a colleague had a hard disk that was twice as large. What on earth does he do with all that disk space, I wondered. But before long, I had a new PC with a larger hard disk. And then, later, another new one. And so on, throughout my 20+ year career in Psion and Symbian. Each time a new PC arrived, I felt somewhat embarrassed at the apparent excess of computing power it provided – larger disk space, more RAM, faster CPU clock speed, etc. On leaving Symbian in October 2009, I bought a new laptop for myself, along with an external USB disk drive. That disk drive was two terabytes in size. For roughly the same amount of money (in real terms) that had purchased 20MB of disk storage in 1989, I could now buy a disk that was 100,000 times larger. That’s broadly equivalent to hard disks doubling in size every 15 months over that 20-year period.

This repeated doubling of performance, on a fairly regular schedule, is a hallmark of what is often called “Moore’s Law”, following a paper published in 1965 by Gordon Moore (subsequently one of the founders of Intel). It’s easy to find other examples of this exponential trend within the computing industry. University of London researcher Shane Legg has published a chart of the increasing power of the world’s fastest supercomputers, from 1960 to the present day, along with a plausible extension to 2020. This chart measures the “FLOPS” capability of each supercomputer – the number of floating point (maths) operations it can execute in a second. The values move all the way from kiloFLOPS through megaFLOPS, gigaFLOPS, teraFLOPS, and petaFLOPS, and point towards exaFLOPS by 2020. Over sixty years, the performance improves through twelve and a half orders of magnitude, which is more than 40 doublings. This time, the doubling period works out at around 17 months.

Radical futurist Ray Kurzweil often uses the following example:

When I was an MIT undergraduate in 1965, we all shared a computer that took up half a building and cost tens of millions of dollars. The computer in my pocket today [a smartphone] is a million times cheaper and a thousand times more powerful. That’s a billion-fold increase in the amount of computation per dollar since I was a student.

A billion-fold increase consists of 30 doublings – which, spread out over 44 years from 1965 to 2009, gives a doubling period of around 18 months. And to get the full picture of the progress, we should include one more observation alongside the million-fold price improvement and thousand-fold processing power improvement: the 2009 smartphone is about one hundred thousand times smaller than the 1965 mainframe.
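
The doubling periods quoted above all follow from the same piece of arithmetic: if capability grows by a total factor G over Y years, the doubling period is 12·Y divided by log₂(G) months. A minimal Python sketch, using the growth figures quoted in the text:

```python
import math

def doubling_period_months(growth_factor, years):
    """Months per doubling, given total growth over a span of years."""
    doublings = math.log2(growth_factor)
    return years * 12 / doublings

# Hard disks: 20MB (1989) to 2TB (2009) is a 100,000-fold increase over 20 years
print(round(doubling_period_months(100_000, 20), 1))   # ~14.4 – close to the "15 months" quoted

# Supercomputers: twelve and a half orders of magnitude over 60 years
print(round(doubling_period_months(10 ** 12.5, 60), 1))  # ~17.3 months

# Kurzweil: a billion-fold increase in computation per dollar over 44 years
print(round(doubling_period_months(10 ** 9, 44), 1))   # ~17.7 months
```

All three trends land within a few months of each other, which is why commentators often speak loosely of an 18-month doubling for “Moore’s Law” in general.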

These steady improvements in computer hardware, spread out over six decades so far, are remarkable, but they’re not the only example of this kind of long-term prodigious increase. Martin Cooper, who has a good claim to be considered the inventor of the mobile phone, has pointed out that the amount of information that can be transmitted over useful radio spectrum has roughly doubled every 30 months since 1897, when Guglielmo Marconi first patented the wireless telegraph:

The rate of improvement in use of the radio spectrum for personal communications has been essentially uniform for 104 years. Further, the cumulative improvement in the effectiveness of personal communications total spectrum utilization has been over a trillion times in the last 90 years, and a million times in the last 45 years

Smartphones have benefited mightily from both Moore’s Law and Cooper’s Law. Other industries can benefit in a similar way too, to the extent that their progress can be driven by semiconductor-powered information technology, rather than by older branches of technology. As I’ll review in later chapters, there are good reasons to believe that both medicine and energy are on the point of dramatic improvements along these lines. For example, the so-called Carlson curves (named after biologist Rob Carlson) track exponential decreases in the costs of both sequencing (reading) and synthesising (writing) base pairs of DNA. It cost about $10 to sequence a single base pair in 1990, but by 2003 (the date of the completion of the human genome project) this had fallen to just 2 cents. That’s 9 doublings of cost-effectiveness in just 13 years – making a doubling period of around 17 months.

Moore’s Law and Cooper’s Law are far from being mathematically exact. They should not be mistaken for laws of physics, akin to Newton’s laws of motion or Maxwell’s equations. Instead, they are empirical observations, with lots of local deviations when progress temporarily goes either faster or slower than the overall average. Furthermore, scientists and researchers need to keep on investing lots of skill, across changing disciplines, to keep the progress occurring. The explanation given on the website of Martin Cooper’s company, ArrayComm, provides useful insight:

How was this improvement in the effectiveness of personal communication achieved? The technological approaches can be loosely categorized as:

  • Frequency division
  • Modulation techniques
  • Spatial division
  • Increase in magnitude of the usable radio frequency spectrum.

How much of the improvement can be attributed to each of these categories? Of the million times improvement in the last 45 years, roughly 25 times were the result of being able to use more spectrum, 5 times can be attributed to the ability to divide the radio spectrum into narrower slices — frequency division. Modulation techniques like FM, SSB, time division multiplexing, and various approaches to spread spectrum can take credit for another 5 times or so. The remaining sixteen hundred times improvement was the result of confining the area used for individual conversations to smaller and smaller areas — what we call spectrum re-use…
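
Cooper’s decomposition can be sanity-checked with a few lines of arithmetic: the four factors he lists should multiply back to the million-fold total, and a 30-month doubling sustained over the century since 1897 should indeed exceed a trillion-fold. A quick sketch, using the figures quoted above:

```python
# Cooper's decomposition of the million-fold gain over 45 years
factors = {
    "more usable spectrum": 25,
    "frequency division (narrower slices)": 5,
    "modulation techniques (FM, SSB, TDM, spread spectrum)": 5,
    "spectrum re-use (smaller and smaller cells)": 1600,
}
total = 1
for gain in factors.values():
    total *= gain
print(total)  # 1000000 -- the "million times" improvement

# A 30-month doubling sustained over the 104 years since Marconi's patent
doublings = 104 * 12 / 30   # = 41.6 doublings
print(f"{2 ** doublings:.1e}")  # ~3.3e12 -- "over a trillion times"
```

Notably, spectrum re-use dominates: it accounts for more gain than the other three factors combined, which is why the cellular architecture (ever-smaller cells) mattered more than any single advance in radio engineering.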

Cooper suggests that his law can continue to hold until around 2050. Experts at Intel say they can foresee techniques to maintain Moore’s Law for at least another ten years – potentially longer. In assessing the wider implications of these laws, we need to consider three questions:

  1. How much technical runway is left in these laws?
  2. Can the benefits of these laws in principle be applied to transform other industries?
  3. Will wider system effects – as discussed earlier in this chapter – frustrate overall progress in these industries (despite the technical possibilities), or will they in due course even accelerate the underlying technical progress?

My answers to these questions:

  1. Plenty
  2. Definitely
  3. It depends on whether we can educate, motivate, and organise a sufficient critical mass of concerned citizens. The race is on!

>> Next chapter >>

14 May 2010

Chapter finished: Forces for positive change

Filed under: H+ Agenda — David Wood @ 1:26 am

You have to give up to go up

These words from John C Maxwell often come to my mind.  Recently, I’ve been trying to reduce various activities, so that I can dedicate more time to an activity I truly want to progress:

  • I’m trying to spend less time reading (books, for example)
  • I’m also trying to write fewer blogposts here
  • Instead, I’m prioritising writing material for my book “The Humanity+ Agenda”.

So, I have to apologise for lack of “normal service” in this blog.

However, I’ve now managed to complete a draft of chapter two of my book.  That’s good news.  You’ll find a snapshot of the current contents below.  At the same time, I’ve taken the decision that I ought to add one more chapter into the contents (it will be chapter 3, “My personal journey”).  So in a way I’m in the same situation as before: I still need to write 8 chapters.  (But I can now count the draft as 2/10 finished, whereas before I was only 1/9 finished.)

The chapter I’ve just finished drafting, “Forces for positive change”, is meant to be self-explanatory, so I won’t say anything more about it here now.

I’ve also been making some changes to the first chapter (based, in part, on suggestions I’ve received from reviewers).  As I mentioned before, I’m keeping the latest drafts of all the chapters in the “Pages” section of this blog – accessible from the box on the right hand side.

I’ll be grateful for feedback.  I may not act on that feedback immediately, but I’ll get round to it in due course!

========

2. Forces for positive change

<Snapshot of material whose master copy is kept here>

<< Previous chapter <<

How do people respond to mentions of possible global crises?

In my experience, people often find that kind of discussion awkward and embarrassing.  They make a joke, or cough nervously, and try to change to a different topic.

One rationale for avoiding talking about an issue is that there’s nothing that can be done about it.  After all, what’s the point of discussing a problem if you can’t change the outcome?  There’s no merit in becoming unnecessarily agitated.  Better to focus on matters where you can change the outcome.  It’s as stated in the “acceptance clause” of Reinhold Niebuhr’s “serenity prayer”:

God grant me
The serenity to accept the things I cannot change…

That doesn’t apply in the case of the crises listed in the previous chapter.  For example, there are plenty of steps that people can take to reduce the risk of environmental catastrophe.  But these steps seem hard.  And this triggers a second, more complex, rationalisation:

  • If the only responses to a perceived crisis are hard, it’s preferable to hope that the crisis isn’t real, or will go away of its own accord.
  • Alternatively, if the perceived crisis turns out to be real after all, it’s preferable to hope that it won’t have any impact in the foreseeable future.
  • In any case, it’s preferable to leave it to other people to worry about the crisis.

Here’s the sense in which this rationalisation is “preferable”: it’s psychologically easier.  It allows people to go on living their lives as normal, concentrating on matters of work and play, family and friends, sport and culture.  They deny that there’s anything significant they personally should be doing about the looming crisis.  Therefore, they’re able to focus without distraction on other matters that are important to them.

This “denialist” approach can gather rational-sounding arguments in its defence.  Part of the psychological comfort blanket is the observation that “we’ve been getting along fine, without things going badly wrong in the past, thank you – despite the warnings of previous doom-mongers”.  One antidote to this is to highlight past occasions when things did go badly wrong, despite protestations of optimism from people who had become overly accustomed to a lengthy period of apparent calm and progress.  The outbreak of World War One is a stark example.  As noted by journalist Hamish McRae:

The 19th century globalisation ended with the catastrophe of the First World War. It is really scary to realise how unaware people were of the fragility of those times. In 1910, the British journalist Norman Angell published The Great Illusion, in which he argued that war between the great powers had become an economic impossibility because of “the delicate interdependence of international finance”.

In spring 1914 an international commission reported on the Balkan Wars of 1912-13. The British member of the commission, Henry Noel Brailsford, wrote: “In Europe the epoch of conquest is over and save in the Balkans perhaps on the fringes of the Austrian and Russian empires, it is as certain as anything in politics that the frontiers of our national states are finally drawn. My own belief is that there will be no more war among the six powers.”

A different kind of response to a potential pending crisis is for people to latch on, firmly, to one apparent solution to that crisis.  You know the kind of thing:

  • To prevent the risk of runaway global warming, we must, above all, reduce carbon emissions.
  • To prevent the risk of economic destabilization, we must, above all, constrain the greediness of bankers.
  • To prevent the risk of terrorists detonating weapons of mass destruction, we must, above all, increase surveillance of potential trouble-makers.

This is in line with the “action clause” of Niebuhr’s saying:

God grant me
The serenity to accept the things I cannot change
The courage to change the things that I can

Yes, if a diagnosis is correct, it can be a good thing to focus single-mindedly on what needs to be done to fix matters.  But if the diagnosis is incomplete, or flawed in other ways, this kind of single-track solution-thinking can obstruct a fuller discussion and even make matters worse.  The passion of misguided courage or premature activism can pose just as many problems as does a denialist desire to damp down the conversation altogether.  I’m a big fan of passion, but passion without wisdom can often be part of the problem, rather than part of the solution.  That’s why the final clause of Niebuhr’s saying is the most important one:

God grant me
The serenity to accept the things I cannot change
The courage to change the things that I can
And the wisdom to know the difference.

In a later chapter, I’ll review what I see as “Six dangerous temptations” which are, each in their own way, mistaken single-track approaches to the impending crises of the 2010s.  But before that, it’s time for me to describe the approach that I view as much more promising.

In some ways, the approach I’ll outline is single-minded too.  But please bear with me while I spell out the whole story.

2.1 Above all, technology

The single biggest force for positive change is technology.

Some examples:

  • Mechanisation frees labourers from the drudgery of tedious physical exertion.
  • New forms of transport allow people to travel and explore, further and faster.
  • Modern buildings provide unparalleled safety, shelter and amenities.
  • Medicine intervenes to cure miserable diseases and prevent early death.
  • Improved agriculture brings forth food to eliminate famine and nurture health.
  • Information technology keeps people informed of key developments.

In prehistory, it was first fire, then agriculture, then the wheel, that set humankind on the road to civilisation.  In more recent times, the printing press enabled both the Reformation and the Enlightenment.  Reliable clocks allowed accurate tracking of the positions of ships at sea, diminishing the risk of shipwrecks.  The industrial revolution triggered a series of changes that enabled unprecedented degrees and variety of leisure time.  The Internet, coupled with the spread of both computing and mobile communications, transforms virtually every area of life in almost every corner of the globe, regardless of the type of government.

How could technology address the crises listed in the previous chapter?  In principle, the answers are as follows.

  1. There’s more than enough energy reaching the earth from the sun to answer every human energy need, allowing the discontinuation of CO2-emitting fossil fuels.  All that’s needed are improvements in technology to harvest that solar energy, store it, and transport it to where it’s needed.  Alongside solar energy, alternative energy technologies could play a role too – such as next generation nuclear fission reactors (to name just one of many possibilities).  Moreover, improved technology can be used to generate and transport ample fresh water from sea water, and to cleanly manufacture quantities of whatever resources are needed.
  2. Improved software systems could monitor and regulate the flow of financial resources around the world, to prevent economic destabilization or financial collapse.
  3. Rapidly improving living standards, enabled by smart use of technology, visibly accessible to everyone on a fair basis, will take away much of the incentive people feel towards revolution or terrorism.  Improved detection systems – akin to improved anti-virus systems in the software world, that limit the potential of damage from virus writers – will also play a part.
  4. The same sorts of improvement will allow more and more people to experience higher peaks of human fulfillment.

But these answers are unpopular.  Here are some key objections to what I’ve just proposed:

  • Technological progress is often slow and uncertain.  For example, progress with solar energy or safe nuclear fission has long been predicted, but often has been delayed.  We can’t afford the risk of waiting for technological improvements; we need to adopt other kinds of solution instead.
  • In any case, technology doesn’t address the underlying causes of human problems, such as faulty human nature, or dysfunctional social structures.
  • Worse, technology brings problems as well as solutions.  (Refer back to section 1.5, “The existential crisis of accelerating change and deepening complexity”.)

In turn, here are my responses to these objections:

  • We need to invest substantially more in technological development – but do it cleverly.  Happily, the rate of technological progress has itself been accelerating.
  • I disagree that technology only addresses “external” aspects of human life.  Technology, wisely applied, can play a big role in improving both human nature and human social structure.
  • To ensure that we obtain positive results from technological progress, rather than negative ones, we need improved monitoring and management of the development and deployment of technology.

2.2 Above all, education

My deeper answer is that the set of solutions I’m advocating is pro-human even more than it is pro-technology.  Technology provides the means, but it’s not an end in itself.

I’m not interested in technology for the “better gadgets” it can provide us.  I’m interested in technology for the “better humans” it can help us to become.

That brings me to the topic of education: life-long learning, in which we continually improve all aspects of our knowledge and intelligence – including our social intelligence and emotional intelligence as well as our analytic intelligence.

There’s a critically important two-way relationship between technology and education.

In one direction: clearer, smarter thinking, freed by good education from prejudice and misinformation, allows us to make better decisions about improving technology.  It gives us the “wisdom to know the difference”.

In the other direction: improved technology provides vital tools to assist our thinking and allow us to learn more quickly.  These tools include:

  • Widely accessible, easily searchable electronic libraries of the best thinking of the entire planet
  • Calculation engines that can swiftly compute the range of possible results of complex interactions
  • Healthy food, dietary supplements, medication, and stimulants, that allow people to concentrate more fully, while acquiring or using knowledge
  • Targeted “electronic learning” devices, increasingly hosted within rich virtual emulation environments, that allow individuals to quickly and enjoyably acquire specific knowledge and skills
  • Smart “personal digital assistants” that help debug our thinking and guide us through complex tasks
  • An online “cloud” of software services, containing both human and artificial intelligence, that can augment our own mental processes.

The “Internet of Computers” is in the process of transformation to an “Internet of Things” which connects literally trillions of data sensors around the planet.  The result, in principle, is up-to-the-second information about every parameter of possible interest: crops, soil, water purity, atmospheric composition, temperature, underground vibrations…

The output of education is an improvement in individual human minds and improvement in the global human understanding.  Yet education faces some steep challenges in the 2010s:

  • Merely the ability to think faster does not mean that we think better.
  • If we are victim to fundamental biases, we’ll use our greater intelligence to find clever justifications for continuing our biases, rather than to see more clearly.
  • If a society is victim to outmoded but persistent misunderstanding, the same “bias preservation” dynamics operate at a larger level.
  • Various vested interests are better organised than ever before – and have a battery of mind-turning tools at their disposal.

Psychologists have identified and catalogued large numbers of widespread but flawed modes of thinking – with names such as “confirmation bias”, “sunk cost fallacy”, “illusory correlation”, and “conjunction fallacy”.  Alas, merely knowing about these flaws does not mean we are personally immune to them.

However, a credible positive message about the future can give people the courage to combat the cognitive shackles of intellectually repressive worldviews.  Educating and exciting people about the radical transformative capabilities of emerging new technologies – including clean energy, robotics, nanotechnology, synthetic biology, artificial general intelligence, and human rejuvenation engineering – can disrupt powerful entrenched cultural biases, such as those derived from fundamentalist religions or ideologies.  Humanity+, as I see it, isn’t just an appeal for better technology; it’s an appeal for a better way of thinking about technology.  This new thinking highlights the potential for improved technology, wisely deployed, to provide the kinds of solution that were at one time the sole province of religion:

  • Mental tranquility and exhilaration
  • Harmony between peoples and between nations
  • An abundance of resources, without scarcity
  • Lifespans that can in principle be extended indefinitely, containing ever greater variety and fulfillment.

To be clear, this education must clarify the perils as well as the promise of new technology, to allow us to choose wisely the courses of action that will fulfil the promise and avoid the perils.  That choice will be far from easy.  We’ll need all the help we can get.  We’ll need great strength, as well as great wisdom. Some of that strength will come from improved social structures.  And some will come from healthier, more vibrant individuals.

2.3 Above all, health

Education and health are like two sides of the same coin:

  • Education improves the mind, making us wiser (in all the many dimensions of the word “wise”)
  • Health improves the body, making us stronger (in all the many dimensions of the word “strong”).

Just as there’s a two-way relationship between technology and education, there’s a two-way relationship between technology and health.

In one direction: individuals who are healthier are able to work harder, are less prone to mistakes from tiredness or other psychological vulnerabilities, and contribute more to the growth of high quality technology.

In the other direction: improved technology provides numerous tools to improve our health:

  • To repair broken limbs or joints
  • To improve our immune systems
  • To address all kinds of diseases
  • To keep us fitter and more resilient, longer into our lifespans
  • To screen, proactively, for early warning signs of impending bodily failures.

Miniaturisation of medical tools has profound impacts on the effectiveness of many surgical processes.  The evolution from “smart phones” to “smart things” is increasingly extending to “smart cells”.  Techniques from industrial manufacturing and software programming are being re-applied at the level of items that can easily pass through our bloodstreams and other internal fluid systems.  Rather than “programming silicon” we can look forward to increasing applications of “programming carbon”.

Technology not only has the capability of restoring us to health.  It can take us beyond normal levels of health, to a state where we are “better than well”.  It can do this, first, by making changes outside our bodies – providing us with machinery to magnify our strength, telescopes and microscopes to magnify our vision, loudspeakers to magnify our voice, and so on.  But it can do this, second, by making changes inside our bodies.  These changes are more fundamental and, therefore, tend to raise more apprehension.  Many people feel these changes are “unnatural” and, therefore, should be opposed.  But arguments about things being unnatural hold little weight for me.

There’s a saying attributed to the father of Orville and Wilbur Wright:

If God wanted man to fly, he would have made us with wings.

But who among us has refused to enter an airplane on the grounds that it would be “unnatural”?  Indeed, who among us has refused to augment the natural protective and decorative aspects of our skin by covering much of that skin with clothing (something else that is “unnatural”)?  Injecting young babies with vaccines is, again, far from natural.  And what about cosmetic surgery?  Not so many years ago, we might have said “yuk” at the prospect of someone having cosmetic surgery.  Nowadays, it’s no big deal.

Imagine life in 20 or 30 years’ time, when people might routinely undergo hi-tech medical treatment that repairs, not just aspects of their skin and external appearance, but also lots of internal bodily damage – damage that you and I would describe as “aging”.  Imagine that, as a result of such treatment, someone’s life expectancy is increased by 10 years, and imagine that the treatment can be repeated on a regular basis.  Imagine that this opens the possibility of people routinely living far beyond the current maximum lifespan of around 120.  Should we object to such treatment on the grounds that it is “unnatural”?  My own expectation is that we will, very quickly, become accustomed to such treatment, and we’ll no longer bat an eyelid at it.

It will be like test tube babies.  The birth of the very first test tube baby, in 1978, was accompanied by a fervour of hand-wringing and amazement.  Nowadays, most people take the whole process for granted (whilst recognising that it’s made possible by very clever technology).  Likewise, as treatments become available that make our bodies stronger and fitter, and which repair the cellular and inter-cellular damage known as aging, there will – to start with – be a huge press hullabaloo.  But the hullabaloo will subside.

There are important questions over the desirability of people being “better than well” and having indefinitely long lifespans, and I’ll return to these in later chapters.  However, as you can see, I have little respect for attempts to reject these treatments just because they are somehow “unnatural”.

As more people come to understand the potential for greatly enhanced health within the lifetimes of many people living today, there will be a profound shift in attitude.  The shift is well captured in the amusing but profound fable written by Nick Bostrom, “The Fable of the Dragon-Tyrant”.  I expect that, in the next ten years, larger numbers of people will switch from what might be called a “pro-death” accommodationist viewpoint, which tolerates the fact of human aging and human death (albeit half-heartedly complaining about it), to an aggressive “pro-life” stance that urgently seeks to accelerate research into the technology and treatment that can slow, then reverse the effects of human aging.  This pro-life stance will vigorously campaign for a much larger portion of national budgets to be allocated to research and development of pro-life technologies.  Instead of spending lots of time learning about, for example, sports team statistics, or the habits of the latest pop stars and movie actors, people everywhere will be exchanging information and ideas about pro-life technologies and treatments.

Is there a risk that a rush of interest in pro-life technologies will be at the cost of interest in other important technologies, such as those for clean energy?  Perhaps.  But probably not:

  • People who think they’re going to live longer will, other things being equal, become more interested in longer-term planetary well-being, rather than leaving that topic as something for subsequent generations to handle.
  • Technologies are frequently inter-dependent.  There’s been a very useful “convergence” of cutting edge ideas from software, nanotechnology, synthetic biology, and artificial intelligence, among others.  Many of the techniques that accelerate pro-life technologies will, at the same time, accelerate clean energy technologies.
  • Someone who becomes knowledgeable about science and technology due to an interest in life extension will tend also to discover lots of fascinating wider applications of science and technology.

This is where a renewed education programme, as briefly mentioned earlier, should have a big effect.  The result should be a critical mass of informed citizens who have a shared broad understanding of the power and potential of science and technology to provide deep solutions to the major problems facing human society.  Leaders of society will be unable to ignore this critical mass.

2.4 Above all, society

Recall one of the objections I mentioned earlier:

  • Technological progress is often slow and uncertain.  For example, progress with solar energy or safe nuclear fission has long been predicted, but often has been delayed.  We can’t afford the risk of waiting for technological improvements; we need to adopt other kinds of solution instead.

This objection can be re-stated, forcibly, against the specific pro-technology picture I’m painting.  The objection runs as follows:

  • Technological progress is often slow and uncertain.  The idea of technology making people significantly smarter (via improved education) and significantly healthier (via improved medicine) is fair enough in the long term, but don’t expect any big changes in the next 10-20 years.  We should be focusing, instead, on other kinds of solution to the problems of the early 21st century.  For example, we should be seeking changes in politics, and/or in the values that motivate human lifestyles.

As it happens, I agree with much of this objection:

  • I agree that technological progress is often slow and uncertain.  However, we can and should take steps to make it faster, and less uncertain.
  • I agree that, in parallel with a focus on improved technology, we must also address the organisation of society, via changes that only politicians can authorise.
  • Again, I agree that we must also address the question of the values that motivate human lifestyles.

In the section after this one, I’ll revisit the topic of motivational values, under the heading “Above all, humanity”.  For the moment, let’s briefly look at the question of the relationship between technology and society.

By now, you won’t be surprised if I say that there’s a two-way relationship between society and technology.  In one direction, smart changes in legislation and social structure can have a big impact on the speed and effectiveness of research, development, and deployment of new technologies.  And in the other direction, wise use of new technologies – such as the Internet – can boost the effectiveness of social structures.

Technology is developed by people and companies who are driven by various motivations, and constrained by various fears.  The motivations cover both economic desires and non-economic desires – including the “reputation economy” and the “gift economy”.  Changing the incentive structures can have a significant impact on the work performed.  However, the impact is sometimes different from what prevailing wisdom would suggest.  For example, over-emphasis of financial rewards can, in some cases, diminish the potential for people to uncover creative solutions, or to work together collaboratively.

When it comes to governing the effectiveness of technology development, the negative constraints can be even more important than the positive incentives.  If a company fears bad outcomes from some piece of research, it will be less likely to undertake that research.  These bad outcomes can include:

  • Being sued for huge amounts of money for patent infringement by someone who, it seems, has thought of a broadly similar idea;
  • Undermining an existing profitable product line by the same company.

The system of patents grew up for good reasons, but at the present time it operates in a way that frequently hinders innovation and collaboration, rather than rewarding them.  This system is long overdue for significant reform.  Equally pressing is the need for governments to ensure the continuing vitality of their economies, striking a good balance between the requirement for competition and the requirement for collaboration.  There also need to be checks on the ability of sophisticated, well-funded lobbyists to gain undue influence over the decisions of law-makers: democracy is far from healthy when vested interests have so much power.  Finally, societies need to be constantly alert against the twin risks of under-regulation and over-regulation, in numerous areas of potential innovation, including new drugs, new mobile computing devices, new “cloud” services, new financial services, and so on.

None of this “social engineering” is easy, but it all makes a big difference to the likelihood of rapid progress with the development of key technology.  With bad social structures, we get “the madness of the crowd”.  With good social structures, we get “the wisdom of the crowd”.

2.5 Above all, humanity

So far, I’ve described four key themes, as priorities for the coming decade: technology, education, health, and society.  There’s one more to add, which sits at the top of the whole structure: humanity.  The diagram provides a reminder of the numerous two-way interconnections between these five key themes.

The whole point of all the effort on technology, all the long hours spent on education and learning, all the labour to improve our health, and all our social engineering, is to enhance human experience, in ways that are fully sustainable, open-ended, and equitable.

You may ask: what kind of human experience? My answer is: we can presently only begin to glimpse the possibilities.

Some hints are provided by our present-day peak experiences – from music, dance, sport, games, puzzles, theatre, reading, mathematics, discovery, exploration, meditation, friendship, family, community, gardening, pets, safari, food, drink…

You may ask: won’t this become boring? My answer is: why should it? There’s a whole universe in physical space for us to explore.  And there are countless exotic universes in virtual space for us to create and explore.  There will be numerous interesting people to get to know – not to mention huge numbers of fascinating artificial intelligences.

Humanity sits at the top of this structure, not only as the end goal, but as a way to decide, in principle, the value of activity throughout society.  At present, countries tend to measure their worth via their GNP – Gross National Product – or some related economic statistic.  Leaders are happy when their GNP increases, and perturbed when it falls.  But it is widely recognised that GNP is a sorely inadequate measure.  Stirring words from a fine March 1968 speech by US senator Robert Kennedy are worth quoting at some length:

Even if we act to erase material poverty, there is another greater task, it is to confront the poverty of satisfaction – purpose and dignity – that afflicts us all.  Too much and for too long, we seemed to have surrendered personal excellence and community values in the mere accumulation of material things.  Our Gross National Product, now, is over $800 billion dollars a year, but that Gross National Product – if we judge the United States of America by that – that Gross National Product counts air pollution and cigarette advertising, and ambulances to clear our highways of carnage.  It counts special locks for our doors and the jails for the people who break them.  It counts the destruction of the redwood and the loss of our natural wonder in chaotic sprawl.  It counts napalm and counts nuclear warheads and armored cars for the police to fight the riots in our cities.  It counts Whitman’s rifle and Speck’s knife, and the television programs which glorify violence in order to sell toys to our children.  Yet the gross national product does not allow for the health of our children, the quality of their education or the joy of their play.  It does not include the beauty of our poetry or the strength of our marriages, the intelligence of our public debate or the integrity of our public officials.  It measures neither our wit nor our courage, neither our wisdom nor our learning, neither our compassion nor our devotion to our country, it measures everything in short, except that which makes life worthwhile.

However, so long as (to quote a saying attributed to Milton Friedman) “the business of business is business”, it’s hard to expect companies to set aside the quest for economic profits in favour of these broader human values.  Shareholders will act in concert to fire management teams who fail to exploit profit opportunities.  The remedy is as already stated: a critical mass of informed citizens who have a shared broad understanding of the power and potential of science and technology to provide deep solutions to the major problems facing human society.  This mass of informed citizens can ensure that companies act in service of goals other than mere monetary reward.

Alas, everything I’ve spoken about in this chapter appears to come with a large price tag.  Education and health already consume major portions of national budgets.  Science research budgets are under great pressure, too.  In a time of austerity, it’s likely that expenditure on all these areas will fall, rather than rise.  Yet I have been arguing for an increase, in each of these areas.  In later chapters of this book, I’ll expand these five priority areas into 20 specific research projects.  I’ll make the case that these 20 projects should become priorities for all of us – as individuals, organisations, institutions, universities, industry, governments, and media.  They all deserve a larger amount of attention, analysis, resourcing, and funding.  Given our current severe economic constraints, that may seem a fanciful hope.

But there is good reason for this hope.  We have in our favour the fact that improvements in any one of these areas feeds into improvements elsewhere.  As our technology improves, our education improves, and so on.  Rather than simply focusing on spending more money on each of these areas, we can focus on spending money more smartly on each area.  Raising our game – in ways that I’ll explain – means that instead of “Education”, we can talk of “Education+”.  Instead of “Health”, we can speak of “Health+”.  Instead of “Technology”, we can speak of “Technology+”.  Instead of “Society”, we can speak of “Society+”.  And – you guessed it – instead of “Humanity”, we can speak of “Humanity+”.

At this stage, you will probably still have a large question in your mind: can we really expect the kinds of technological progress that I’ve been indicating as possible for the next 10-20 years? I’ll now seek to address that question in two different ways:

  • In the next chapter, I’ll describe my own personal history in the technology industry, highlighting lessons about both rapid and slow rates of progress.
  • After that, I’ll devote a chapter to each of the five key themes, highlighting each time actual progress that’s happening.

>> Next chapter >>

9 May 2010

Chapter completed: Crises and opportunities

Filed under: alienation, change, climate change, Economics, H+ Agenda, recession, risks, terrorism — David Wood @ 12:16 am

I’ve taken the plunge.  I’ve started writing another book, and I’ve finished the first complete draft of the first chapter.

The title I have in mind for the book is:

The Humanity+ Agenda: the vital priorities for the coming decade

The book is an extended version of the 10 minute opening presentation I gave a couple of weeks ago, at the Humanity+ UK 2010 event.  My reasons for writing this book are spelt out here.  The book will re-use and refine a lot of the material I’ve tried out from time to time in earlier posts on this blog, so you may find parts of it familiar.

I’ve had a few false starts, but I’m now happy with both the framework for the book (9 chapters in all) and a planned editing/review process.

Chapter 1 is called “Crises and opportunities”.  There’s a copy of the current draft below.

I’ll keep the latest drafts of all the chapters in the “Pages” section of this blog – accessible from the box on the right hand side.  From time to time – as in this posting – I’ll copy snapshots of the latest material into regular blogposts.

It’s my hope that the book will benefit from feedback and suggestions from readers.  Comments can be made, either to regular blogposts, or to the “pages”.  I’m also open to receiving emailed comments or contributions.  Unless someone tells me otherwise, I’ll assume that anything posted in response is intended as a potential contribution to the book.

(I’ll acknowledge, in the acknowledgements section of the book, all contributions that I use.)

========

1. Crises and opportunities

<Snapshot of material whose master copy is kept here>

The decade 2010-2019 will be a decade of crises for humanity:

  • As hundreds of millions of people worldwide significantly change their lifestyles, consuming ever more energy and generating ever more waste, the planet Earth faces increasingly great strains. “More of the same” is not an acceptable response.
  • Alongside the risk of environmental disaster, another risk looms: that of economic meltdown. The massive shocks to the global finance system at the end of the previous decade bear witness to powerful underlying tensions and problems with the operation of market economies.
  • The rapid rate of change causes widespread personal frustration and societal angst, driving a significant minority of people into the arms of beguiling ideologies such as fundamentalist Islam and the militant pursuit of terrorism. Relatively easy access to potential weapons of mass destruction – whether nuclear, biological, or chemical – transforms the threat of terrorism from an issue of national security into an issue of global survival.

In aggregation, these threats are truly fearsome.

To improve humanity’s chances of surviving, in good shape, to 2020 and beyond, we need new solutions.

I believe that these new solutions are emerging in part from improved technology, and in part from an important change in attitude towards technology. This book explains the basis for these beliefs.  This chapter summarises the crises, and the remaining chapters summarise the proposed solutions.

In the phrase “Humanity+”, the plus sign after the word “Humanity” emphasises that solutions to our present situation cannot be achieved by people continuing to do the same as before. Instead, a credible vision of wise application of new technologies can bring humans – both individually and collectively – to operate in dramatically enhanced ways:

  • Humans will be able, in stages, to break further free from the crippling constraints and debilitations of our evolutionary background and our historical experiences;
  • We will, individually and collectively, become smarter, wiser, stronger, kinder, healthier, calmer, brighter, more peaceful, and more fulfilled;
  • Instead of fruitless divisions and conflicts, we’ll find much better ways to cooperate, and build social systems for mutual benefit.

This is the vision of humanity fulfilling its true potential.

But there are many obstacles on the path to this fulfilment.  These obstacles could easily drive Humanity to “Humanity-” (humanity minus), or even worse (human annihilation), rather than Humanity+.  There’s nothing inevitable about the outcome.  As a reminder of the scale of the obstacles, let’s briefly review five interrelated pending crises.

1.1 The environmental crisis

Potential shortages of clean drinking water.  Rapid reductions in the available stocks of buried energy sources, such as coal, gas, and oil.  Crippling impacts on our environment from the waste products of our lifestyles.  These – and more – represent the oncoming environmental crisis.

With good reason, the aspect of the environmental crisis that is most widely discussed is the potential threat of runaway climate change.  Our accelerating usage of fossil fuels means that carbon dioxide (CO2) in the atmosphere has reached levels unprecedented in human history.  This magnifies the greenhouse effect of the atmosphere, tending to push the average global temperature higher.  This relationship is complex.  Forget simple ideas about increases in factor A invariably being the cause of increases in factor B.  Think instead about a dance of different factors that each influence the other, in different ways at different times.  (That’s a theme that you’ll notice throughout this book.)

In the case of climate change, the players in the dance include:

  • Variation in the amount of sunlight striking earth landmasses, due to changes over geological timescales in the axis of the earth, the eccentricity of the earth’s orbit, and the distribution of landmass over different latitudes;
  • Variation in the slow-paced transfer of heat between different parts of the ocean;
  • Variation in the speed of build-up or collapse of huge polar ice sheets;
  • Variation in numerous items in the atmosphere, including aerosols (which tend to lower average temperature) and greenhouse gases (which tend to raise it again);
  • Variation in the amounts of greenhouse gases, such as methane, being suddenly released into the atmosphere from buried frozen stores (for example, from tundra);
  • Variation in the sensitivity of the planet to the various “climate forcing agents” – sometimes a small change in one will lead to just small changes in the climate, but at other times the consequences are more severe.

What makes this dance potentially deadly is the twin risk of latent momentum and strong positive feedback:

  • More CO2 in the atmosphere raises the average temperature, which means there’s more H2O (water vapour) in the atmosphere too, raising the average temperature yet further;
  • Ice sheets over Antarctica and Greenland take a long time to start to disintegrate, but once the process gets under way, it can become essentially irreversible;
  • Less ice on the planet means less incoming sunlight is reflected to space; instead, larger areas of water absorb more of the sunlight, increasing ocean temperature further;
  • Rises in sea temperatures can trigger the sudden release of huge amounts of greenhouse gases from methane clathrate compounds buried in seabeds and permafrost – another example of rapid positive feedback.
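
The amplifying effect of these positive feedbacks can be illustrated with a toy linear-feedback model, in which a warming caused by forcing alone is amplified by a factor of 1/(1−f), where f is the combined feedback gain.  This is only a sketch: the gain values below are illustrative, not measured climate parameters.

```python
# Toy linear-feedback model of warming amplification.
# total_warming = forcing_only / (1 - gain), for gain in [0, 1).
# As gain approaches 1, the amplification diverges - "runaway" warming.
# The gain values used here are illustrative assumptions, not climate data.

def amplified_warming(forcing_only, gain):
    """Total warming after feedbacks amplify an initial forcing-only warming."""
    if gain >= 1.0:
        return float("inf")  # runaway: feedbacks outpace the original forcing
    return forcing_only / (1.0 - gain)

# A 1 degree forcing-only warming, under increasingly strong feedback:
for f in (0.0, 0.5, 0.8, 0.95):
    print(f, amplified_warming(1.0, f))
```

The point of the sketch is qualitative: modest increases in feedback gain produce disproportionately large increases in total warming, which is why the dance of feedbacks described above is potentially deadly.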

Indeed, there is significant evidence that runaway methane clathrate breakdown may have caused drastic alteration of the ocean environment and the atmosphere of earth a number of times in the past, most notably in connection with the Permian extinction event, when 96% of all marine species became extinct about 250 million years ago.

Of course, predicting the future of the environment is hard.  There are three sorts of fog surrounding climate change uncertainty:

  1. Many of the technical interactions are still unknown, or are far from being fully understood.  We are continuing to learn more;
  2. Even where we believe we do understand the technical interactions, many of the detailed interactions are unpredictable.  Just as it’s hard to predict the weather itself, one month (say) into the future, it’s hard to predict the exact effect of ongoing climate forcing agents.  The effect that “a butterfly flapping its wings unpredictably causes a hurricane on the other side of the planet” applies for the chaos of climate as much as for the chaos of weather;
  3. There are huge numbers of vested interests, who (consciously or sub-consciously) twist and distort aspects of the argument over climate change.

The vested interests include:

  • Both anti-nuclear and pro-nuclear campaigners;
  • Both anti-oil and pro-oil campaigners, and anti-coal and pro-coal campaigners;
  • Both “small is beautiful” and “big is beautiful” campaigners;
  • Both “back to nature” and “pro-technology” campaigners;
  • Scientists and authors who have long supported particular theories, and who are loath to change their viewpoints;
  • Hardened political campaigners who look to extract maximum concessions, for the region or country they represent, before agreeing a point of negotiation.

It is not only psychologically hard for individuals to objectively review data or theories that conflict with their favoured opinions.  It is also economically hard for companies (such as energy companies) to accept viewpoints that, if true, would cause major hurdles for their current lines of business, and significant loss of jobs.  On the other hand, just because researcher R has strong psychological reason P and/or strong economic incentive E in favour of advocating viewpoint V, it does not mean that viewpoint V is wrong.  The viewpoint could be correct, even though some of the support advanced in its favour is non-rational.  As I said, there’s lots of fog to navigate!

Despite all this uncertainty, I offer the following conclusions:

  • There is a wide range of possible outcomes, for the climate in the next few decades;
  • The probability of runaway global warming – with disastrous effects on sea levels, drought, agriculture, storms, species and ecosystem displacement, travel, business, and so on – is at least 20%, and likely higher;
  • Global warming won’t just make the temperature higher; it will make the weather more extreme – due to increased global temperature gradients, increased atmospheric water vapour, and higher sea temperatures that stir up more vicious storms.

A risk of at least 20% of a global environmental disaster deserves urgent attention and further analysis.  Who among us would board an airplane with family and friends, if we believed there was a 20% probability of it plummeting out of the sky?

1.2 The economic crisis

The controversies and uncertainties over the potential threat of runaway climate change find parallels in discussions over a possible catastrophic implosion of the world economic system.  These discussions likewise abound with technical disagreements and vested interests.

Are governments, legislators, banks, and markets generally wise and capable enough to oversee the pressures of financial trading, and to keep the system afloat?  Was the recent series of domino-like collapses of famous banks around the world a “once in a lifetime” abnormality, most unlikely to repeat?  Or should we expect a recurrence of fundamental financial instability?  What is the risk of a larger financial crisis striking?  Indeed, what is the risk of adverse follow-on effects from the “tail end” of the 2008-2009 crisis, generating a so-called “double dip” in which the second dip is more drastic than the first?  On all these questions, opinions vary widely.

Despite the wide variation in opinions, some elements seem common.  All commentators are fearful of some potential causes of major disruption to global economics.  Depending on the commentator, these perceived potential causes include:

  • Clumsy regulation of financial markets;
  • Bankers who are able to take catastrophic risks in the pursuit of ever greater financial rewards;
  • The emergence of enormous monopoly powers that eliminate the benefits of marketplace competition;
  • Institutions that become “too big to fail” and therefore derail the appropriate workings of the market system;
  • Sky-high accumulation of debts, with individuals and countries living far beyond their means, for too long;
  • Austerity programmes that attempt to reduce debts quickly, but which could provoke spiralling industrial disputes and crippling strikes;
  • Bubbles that grow because “it’s temporarily rational for everyone to be irrational in their expectations” and then burst with tremendous damage.

We must not let the fact that previous financial crises were eventually survived, without the world of banking coming to an end, lull us into overconfidence.  First, these previous crises caused numerous local calamities – and the causes of major wars can be traced (in part) to these crises.  Second, there are reasons why future financial problems could have more drastic effects than previous ones:

  • There are numerous hidden interconnections between different parts of the global economy, which accelerate the spread of failures from one part to another;
  • The complexity of new financial products far outstrips the ability of senior managers and regulators to understand and appreciate the risks involved;
  • In an age of instant electronic connections, the speed of cascading events can catch us all flat-footed.

For these reasons, I tentatively suggest we assign a ballpark risk factor of about 20% to the probability of a major global financial meltdown during the 2010s.  (Yes, this is the same numeric figure as I picked for the environmental crisis too.)

Note some parallels between the two crises I’ve already discussed:

  • In each case, the devil is in the mix of weakly-understood powerful feedback systems;
  • Again in each case, our ability to discern what’s really happening is clouded by powerful non-rational factors and vested interests;
  • Again in each case, the probabilities of major disaster cannot be calculated in any precise way, but the risk appears large enough to warrant very serious investigation of solutions;
  • Again in each case, there is deep disagreement about the best solutions to deploy.

Worse, these two looming crises are themselves interconnected.  Shortage of resources such as clean energy could trigger large price hikes which throw national economies into tailspins.  Countries or regions which formerly cooperated could end up at devastating loggerheads, if an “abundance spirit” is replaced by a “scarcity spirit”.

1.3 The extreme terrorist crisis

What drives people to use bombs to inflict serious damage?  Depending on the circumstances, it’s a combination of:

  • Positive belief, in support of some country, region, ideology, or religion;
  • Negative belief, in which a group of people (“the enemy”) are seen as despicable, inferior, or somehow deserving of destruction or punishment;
  • Peer pressure, where people feel constrained by those around them to follow through on a commitment (to become, for example, a suicide bomber);
  • Personal rage, such as a desire for revenge and humiliation;
  • Aspiration for personal glory and reward, in either the present life, or a presumed afterlife;
  • Failure of countervailing “pro-cooperation” and “pro-peace” instincts or systems.

Nothing here is new for the 2010s.  What is new is the increased ease of access, by would-be inflictors of damage, to so-called weapons of mass destruction.  There is a fair probability that the terrorists who piloted passenger jet airliners into the Twin Towers and the Pentagon would have willingly caused even larger amounts of turmoil and damage, if they could have put their hands on suitable weapons.

Technology itself is neutral.  A hammer which can be used to drive a nail into a piece of wood can equally be used to knock a fellow human unconscious.  Electricity can light up houses or fry someone in an electric chair.  Explosives can clear obstacles during construction projects or can obliterate critical infrastructure assets of so-called enemies.  Biochemical manipulation can yield wonderfully nutritious new food compounds or deadly new diseases.  Nuclear engineering can provide sufficient energy to free humanity from dependency on carbon-laden fossil fuels, or suitcase-sized portable weapons capable of tearing the heart out of major cities.

As technology becomes more widely accessible – via improved education worldwide, via cheaper raw materials, and via easy access to online information – the potential grows, both for good uses and for bad uses.  A saying attributed to Eliezer Yudkowsky gives us pause for thought:

The minimum IQ required to destroy the world drops by one point every 18 months.

(This saying is sometimes called “Moore’s Law of mad scientists”.)  The statement was probably not intended to be interpreted with mathematical exactness, but we can agree that, over the course of a decade, the number of people capable of putting together a dreadful weapon of mass destruction will grow significantly.  The required brainpower will move from the rarefied tails of the bell curve of intelligence distribution towards the more fully populated central region.
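
The bell-curve argument can be made concrete with a toy calculation.  Assuming IQ is normally distributed with mean 100 and standard deviation 15, and picking an arbitrary starting threshold of 160 (both the threshold and the linear drop are illustrative assumptions, not claims from the saying itself):

```python
import math

def fraction_above(iq, mean=100.0, sd=15.0):
    """Fraction of a normal IQ distribution at or above a given threshold."""
    return 0.5 * math.erfc((iq - mean) / (sd * math.sqrt(2)))

# Hypothetical scenario: suppose an IQ of 160 is needed today, and the
# threshold drops by 1 point every 18 months, per the saying quoted above.
for years in (0, 9, 18):
    threshold = 160 - years / 1.5
    print(years, threshold, f"{fraction_above(threshold):.2e}")
```

Even on this crude model, a drop of a few points in the threshold enlarges the eligible fraction of the population several-fold, because the normal distribution thickens rapidly as one moves in from the tail.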

We can imagine similar “laws” of increasing likelihood of destructive capability:

The minimum IQ required to devise and deploy a weapon that wipes out the heart of a major city drops by one point every 18 months;

The minimum IQ required to poison the water table for a region drops by one point every 18 months;

The minimum IQ required to unleash a devastating plague drops by one point every 18 months…

Of course, the threat of nuclear annihilation has been with the world for half a century.  During my student days at Cambridge University, I participated in countless discussions about how best to avoid the risk of unintentional nuclear war.  Despite the forebodings of some of my contemporaries at the time, we reached the end of the 20th century unscathed.  Governments of nuclear-capable countries, regardless of their political hues and ideological positions, found good reason to avoid steps that could trigger any nuclear escalation.  What’s different with at least some fundamentalist terrorists is that they operate in a mental universe that is considerably more extreme:

  • They live for a life beyond the grave, rather than before it;
  • They believe that divine providence will take care of the outcome – any “innocents” caught up in the destruction will receive their own rewards in the afterlife, courtesy of an all-seeing, all-knowing deity;
  • They are nourished and inspired by apocalyptic writing that glorifies a vision of almighty destruction;
  • They operate with moral certainty: they seem to harbour no doubts or questions about the rightness of their course of action.

Mix this extreme mindset with sufficient raw brainpower and with weapons-grade materials that can be begged, bought, or stolen, and the stage is set for a terrorist outrage that will put 9/11 far into the shade.  In turn, the world’s reaction to such an outrage is likely to put the reaction to 9/11 far into its own shade.

It’s true that would-be terrorists are often incompetent.  Their explosives sometimes fail to detonate.  But that must give us no grounds for complacency.  The same “incompetence” can sometimes result in unforeseen consequences that are even more destructive than those intended.

1.4 The sense of profound personal alienation

Environmental crisis.  Economic crisis.  Extreme terrorist crisis.  Added together, we might be facing a risk of around 50% that, sometime during the 2010s, we’ll collectively look back with enormous regret and say to ourselves:

That’s the worst thing that’s happened in our lifetime.  Why oh why didn’t we act to stop it happening?  But it’s too late to make amends now.  If only we could re-run history, and take wiser choices…
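
The “around 50%” figure follows from elementary probability, on the simplifying assumption that the three crises are independent events, each with the roughly 20% probability estimated in the earlier sections.  A minimal sketch:

```python
# Combined risk that at least one of several crises occurs, assuming
# the events are independent.  The individual 20% figures are the rough
# estimates from earlier sections; independence is a simplifying
# assumption for illustration, not a claim about the real world.

def combined_risk(probabilities):
    """Probability that at least one event occurs, assuming independence."""
    p_none = 1.0
    for p in probabilities:
        p_none *= (1.0 - p)
    return 1.0 - p_none

crises = [0.20, 0.20, 0.20]  # environmental, economic, extreme terrorism
print(round(combined_risk(crises), 3))  # → 0.488, i.e. roughly 50%
```

In reality the crises are interconnected, as noted earlier, which would push the combined figure around somewhat; the calculation simply shows that three modest individual risks aggregate into a large overall one.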

But there’s more.  Here’s a probability that I’ll estimate at 100%, rather than 50%.  It’s the probability that huge numbers of individuals will look at their lives with bitter regret, and say to themselves:

This outcome was very far from the best it could have been.  This human life has missed, by miles, the richness and quality of experience that was potentially available.  Why oh why did it turn out like this?  If only I could re-run my life, and take wiser choices, or benefit from improved circumstances…

The first three crises are global crises.  This fourth one is a personal crisis.  The first three are highly visible.  The fourth might just be an internal heartache.  It’s the realisation that:

  • Life provides, at least for some people, on at least some occasions, intense feelings of vitality, creativity, flow, rapport, ecstasy, and accomplishment;
  • These “peak experiences” are generally rare, or just glimpsed;
  • The majority of human experience is at a much lower level of quality than is conceivable.

The pervasive video broadcast communications of the modern age make it all the more obvious, to increasing numbers of people, that the quality of their lives falls short of what could be imagined and desired.  These same communications also strongly hint that technology is advancing to the point where it could soon free people from the limitations of their current existence, and enable levels of experience previously only imagined for deities.  Just around the corner lies the potential of lives that are much extended, expanded, and enhanced.  How frustrating to miss out on this potential!  It brings to mind the lamentations of a venerable French noblewoman from 1783, as noted in Lewis Lapham’s 2003 Commencement speech at St. John’s College Annapolis:

[A] French noblewoman, a duchess in her eighties, …, on seeing the first ascent of Montgolfier’s balloon from the palace of the Tuileries in 1783, fell back upon the cushions of her carriage and wept. “Oh yes,” she said, “Now it’s certain. One day they’ll learn how to keep people alive forever, but I shall already be dead.”

Acts of gross destruction are often motivated by deep feelings of dissatisfaction or frustration: the world is perceived as containing significant wrongs that need righting.  So there’s a connection between the crisis of profound personal alienation and the crisis of extreme terrorism.  Thankfully, people who experience dissatisfaction or frustration don’t all react in the same way.  But even if the reaction is only (as I suggested earlier) an internal heartache, the shortfall between potential and reality is nonetheless profound.  Life could, and should, be so much better.

We can re-state the four crises as four huge opportunities:

  1. The opportunity to nurture an amazingly pleasant, refreshing, and intriguing environment;
  2. The opportunity to guide global economic development to sustainably create sufficient resources for everyone’s needs;
  3. The opportunity to utilise personal passions for constructive projects;
  4. The opportunity to enable individuals to persistently experience qualities of human life far, far higher than at present.

I see Humanity+ as addressing all four of these opportunities.  And it does so with an eye on one more crisis, which is the most uncertain one of the lot.

1.5 The existential crisis of accelerating change and deepening complexity

Time and again, changes have consequences that are unforeseen and unintended.  The more complex the system, the greater the likelihood of changes leading to unintended consequences.

However, human society is becoming more complex all the time:

  • Multiple different cultures and sub-cultures overlap, co-exist, and influence each other;
  • Worldwide travel is nowadays commonplace;
  • Increasing numbers of channels exist for communication and influence;
  • Society is underpinned by a rich infrastructure of multi-layered technology.

Moreover, the rate of change is increasing:

  • New products sweep around the world in ever shorter amounts of time;
  • Larger numbers of people are being educated to levels never seen before, and are entering the worlds of research, development, manufacturing, and business;
  • Online collaboration mechanisms, including social networks, wikis, and open source software, mean it is easier for innovation in one part of the world to quickly influence and benefit subsequent innovation elsewhere;
  • The transformation of more industries from “matter-dominated” to “information-dominated” means that the rapid improvement cycle of semiconductors increasingly sets the pace of progress.

These changes bring many benefits.  They also bring drawbacks, and – due to the law of unintended consequences – they bring lots of unknowns and surprises.  The risk is that we’ll wake up one morning and realise that we deeply regret one of the unforeseen side-effects.  For example, there are risks:

  • That some newly created microscopic-scale material will turn out to have deleterious effects on human life, akin to the problems arising from exposure to asbestos (but faster acting);
  • That some newly engineered biochemical organism will escape into the wild and turn out to have an effect like that of a plague;
  • That well-intentioned attempts at climate “geo-engineering”, to counter the risk of global warming, will trigger unexpected fast-moving geological phenomena;
  • That state-of-the-art high-energy physics experiments will somehow create unanticipated exotic new particles that destroy all nearby space and time;
  • That software defects will spread throughout part of the computing infrastructure of modern life, rendering it useless.

Here’s another example, from history.  On 1st March 1954, the US military performed its first test of a dry fuel hydrogen bomb – codenamed “Castle Bravo” – at the Bikini Atoll in the Marshall Islands.  The explosive yield was expected to be from 4 to 6 Megatons.  But when the device was exploded, the yield was 15 Megatons, two and a half times the expected maximum.  As the Wikipedia article on this test explosion explains:

The cause of the high yield was a laboratory error made by designers of the device at Los Alamos National Laboratory.  They considered only the lithium-6 isotope in the lithium deuteride secondary to be reactive; the lithium-7 isotope, accounting for 60% of the lithium content, was assumed to be inert…

Contrary to expectations, when the lithium-7 isotope is bombarded with high-energy neutrons, it absorbs a neutron then decomposes to form an alpha particle, another neutron, and a tritium nucleus.  This means that much more tritium was produced than expected, and the extra tritium in fusion with deuterium (as well as the extra neutron from lithium-7 decomposition) produced many more neutrons than expected, causing far more fissioning of the uranium tamper, thus increasing yield.

This resultant extra fuel (both lithium-6 and lithium-7) contributed greatly to the fusion reactions and neutron production and in this manner greatly increased the device’s explosive output.

Sadly, this calculation error resulted in much more radioactive fallout than anticipated.  Many of the crew in a nearby Japanese fishing boat, the Lucky Dragon No. 5, became ill in the wake of direct contact with the fallout.  One of the crew subsequently died from the illness – the first human casualty from thermonuclear weapons.

Suppose the error in calculation had been significantly worse – by a factor of thousands, say, rather than a factor of 2.5.  This might seem unlikely, but when we deal with powerful unknowns, we cannot rule out powerful unforeseen consequences.  Imagine if extreme human activity somehow interfered with the incompletely understood mechanisms governing supervolcanoes – such as the one that exploded around 73,000 years ago at Lake Toba (Sumatra, Indonesia) and which is thought to have reduced the worldwide human population at the time to perhaps as few as one thousand breeding pairs.

It’s not just gargantuan explosions that we need to fear.  As indicated above, the list of so-called “existential risks” includes highly contagious diseases, poisonous nano-particles, and catastrophic failures of the electronics infrastructure that underpins modern human society.  Add to these “known unknowns” the risk of “unknown unknowns” – the factors which we currently don’t even know that we should be considering.

The more quickly things change, the harder it is to foresee and monitor all the consequences.  There’s a great deal that deserves our attention.  How should we respond?

>> Next chapter >>