dw2

25 October 2015

Getting better at anticipating the future

History is replete with failed predictions. Sometimes pundits predict too much change. Sometimes they predict too little. Frequently they predict the wrong kinds of change.

Even those forecasters who claim a good track record for themselves sometimes turn out, on closer inspection, to have included lots of wiggle room in their predictions – lots of scope for creative reinterpretation of their earlier words.

Of course, forecasts are often made for purposes other than anticipating the events that will actually unfold. Forecasts can serve many other goals:

  • Raising the profile of the forecaster and potentially boosting book sales or keynote invites – especially if the forecast is memorable, and is delivered in a confident style
  • Changing the likelihood that an event predicted will occur – either making it more likely (if the prediction is enthusiastic), or making it less likely (if the prediction is fearful)
  • Helping businesses and organisations to think through some options for their future strategy, via “scenario analysis”.

Given these alternative reasons why forecasters make predictions, it perhaps becomes more understandable that little effort is made to evaluate the accuracy of past forecasts. As reported by Alex Mayyasi,

Organizations spend staggering amounts of time and money trying to predict the future, but no time or money measuring their accuracy or improving on their ability to do it.

This bizarre state of affairs may be understandable, but it’s highly irresponsible, none the less. We can, and should, do better. In a highly uncertain, volatile world, our collective future depends on improving our ability to anticipate forthcoming developments.

Philip Tetlock

Mayyasi was referring to research by Philip Tetlock, a professor at the University of Pennsylvania. Over three decades, Tetlock has accumulated huge amounts of evidence about forecasting. His most recent book, co-authored with journalist Dan Gardner, is a highly readable summary of his research.

The book is entitled “Superforecasting: The Art and Science of Prediction”. I wholeheartedly recommend it.

Superforecasting

The book carries an endorsement by Nobel laureate Daniel Kahneman:

A manual for thinking clearly in an uncertain world. Read it.

Having just finished this book, I echo the praise it has gathered. The book is grounded in the field of geopolitical forecasting, but its content ranges far beyond that starting point. For example, the book can be viewed as one of the best descriptions of the scientific method – with its elevation of systematic, thoughtful doubt, and its search for ways to reduce uncertainty and eliminate bias. The book also provides a handy summary of all kinds of recent findings about human thinking methods.

“Superforecasting” also covers the improvements in the field of medicine that followed from the adoption of evidence-based medicine (in the face, it should be remembered, of initial fierce hostility from the medical profession). Indeed, the book seeks to accelerate a similar evidence-based revolution in the fields of economic and political analysis. It even has hopes to reduce the level of hostility and rancour that tends to characterise political discussion.

As such, I see the book as making an important contribution to the creation of a better sort of politics.

Summary of “Superforecasting”

The book draws on:

  • Results from four years of online competitions for forecasters held under the Aggregative Contingent Estimation project of IARPA (Intelligence Advanced Research Projects Activity)
  • Reflections from contest participants who persistently scored highly in the competition – people who became known as ‘superforecasters’
  • Insight from the Good Judgement Project co-created by Tetlock
  • Reviews of the accuracy of predictions made publicly by politicians, political analysts, and media figures
  • Other research into decision-making, cognitive biases, and group dynamics.

Forecasters and superforecasters from the Good Judgement Project submitted more than 10,000 predictions over four years, in response to questions about the likelihood of specified outcomes occurring within given timescales, typically 3-12 months ahead. Forecasts addressed the fields of geopolitics and economics.
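These predictions were scored using Brier scores – in essence, the squared difference between the probability a forecaster assigned and what actually happened, averaged over all their forecasts (0 is perfect; lower is better). Here's a minimal Python sketch of that calculation, using invented forecasts. The book works with the original Brier formulation, which sums over both possible outcomes and so ranges from 0 to 2, but the simpler version below conveys the same idea.

```python
def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities (0..1) and
    actual outcomes (1 if the event happened, 0 if it did not).
    0.0 is a perfect score; lower is better."""
    pairs = list(zip(forecast_probs, outcomes))
    return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

# Invented example: three yes/no questions
probs    = [0.85, 0.10, 0.60]   # probabilities assigned to "yes"
outcomes = [1,    0,    0]      # what actually happened
print(round(brier_score(probs, outcomes), 3))   # 0.131
```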

The book highlights the following characteristics as the causes of the superforecasters’ success:

  • Avoidance of taking an ideological approach, which restricts the set of information that the forecaster considers
  • Pursuit of an evidence-based approach
  • Willingness to search out potential sources of disconfirming evidence
  • Willingness to incrementally adjust forecasts in the light of new evidence (see the sketch after this list)
  • The ability to break down estimates into a series of constituent questions that can, individually, be more easily calculated
  • The desire to obtain several different perspectives on a question, which can then be combined into an aggregate viewpoint
  • Comfort with mathematical and probabilistic reasoning
  • Adoption of careful, precise language, rather than vague terms (such as “might”) whose apparent meaning can change with hindsight
  • Acceptance of contingency rather than ‘fate’ or ‘inevitability’ as being the factor responsible for outcomes
  • Avoidance of ‘groupthink’ in which undue respect among team members prevents sufficient consideration of alternative viewpoints
  • Willingness to learn from past forecasting experiences – including both successes and failures
  • A growth mindset, in which personal characteristics and skill are seen as capable of improvement, rather than being fixed.
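Two of these characteristics – incremental adjustment in the light of new evidence, and combining several perspectives into an aggregate view – can be illustrated with a small sketch. This is my own illustration rather than anything lifted from the book: it updates a probability using Bayes’ rule in odds form, then averages a handful of independent estimates. (The Good Judgement Project reportedly found that pushing the averaged estimate further from 50% – “extremizing” – improved accuracy further, but a plain average captures the basic idea.)

```python
def bayes_update(prior, likelihood_ratio):
    """Adjust a probability estimate in the light of one piece of evidence.
    likelihood_ratio = P(evidence | event) / P(evidence | no event)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Start from a base rate, then nudge it as (hypothetical) evidence arrives.
p = 0.30                      # invented base rate for the event
for lr in (2.0, 1.5, 0.8):    # invented likelihood ratios for three news items
    p = bayes_update(p, lr)
print(round(p, 2))            # 0.51

# Combine several forecasters' estimates into one aggregate viewpoint.
estimates = [0.45, 0.55, 0.60, 0.40]
print(sum(estimates) / len(estimates))   # simple average: 0.5
```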

(This section draws on material I’ve added to H+Pedia earlier today. See that article for some links to further reading.)

Human pictures

Throughout “Superforecasting”, the authors provide the human backgrounds of the forecasters whose results and methods feature in the book. The superforecasters have a wide variety of backgrounds and professional experience. What they have in common, however – and where they differ from the other contest participants, whose predictions were less stellar – is the set of characteristics given above.

The book also discusses a number of well-known forecasters, and dissects the causes of their forecasting failures. These include 9/11, the wars in Iraq, the Bay of Pigs fiasco in Cuba, and many more. There’s much to learn from all these examples.

Aside: Other ways to evaluate futurists

Australian futurist Ross Dawson has recently created a very different method to evaluate the success of futurists. As Ross explains at http://rossdawson.com/futurist-rankings/:

We have created this widget to provide a rough view of how influential futurists are on the web and social media. It is not intended to be rigorous but it provides a fun and interesting insight into the online influence of leading futurists.

The score is computed from the number of Twitter followers, the Alexa score of websites, and the general Klout metric.

The widget currently lists 152 futurists. I was happy to find my name at #53 on the list. If I finish writing the two books I have in mind to publish over the next 12 months, I expect my personal ranking to climb 🙂

Yet another approach is to take a look at http://future.meetup.com/, the listing (by size) of the Meetup groups around the world that list “futurism” (or similar) as one of their interests. London Futurists, which I’ve been running (directly and indirectly) over the last seven years, features in third place on that list.

Of course, we futurists vary in the kind of topics we are ready (and willing) to talk to audiences about. In my own case, I wish to encourage audiences away from “slow-paced” futurism, towards serious consideration of the possibilities of radical changes happening within just a few decades. These changes include not just the ongoing transformation of nature, but the possible transformation of human nature. As such, I’m ready to introduce the topic of transhumanism, so that audiences become more aware of the arguments both for and against this philosophy.

Within that particular subgrouping of futurist meetups, London Futurists ranks as a clear #1, as can be seen from http://transhumanism.meetup.com/.

Footnote

Edge has published a series of videos of five “master-classes” taught by Philip Tetlock on the subject of superforecasting:

  1. Forecasting Tournaments: What We Discover When We Start Scoring Accuracy
  2. Tournaments: Prying Open Closed Minds in Unnecessarily Polarized Debates
  3. Counterfactual History: The Elusive Control Groups in Policy Debates
  4. Skillful Backward and Forward Reasoning in Time: Superforecasting Requires “Counterfactualizing”
  5. Condensing it All Into Four Big Problems and a Killer App Solution

I haven’t had the time to view them yet, but if they’re anything like as good as the book “Superforecasting”, they’ll be well worth watching.

22 February 2013

Controversies over singularitarian utopianism

I shouldn’t have been surprised at the controversy that arose.

The cause was an hour-long lecture with 55 slides, ranging far and wide over disruptive near-future scenarios, covering both upside and downside. The basic format of the lecture was: first the good news, and then the bad news. As stated on the opening slide,

Some illustrations of the enormous potential first, then some examples of how adding a high level of ambient stupidity might mean we might make a mess of it.

The speaker was Ian Pearson, described on his company website as “futurologist, conference speaker, regular media guest, strategist and writer”. The website continues, boldly,

Anyone can predict stuff, but only a few get it right…

Ian Pearson has been a full time futurologist since 1991, with a proven track record of over 85% accuracy at the 10 year horizon.

Ian was speaking, on my invitation, at the London Futurists last Saturday. His chosen topic was audacious in scope:

A Singularitarian Utopia Or A New Dark Age?

We’re all familiar with the idea of the singularity, the end-result of rapid acceleration of technology development caused by positive feedback. This will add greatly to human capability, not just via gadgets but also through direct body and mind enhancement, and we’ll mess a lot with other organisms and AIs too. So we’ll have superhumans and super AIs as part of our society.

But this new technology won’t bring a utopia. We all know that some powerful people, governments, companies and terrorists will also add lots of bad things to the mix. The same technology that lets you enhance your senses or expand your mind also allows greatly increased surveillance and control, eventually to the extremes of direct indoctrination and zombification. Taking the forces that already exist, of tribalism, political correctness, secrecy for them and exposure for us, and so on, it’s clear that the far future will be a weird mixture of fantastic capability, spoiled by abuse…

There were around 200 people in the audience, listening as Ian progressed through a series of increasingly mind-stretching technology opportunities. Judging by the comments posted online afterwards, some of the audience deeply appreciated what they heard:

Thank you for a terrific two hours, I have gone away full of ideas; I found the talk extremely interesting indeed…

I really enjoyed this provocative presentation…

Provocative and stimulating…

Very interesting. Thank you for organizing it!…

Amazing and fascinating!…

But not everyone was satisfied. Here’s an extract from one negative comment:

After the first half (a trippy sub-SciFi brainstorm session) my only question was, “What Are You On?”…

Another audience member wrote his own blogpost about the meeting:

A Singularitanian Utopia or a wasted afternoon?

…it was a warmed-over mish-mash of technological cornucopianism, seasoned with Daily Mail-style reactionary harrumphing about ‘political correctness gone mad’.

These are just the starters of negative feedback; I’ll get to others shortly. As I review what was said in the meeting, and look at the spirited ongoing exchange of comments online, some thoughts come to my mind:

  • Big ideas almost inevitably provoke big reactions; this talk had a lot of particularly big ideas
  • In some cases, the negative reactions to the talk arise from misunderstandings, due in part to so much material being covered in the presentation
  • In other cases, I see the criticisms as reactions to the seeming over-confidence of the speaker (“…a proven track record of over 85% accuracy”)
  • In yet other cases, I share the negative reactions the talk generated; my own view of the near-future landscape significantly differs from the one presented on stage
  • In nearly all cases, it’s worth taking the time to progress the discussion further
  • After all, if we get our forecasts of the future wrong, and fail to make adequate preparations for the disruptions ahead, it could make a huge difference to our collective well-being.

So let’s look again at some of the adverse reactions. My aim is to discuss them in a way that people who didn’t attend the talk should be able to follow.

(1) Is imminent transformation of much of human life a realistic scenario? Or are these ideas just science fiction?

The main driver for belief in the possible imminent transformation of human life, enabled by rapidly changing technology, is the observation of progress towards “NBIC” convergence.

Significant improvements are taking place, almost daily, in our capabilities to understand and control atoms (Nano-tech), genes and other areas of life-sciences (Bio-tech), bits (Info-comms-tech), and neurons and other areas of mind (Cogno-tech). Importantly, improvements in these different fields are interacting with each other.

As Ian Pearson described the interactions:

  • Nanotech gives us tiny devices
  • Tiny sensors help neuroscience figure out how the mind works
  • Insights from neuroscience feed into machine intelligence
  • Improving machine intelligence accelerates R&D in every field
  • Biotech and IT advances make body and machine connectable

Will all the individual possible applications of NBIC convergence described by Ian happen in precisely the way he illustrated? Very probably not. The future’s not as predictable as that. But something similar could well happen:

  • Cheaper forms of energy
  • Tissue-cultured meat
  • Space exploration
  • Further miniaturisation of personal computing (wearable computing, and even “active skin”)
  • Smart glasses
  • Augmented reality displays
  • Gel computing
  • IQ and sensory enhancement
  • Dream linking
  • Human-machine convergence
  • Digital immortality: “the under 40s might live forever… but which body would you choose?”

(2) Is a focus on smart cosmetic technology an indulgent distraction from pressing environmental issues?

Here’s one of the comments raised online after the talk:

Unfortunately any respect due was undermined by his contempt for the massive environmental challenges we face.

Trivial contact lens / jewellery technology can hang itself, if our countryside is choked by yoghurt factory fumes.

The reference to jewellery took issue with remarks in the talk such as the following:

Miniaturisation will bring everyday IT down to jewellery size…

Decoration; Social status; Digital bubble; Tribal signalling…

In contrast, the talk positioned greater use of technology as the solution to environmental issues, rather than as something to exacerbate these issues. Smaller (jewellery-sized) devices, created with a greater attention to recyclability, will diminish the environmental footprint. Ian claimed that:

  • We can produce more of everything than people need
  • Improved global land management could feed up to 20 billion people
  • Clean water will be plentiful
  • We will also need less and waste less
  • Long term pollution will decline.

Nevertheless, he acknowledged that there are some short-term problems, ahead of the time when accelerating NBIC convergence can be expected to provide more comprehensive solutions:

  • Energy shortage is a short to mid term problem
  • Real problems are short term.

Where there’s room for real debate is the extent of these shorter-term problems. Discussion on the threats from global warming brought these disagreements into sharp focus.

(3) How should singularitarians regard the threat from global warming?

Towards the end of his talk, Ian showed a pair of scales, weighing up the wins and losses of NBIC technologies and a potential singularity.

The “wins” column included health, growth, wealth, fun, and empowerment.

The “losses” column included control, surveillance, oppression, directionless, and terrorism.

One of the first questions from the floor, during the Q&A period in the meeting, asked why the risk of environmental destruction was not on the list of possible future scenarios. This criticism was echoed by online comments:

The complacency about CO2 going into the atmosphere was scary…

If we risk heading towards an environmental abyss let’s do something about what we do know – fossil fuel burning.

During his talk, I picked up on one of Ian’s comments about not being particularly concerned about the risks of global warming. I asked, what about the risks of adverse positive feedback cycles, such as increasing temperatures triggering the release of vast ancient stores of methane gas from frozen tundra, accelerating the warming cycle further? That could lead to temperature increases that are much more rapid than presently contemplated, along with lots of savage disturbance (storms, droughts, etc).

Ian countered that it was a possibility, but he had the following reservations:

  • He thought these positive feedback loops would only kick into action when baseline temperature rose by around 2 degrees
  • In the meantime, global average temperatures have stopped rising, over the last eleven years
  • He estimates he spends a couple of hours every day, keeping an eye on all sides of the global warming debate
  • There are lots of exaggerations and poor science on both sides of the debate
  • Other factors such as the influence of solar cycles deserve more research.

Here’s my own reaction to these claims:

  • The view that global average temperatures have stopped rising is, among serious scientists, very much a minority position; see e.g. this rebuttal on Carbon Brief
  • Even if there’s only a small probability of a runaway spurt of accelerated global warming in the next 10-15 years, we need to treat that risk very seriously – in the same way that, for example, we would be loath to take a transatlantic flight if we were told there was a 5% chance of the airplane disintegrating mid-flight. The rough expected-loss sketch below makes the comparison concrete.
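For what it’s worth, here’s the rough expected-loss arithmetic behind that analogy; the numbers are purely illustrative, not estimates I’m defending:

```python
# Purely illustrative: a small probability of a catastrophic outcome can
# dominate the expected loss, even when the likely outcome is benign.
p_catastrophe    = 0.05    # the "5% chance the airplane disintegrates" analogy
loss_catastrophe = 1000    # catastrophic loss, in arbitrary units
loss_normal      = 1       # cost of the mundane outcome

expected_loss = p_catastrophe * loss_catastrophe + (1 - p_catastrophe) * loss_normal
print(expected_loss)       # 50.95 - almost all of it comes from the catastrophic branch
```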

Nevertheless, I did not want the entire meeting to divert into a debate about global warming – “that deserves a full meeting in its own right”, I commented, before moving on to the next question. In retrospect, perhaps that was a mistake, since it may have caused some members of the audience to mentally disengage from the meeting.

(4) Are there distinct right-wing and left-wing approaches to the singularity?

Here’s another comment that was raised online after the talk:

I found the second half of the talk to be very disappointing and very right-wing.

And another:

Someone who lists ‘race equality’ as part of the trend towards ignorance has shown very clearly what wing he is on…

In the second half of his talk, Ian outlined changes in norms of beliefs and values. He talked about the growth of “religion substitutes” via a “random walk of values”:

  • Religious texts used to act as a fixed reference for ethical values
  • Secular society has no fixed reference point so values oscillate quickly.
  • 20 years can yield 180 degree shift
  • e.g. euthanasia, sexuality, abortion, animal rights, genetic modification, nuclear energy, family, policing, teaching, authority…
  • Pressure to conform reinforces relativism at the expense of intellectual rigour

A complicating factor here, Ian stated, was that

People have a strong need to feel they are ‘good’. Some of today’s ideological subscriptions are essentially secular substitutes for religion, and demand same suspension of free thinking and logical reasoning.

A few slides later, he listed examples of “the rise of nonsense beliefs”:

e.g. new age, alternative medicine, alternative science, 21st century piety, political correctness

He also commented that “99% are only well-informed on trivia”, such as fashion, celebrity, TV culture, sport, games, and chat virtual environments.

This analysis culminated with a slide that personally strongly resonated with me: a curve of “anti-knowledge” accelerating and overtaking a curve of “knowledge”:

In pursuit of social compliance, we are told to believe things that are known to be false.

With clever enough spin, people accept them and become worse than ignorant.

So there’s a kind of race between “knowledge” and “anti-knowledge”.

One reason this resonated with me is that it seemed like a different angle on one of my own favourite metaphors for the challenges of the next 15-30 years – the metaphor of a dramatic race:

  • One runner in the race is “increasing rationality, innovation, and collaboration”; if this runner wins, the race ends in a positive singularity
  • The other runner in the race is “increasing complexity, rapidly diminishing resources”; if this runner wins, the race ends in a negative singularity.

In the light of Ian’s analysis, I can see that the second runner is aided by the increase of anti-knowledge: over-attachment to magical, simplistic, ultimately misleading worldviews.

However, it’s one thing to agree that “anti-knowledge” is a significant factor in determining the future; it’s another thing to agree which sets of ideas count as knowledge, and which as anti-knowledge! One of Ian’s slides included the following list of “religion substitutes”:

Animal rights, political correctness, pacifism, vegetarianism, fitness, warmism, environmentalism, anti-capitalism

It’s no wonder that many of the audience felt offended. Why list “warmism” (a belief in human-caused global warming), but not “denialism” (denial of human-caused global warming)? Why list “anti-capitalism” but not “free market fundamentalism”? Why list “pacifism” but not “militarism”?

One online comment made a shrewd observation:

Ian raised my curiosity about ‘false beliefs’ (or nonsense beliefs as Ian calls them) as I ‘believe’ we all inhabit different belief systems – so what is true for one person may be false for another… at that exact moment in time.

And things can change. Once upon a time, it was a nonsense belief that the world was round.

There may be 15% of truth in some nonsense beliefs…or possibly even 85% truth. Taking ‘alternative medicine’ as an example of one of Ian’s nonsense beliefs – what if two of the many reasons it was considered nonsense were that (1) it is outside the world (the system) of science and technology and (2) it cannot be controlled by the pharmaceutical companies (perhaps our high priests of today)?

(5) The role of corporations and politicians in the approach to the singularity

One place where the right-wing / left-wing division becomes more acute is the question of whether anything special needs to be done to control the behaviour of corporations (businesses).

One of Ian’s strong positive recommendations, at the end of his presentation, was that scientists and engineers should become more actively involved in educating the general public about issues of technology. Shortly afterward, the question came from the floor: what about actions to educate or control corporations? Ian replied that he had very little to recommend to corporations, over and above his recommendations to the individuals within these corporations.

My own view is different. From my life inside industry, I’ve seen numerous cases of good people who are significantly constrained in their actions by the company systems and metrics in which they find themselves enmeshed.

Indeed, just as people should be alarmed about the prospects of super-AIs gaining too much power, over and above the humans who created them, we should also be alarmed about the powers that super-corporations are accumulating, over and above the powers and intentions of their employees.

The argument to leave corporations alone finds its roots in ideologies of freedom: government regulation of corporations often has undesirable side-effects. Nevertheless, that’s just an argument for being smarter and more effective in how the regulation works – not an argument to abstain from regulation altogether.

The question of the appropriate forms of collaborative governance remains one of the really hard issues facing anyone concerned about the future. Leaving corporations to find their own best solutions is, in my view, very unlikely to be the optimum approach.

In terms of how “laissez-faire” we should be, in the face of potential apocalypse down the road, I agree with the assessment near the end of Jeremy Green’s blogpost:

Pearson’s closing assertion that in the end our politicians will always wake up and pull us back from the brink of any disaster is belied by many examples of civilisations that did not pull back and went right over the edge to destruction.

Endnote:

After the presentation in Birkbeck College ended, around 40-50 of the audience regrouped in a nearby pub, to continue the discussion. The discussion is also continuing, at a different tempo, in the online pages of the London Futurists meetup. Ian Pearson deserves hearty congratulation for stirring up what has turned out to be an enlightening discussion – even though there’s heat in the comments as well as light!

Evidently, the discussion is far from complete…

20 February 2013

The world’s most eminent sociologist highlights the technological singularity

It’s not every day that the world’s most eminent sociologist reveals himself as having an intense interest in the Technological Singularity, and urges that “Everyone should read the books of Ray Kurzweil”. That’s what happened this evening.

The speaker in question was Lord Anthony Giddens, one of whose many claims to fame is his description as “Tony Blair’s guru”.

His biography states that, “According to Google Scholar, he is the most widely cited sociologist in the world today.”

In support of that claim, a 2009 article in the Times Higher Education supplement notes the following:

Giddens trumps Marx…

A list published today by Times Higher Education reveals the most-cited academic authors of books in the humanities…

As one of the world’s pre-eminent sociologists, Anthony Giddens, the Labour peer and former director of the London School of Economics, will be used to academic accolades.

But even he may be pleased to hear that his books are cited more often than those of iconic thinkers such as Sigmund Freud and Karl Marx.

Lord Giddens, now emeritus professor at LSE and a life fellow at King’s College, Cambridge, is the fifth most-referenced author of books in the humanities, according to the list produced by scientific data analysts Thomson Reuters.

The only living scholar ranked higher is Albert Bandura, the Canadian psychologist and pioneer of social learning theory at Stanford University…

Freud enters the list in 11th place. The American linguist and philosopher Noam Chomsky, who is based at the Massachusetts Institute of Technology and whose political books have a broader readership than some of his peers in the list, is 15th…

Lord Giddens is now 75 years old. Earlier this evening, I saw for myself evidence of his remarkable calibre. He gave an hour-long lecture in front of a packed audience at the London School of Economics, without any notes or slides, and without any hesitation, deviation, or verbal infelicity. Throughout, his remarks bristled with compelling ideas. He was equally competent – and equally fluent – when it came to the question-and-answer portion of the event.

LSE Events

The lecture was entitled “Off the edge of history: the world in the 21st century”. From its description on the LSE website, I had already identified it as relevant to many of the themes that I seek to have discussed in the series of London Futurists meetups that I chair:

The risks we face, and the opportunities we have, in the 21st century are in many respects quite different from those experienced in earlier periods of history. How should we analyse and respond to such a world? What is a rational balance of optimism and pessimism? How can we plan for a future that seems to elude our grasp and in some ways is imponderable?

As the lecture proceeded, I was very pleasantly impressed by the sequence of ideas. I append here a lightly edited copy of the verbatim notes I took on my Psion Series 5mx, supplemented by a few additions from the #LSEGiddens tweet stream. Added afterwards: the LSE has made a podcast available of the talk.

My rough notes from the talk follow… (text in italics marks my parenthetical comments)

This large lecture room is completely full, twenty minutes before the lecture is due to start. I’m glad I arrived early!

Today’s topic is work in progress – he’s writing a book on the same topic, “Off the edge of history”.

  • Note this is a very different thesis from “the end of history”.

His starting point is in the subject of geology – a long way from sociology. He’s been working on climate change for the last seven years. It’s his first time to work so closely with scientists.

Geologists tend to call the present age “the Holocene age” – the last 12,000 years. But a geologist called Paul Crutzen recommended that we should use a different term for the last 200 years or so – we’re now in the Anthropocene age:

  • In this period, human activity strongly influences nature and the environment
  • This re-orients and restructures the world of geology
  • A great deal of what used to be natural, is natural no longer
  • Human beings are invading nature, in a way that has no precedent
  • Even some apparently natural catastrophes, like tsunamis and volcanoes, might be linked to impacts from humans.

We have continuities from previous history (of course), but so many things are different nowadays. One example is the impacts of new forms of biological threat. Disease organisms have skipped from animals to human beings. New disease organisms are being synthesised.

There are threats facing us, which are in no ways extensions of previous threats.

For example, what is the Internet doing to the world? Is it a gigantic new mind? Are you using the mobile phone, or is the mobile phone using you? There’s no parallel from previous periods. Globally connected electronic communications are fundamentally different from what went before.

When you are dealing with risks you’ve never experienced before, you can’t measure them. You’ll only know for sure when it’s too late. We’re on the edge of history because we are dealing with risks we have never faced before.

Just as we are invading nature, we are invading human nature in a way that’s unprecedented.

Do you know about the Singularity? (A smattering of people in the audience raise their hands.) It’s mind-blowing. You should find out about it:

  • It’s based on a mathematical concept
  • It’s accelerating processes of growth, rapidly disappearing to a far off point very different from today.

Everyone should read the books of Ray Kurzweil – who has recently become an Engineering Director at Google.

Kurzweil’s book makes it clear that:

  • Within our lifetimes, human beings will no longer be human beings
  • There are multiple accelerating rates of change in several different disciplines
  • The three main disciplines contributing to the singularity are nanotech, AI, and biotech
  • All are transforming our understanding of the human body and, more importantly, the human mind
  • This is described by the “Law of accelerating returns”
  • Progress is not just linear but geometrical.

This book opens our minds to multiple possibilities of what it means to be human, as technology penetrates us.

Nanotech is like humans playing God:

  • It’s a level below DNA
  • We can use it to rebuild many parts of the human body, and other artefacts in the world.

Kurzweil states that human beings will develop intelligence which is 100x higher than at present:

  • Because of merging of human bodies with computers
  • Because of the impact of nanotech.

Kurzweil gives this advice: if you are relatively young: live long, in order to live forever:

  • Immortality is no longer a religious concept, it’s now a tangible prospect
  • It could happen in the next 20-40 years.

This is a fantastic expansion of what it means to be human. Importantly, it’s a spread of opportunities and risk.

These were religious notions before. Now we have the real possibility of apocalypse – we’ve had it since the 1950s, when the first thermonuclear weapons were invented. The possibility of immortality has become real too.

We don’t know how to chart these possibilities. None of us know how to fill in that gap.

What science fiction writers were writing 20 years ago is now in the newspapers every day. Reading from the Guardian from a couple of days ago:

Paralysed people could get movement back through thought control

Brain implant could allow people to ‘feel’ the presence of infrared light and one day be used to move artificial limbs

Scientists have moved closer to allowing paralysed people to control artificial limbs with their thoughts following a breakthrough in technology…

…part of a series of sessions on advances in brain-machine interfaces, at which other scientists presented a bionic hand that could connect directly to the nerves in a person’s arm and provide sensory feedback of what they were holding.

Until now, neurological prosthetics have largely been demonstrated as a way to restore a loss of function. Last year, a 58-year-old woman who had become paralysed after a stroke demonstrated that she could use a robotic arm to bring a cup of coffee to her mouth and take a sip, just by thinking about it…

In the future…  it might be possible to use prosthetic devices to restore vision – for example, if a person’s visual cortex had been damaged – by training a different part of the brain to process the information.

Or you could even augment normal brain function in non-invasive ways to deliver the information.

We could learn to detect other sorts of signals that we normally don’t see or experience; the perceptual range could increase.

These things are real; these things are happening. There is a kind of geometric advance.

The literature of social scientists has a big division here, between doomsday thinkers and optimists, with respected thinkers in both camps.

Sir Martin Rees is an example of the first category. He wrote a book called “Our final century”:

  • It examines forms of risk that could destroy our society
  • Climate change is a huge existential risk – most people aren’t aware of it
  • Nanotech is another existential risk – grey goo scenario
  • We also have lots of weaponry: drones circulating above the world even as we speak
  • Most previous civilisations have ended in disaster – they subverted themselves
  • For the first time, we have a civilisation on a global scale
  • It could well be our final century.

Optimists include Matt Ridley, a businessman turned scientist, and author of the book “The rational optimist”:

  • Over the course of human civilisation there is progress – including progress in culture, and medical advances.

This is a big division. How do we sort this out? His view: it’s not possible to decide. We need to recognise that we live in a “high opportunity, high risk society”:

  • The level of opportunity and level of risk are both much higher than before
  • But risk and opportunity always intertwine
  • “In every risk there’s an opportunity…” and vice versa
  • We must be aware of the twists and tangles of risk and opportunity – their interpenetration.

Studying this area has led him to change some of his views from before:

  • He now sees the goal of sustainability as a harder thing than before
  • Living within our limits makes sense, but we no longer know what our limits are
  • We have to respect limits, but also recognise that limits can be changed.

For example, could we regard a world population of 9 billion people as an opportunity, rather than just a risk?

  • It would lead us to put lots more focus on food innovation, blue sky tech for agriculture, social reform, etc – all good things.

A few points to help us sort things out:

  1. One can never avoid risk – we live in a world subject to extreme system risk; we mustn’t live in denial of risk in our personal life (like denying the risks of smoking or riding motor cycles) or at a civilisational level
  2. We have to think about the future in a very different way, because the future has become opaque to us; the enlightenment thought was that we would march in and make sense of history (Marx had similar thoughts), but it turns out that the future is actually opaque – for our personal lives too as well as society (he wonders whether the EU will still exist by the time he finishes his book on the future of the EU!)
  3. We’ll have to learn to backcast rather than forecast – to borrow an idea from the study of climate change. We have to think ahead, and then think back.

This project is the grand task of social sciences in the 21st century.

One more example: the possibility of re-shoring of jobs in the US and EU:

  • 3D printing is an unbelievable technological invention
  • 3D printers can already print shoes
  • A printer in an MIT lab can print whole systems – eg in due course a plane which will fly directly out of the computer
  • This will likely produce a revolution in manufacturing – many, many implications.

Final rhetorical question: As we confront this world, should we be pessimists or optimists? This is the same question he used to consider, at the end of the talks he used to give on climate change.

His answer: we should bracket out that opposition; it’s much more important to be rational than either pessimist or optimist:

  • Compare the case of someone with very serious cancer – they need more than wishful thinking. Need rational underpinning of optimism and/or pessimism.

Resounding applause from the audience. Then commence questions and answers.

Q: Are today’s governance structures, at local and national levels, fit to deal with these issues?

A: No. For example, the European Union has proved not to be the vanguard of global governance that we hoped it would be. Climate change is another clear example: twenty years of UN meetings with no useful outcome whatsoever.

Q: Are our human cognitive powers capable to deal with these problems? Is there a role for technology to assist our cognitive powers?

A: Our human powers are facing a pretty difficult challenge. It’s human nature to put off what we don’t have to do today – like 16-year-olds taking up smoking, who can’t really see themselves being 40. Maybe a supermind might be more effective.

Q: Although he has given examples where current governance models are failing, are there any bright spots of hope for governance? (The questioner in this case was me.)

A: There are some hopeful signs for economic governance. Surely bankers will not get away with what they’ve done. Movement to address tax havens (“onslaught”) – bring the money back as well as bringing the jobs back. Will require global co-operation. Nuclear proliferation (Iran, Israel) is as dangerous as climate change. The international community has done quite well with non-proliferation, but it only takes one nuclear war for things to go terribly wrong.

Q: What practical advice would he give to the Prime Minister (or to Ed Miliband)?

A: He supports Ed Miliband trying to restructure capitalism; there are similar moves happening in the US too. However, with global issues like these, any individual prime minister is limited in his influence. For better or for worse, Ray Kurzweil has more influence than any politician!

(Which is a remarkable thing to say, for someone who used to work so closely with Prime Minister Tony Blair…)

1 January 2012

Planning for optimal ‘flow’ in an uncertain world

Filed under: Agile, books, critical chain, flow, lean, predictability — David Wood @ 1:44 pm

In a world with enormous uncertainty, what is the best planning methodology?

I’ve long been sceptical about elaborate planning – hence my enthusiasm for what’s often called ‘agile‘ and ‘lean‘ development processes.  Indeed, I devoted a significant chunk of my book “Symbian for software leaders – principles of successful smartphone development projects” to comparing and contrasting the “plan is king” approach to an agile approach.

But the passage of time brings deeper insight.  Key thinkers in this field now refer to “second generation lean product development”.  Perhaps paramount among these thinkers is the veteran analyst of best practice in new product development, Donald Reinertsen.  I’ve been influenced by his ideas more than once in my career already:

  • In the early 1990s, while I was a software engineering manager at Psion, my boss at the time recommended I read Reinertsen’s “Developing Products in Half the Time“. It was great advice!
  • In the early 2000s, while I was EVP at Symbian, I remember enjoying insights from Reinertsen’s “Managing the Design Factory“.

I was recently pleased to discover Reinertsen has put pen to paper again.  The result is “The Principles of Product Development Flow: Second Generation Lean Product Development“.

The following Amazon.com review of the latest book, by Maurice Hagar, persuaded me to purchase that book:

This new standard on lean product and software development challenges orthodox thinking on every side and is required reading. It’s fairly technical and not an easy read but well worth the effort.

For the traditionalist, add to cart if you want to learn:

  • Why prioritizing work “on the basis of project profitability measures like return on investment (ROI)” is a mistake
  • Why we should manage queues instead of timelines
  • Why “trying to estimate the amount of work in queue” is a waste of time
  • Why our focus on efficiency, capacity utilization, and preventing and correcting deviations from the plan “are fundamentally wrong”
  • Why “systematic top-down design of the entire system” is risky
  • Why bottom-up estimating is flawed
  • Why reducing defects may be costing us money
  • Why we should “watch the work product, not the worker”
  • Why rewarding specialization is a bad idea
  • Why centralizing control in project management offices and information systems is dangerous
  • Why a bad decision made rapidly “is far better” than the right decision made late and “one of the biggest mistakes a leader could make is to stifle initiative”
  •  Why communicating failures is more important than communicating successes

For the Agilist, add to cart if you want to learn:

  • Why command-and-control is essential to prevent misalignment, local optimization, chaos, even disaster
  • Why traditional conformance to a plan and strong change control and risk management is sometimes preferable to adaptive management
  • Why the economies of scale from centralized, shared resources are sometimes preferable to dedicated teams
  • Why clear roles and boundaries are sometimes preferable to swarming “the way five-year-olds approach soccer”
  • Why predictable behavior is more important than shared values for building trust and teamwork
  • Why even professionals should have synchronized coffee breaks…

Even in the first few pages, I’ve found some cracking good quotes.

Here’s one on economics and “the cost of late changes”:

Our central premise is that we do product development to make money.  This economic goal permits us to use economic thinking and allows us to see many issues with a fresh point of view.  It illuminates the grave problems with the current orthodoxy.

The current orthodoxy does not focus on understanding deeper economic relationships.  Instead, it is, at best, based on observing correlations between pairs of proxy variables.  For example, it observes that late design changes have higher costs than early design changes, and prescribes front-loading problem solving.  This ignores the fact that late changes can also create enormous economic value.  The economic effect of a late change can only be evaluated by considering its complete economic impact.

And on “worship of conformance”:

In addition to deeply misunderstanding variability, today’s product developers have deep-rooted misconceptions on how to react to this variability.  They believe that they should always strive to make actual performance conform to the original plan.  They assume that the benefit of correcting a deviation from the plan will always exceed the cost of doing so.  This places completely unwarranted trust in the original plan, and it blocks companies from exploiting emergent opportunities.  Such behaviour makes no economic sense.

We live in an uncertain world.  We must recognise that our original plan was based on noisy data, viewed from a long time-horizon…  Emergent information completely changes the economics of our original choice.  In such cases, blindly insisting on conformance to the original plan destroys economic value.

To manage product development effectively, we must recognise that valuable new information is constantly arriving throughout the development cycle.  Rather than remaining frozen in time, locked to the original plan, we must learn to make good economic choices using this emerging information.

Conformance to the original plan has become another obstacle blocking our ability to make good economic choices.  Once again, we have a case of a proxy variable, conformance, obscuring the real issue, which is making good economic decisions…

Next, on flow control and the sequencing of tasks:

We are interested in finding economically optimum sequences for tasks.  Current practices use fairly crude approaches to sequencing.

For example, it suggests that if subsystem B depends on subsystem A, it would be better to sequence the design of A first.  This logic optimises efficiency as a proxy variable.  When we consider overall economics, as we do in this book, we often reach different conclusions.  For example, it may be better to develop both A and B simultaneously, despite the risk of inefficient rework, because parallel development can save cycle time.

In this book, our model for flow control will not be manufacturing systems, since these systems primarily deal with predictable and homogeneous flows.  Instead, we will look at lessons that can be learned from telecommunications networks and computer operating systems.  Both of these domains have decades of experience dealing with non-homogeneous and highly variable flows.
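The economic yardstick Reinertsen keeps returning to for decisions like this is cost of delay. One scheduling rule derived from it – “weighted shortest job first”, which (as I recall) the book itself advocates – is to sequence work by cost of delay divided by duration, rather than by ROI or by dependency order alone. Here’s a minimal sketch of that rule, with invented task data:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    cost_of_delay: float   # economic cost per week of delaying this task
    duration: float        # weeks of scarce development capacity it needs

# Invented backlog
tasks = [
    Task("A", cost_of_delay=10, duration=5),
    Task("B", cost_of_delay=3,  duration=1),
    Task("C", cost_of_delay=8,  duration=4),
]

# Weighted shortest job first: highest cost-of-delay per week of duration goes first.
for t in sorted(tasks, key=lambda t: t.cost_of_delay / t.duration, reverse=True):
    print(t.name, round(t.cost_of_delay / t.duration, 1))
# B 3.0, A 2.0, C 2.0 - the quick job with a modest cost of delay jumps the queue
```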

Finally, on fast feedback:

Developers rely on feedback to influence subsequent choices.  Or, at least, they should.  Unfortunately, our current orthodoxy views feedback as an element of an undesirable rework loop.  It asserts that we should prevent the need for rework by having engineers design things right the first time.

We will present a radically different view, suggesting that feedback is what permits us to operate our product development process effectively in a very noisy environment.  Feedback allows us to efficiently adapt to unpredictability.

To be clear, Reinertsen’s book doesn’t just point out issues with what he calls “current practice” or “orthodoxy”.  He also points out shortcomings in various first generation lean models, such as Eliyahu Goldratt’s “Critical Chain” methodology (as described in Goldratt’s “Theory of Constraints”), and Kanban.  For example, in discussing the minimisation of Work In Process (WIP) inventory, Reinertsen says the following:

WIP constraints are a powerful way to gain control over cycle time in the presence of variability.  This is particularly important where variability accumulates, such as in product development…

We will discuss two common methods of constraining WIP: the kanban system and Goldratt’s Theory of Constraints.  These methods are relatively static.  We will also examine how telecommunications networks use WIP constraints in a much more dynamic way.  Once again, telecommunications networks are interesting to us as product developers, because they deal successfully with inherently high variability.

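One simple way to see why WIP constraints give “control over cycle time” is Little’s law: on average, time in process equals WIP divided by throughput. A quick sketch, with invented numbers:

```python
def average_cycle_time(avg_wip, avg_throughput_per_week):
    """Little's law: average time in process = average WIP / average throughput."""
    return avg_wip / avg_throughput_per_week

# Invented numbers for a team finishing 4 work items per week on average:
print(average_cycle_time(avg_wip=40, avg_throughput_per_week=4))  # 10.0 weeks in process
print(average_cycle_time(avg_wip=12, avg_throughput_per_week=4))  # 3.0 weeks in process
# Capping WIP (as a kanban system does) is what pulls the first figure down
# towards the second, without anyone having to work faster.
```
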
Hopefully that’s a good set of tasters for what will follow!

5 April 2010

The ascent of money: huge opportunities and huge risks

Filed under: books, Economics, predictability — David Wood @ 9:36 pm

The turning point of the American Civil War.  The defeat of Napoleon.  The lead-up to the French Revolution.  The decline of Imperial Spain.  These chapters of history all have intriguing back stories – according to Harvard professor Niall Ferguson, in his book “The Ascent of Money: A Financial History of the World“.

The back stories, each time, refer to the strengths and weaknesses of evolving financial systems.

Appreciating these back stories isn’t just an intellectual curiosity.  It provides rich context for the view that financial systems are sophisticated and complex entities that deserve much wider understanding.  Without this understanding, it’s all too easy for people to hold one or other overly-simplistic understanding of financial systems, such as:

  • Financial systems are all fundamentally flawed;
  • Financial systems are all fundamentally beneficial;
  • There are “sure thing” investments which people can learn about;
  • Financial systems should be picked apart – the world would be better off without them;
  • Markets are inherently insane;
  • Markets are inherently sane;
  • Bankers (and their ilk) deserve our scorn;
  • Bankers (and their ilk) deserve our deep gratitude.

As the book progresses, Ferguson sweeps forwards and backwards throughout history, gradually building up a fuller picture of evolving financial systems:

  • The banking system;
  • Government bonds;
  • Stock markets;
  • Insurance and securities;
  • The housing market;
  • Hedge funds;
  • Globalisation;
  • The growing role of China in financial systems.

Like me, Ferguson was born in Scotland.  I was struck by the number of Scots-born heroes and villains the book introduces, including an infamous Glaswegian loan shark, the creators of the first true insurance company, officers of the companies involved in the Anglo-China “Opium Wars”, and John Law – instigator in France of one of history’s first great stock market bubbles.  Of course, many non-Scots have starring roles too – including Shakespeare’s Shylock, the Medicis, the Rothschilds, George Soros, the managers of Enron, Milton Friedman, and John Maynard Keynes.

Time and again, Ferguson highlights lessons for the present day.  Yes, new financial systems can liberate great amounts of creativity.  Innovation in financial systems can provide significant benefits for society.  But, at the same time, financial systems can be mis-managed, with dreadful consequences.  One major contributory cause of mis-managing these systems is when people lack a proper historical perspective – for example, when the experience of leading financiers is just of times of growth, rather than times of savage decline.

Among many fascinating episodes covered in the book, I found two to be particularly chilling:

  • The astonishing (in retrospect) over-confidence of observers in the period leading up to the First World War, that any such war could not possibly happen;
  • The astonishing (in retrospect) over-confidence of the managers of the Long Term Capital Management (LTCM) hedge fund, that their fund could not possibly fail.

Veteran journalist Hamish McRae describes some of the pre-WWI thinking in his review of Ferguson’s book in The Independent:

The 19th-century globalisation ended with the catastrophe of the First World War. It is really scary to realise how unaware people were of the fragility of those times. In 1910, the British journalist Norman Angell published The Great Illusion, in which he argued that war between the great powers had become an economic impossibility because of “the delicate interdependence of international finance”.

In spring 1914 an international commission reported on the Balkan Wars of 1912-13. The British member of the commission, Henry Noel Brailsford, wrote: “In Europe the epoch of conquest is over and save in the Balkans perhaps on the fringes of the Austrian and Russian empires, it is as certain as anything in politics that the frontiers of our national states are finally drawn. My own belief is that there will be no more war among the six powers.”

And Ferguson re-tells the story of LTCM in his online article “Wall Street Lays Another Egg” (which also covers many of the other themes from his book):

…how exactly do you price a derivative? What precisely is an option worth? The answers to those questions required a revolution in financial theory. From an academic point of view, what this revolution achieved was highly impressive. But the events of the 1990s, as the rise of quantitative finance replaced preppies with quants (quantitative analysts) all along Wall Street, revealed a new truth: those whom the gods want to destroy they first teach math.

Working closely with Fischer Black, of the consulting firm Arthur D. Little, M.I.T.’s Myron Scholes invented a groundbreaking new theory of pricing options, to which his colleague Robert Merton also contributed. (Scholes and Merton would share the 1997 Nobel Prize in economics.) They reasoned that a call option’s value depended on six variables: the current market price of the stock (S), the agreed future price at which the stock could be bought (L), the time until the expiration date of the option (t), the risk-free rate of return in the economy as a whole (r), the probability that the option will be exercised (N), and—the crucial variable—the expected volatility of the stock, i.e., the likely fluctuations of its price between the time of purchase and the expiration date (s). With wonderful mathematical wizardry, the quants reduced the price of a call option to this formula (the Black-Scholes formula).

Feeling a bit baffled? Can’t follow the algebra? That was just fine by the quants. To make money from this magic formula, they needed markets to be full of people who didn’t have a clue about how to price options but relied instead on their (seldom accurate) gut instincts. They also needed a great deal of computing power, a force which had been transforming the financial markets since the early 1980s. Their final requirement was a partner with some market savvy in order to make the leap from the faculty club to the trading floor. Black, who would soon be struck down by cancer, could not be that partner. But John Meriwether could. The former head of the bond-arbitrage group at Salomon Brothers, Meriwether had made his first fortune in the wake of the S&L meltdown of the late 1980s. The hedge fund he created with Scholes and Merton in 1994 was called Long-Term Capital Management.

In its brief, four-year life, Long-Term was the brightest star in the hedge-fund firmament, generating mind-blowing returns for its elite club of investors and even more money for its founders. Needless to say, the firm did more than just trade options, though selling puts on the stock market became such a big part of its business that it was nicknamed “the central bank of volatility” by banks buying insurance against a big stock-market sell-off. In fact, the partners were simultaneously pursuing multiple trading strategies, about 100 of them, with a total of 7,600 positions. This conformed to a second key rule of the new mathematical finance: the virtue of diversification, a principle that had been formalized by Harry M. Markowitz, of the Rand Corporation. Diversification was all about having a multitude of uncorrelated positions. One might go wrong, or even two. But thousands just could not go wrong simultaneously.

The mathematics were reassuring. According to the firm’s “Value at Risk” models, it would take a 10-σ (in other words, 10-standard-deviation) event to cause the firm to lose all its capital in a single year. But the probability of such an event, according to the quants, was 1 in 10^24—or effectively zero. Indeed, the models said the most Long-Term was likely to lose in a single day was $45 million. For that reason, the partners felt no compunction about leveraging their trades. At the end of August 1997, the fund’s capital was $6.7 billion, but the debt-financed assets on its balance sheet amounted to $126 billion, a ratio of assets to capital of 19 to 1.

There is no need to rehearse here the story of Long-Term’s downfall, which was precipitated by a Russian debt default. Suffice it to say that on Friday, August 21, 1998, the firm lost $550 million—15 percent of its entire capital, and vastly more than its mathematical models had said was possible. The key point is to appreciate why the quants were so wrong.

The problem lay with the assumptions that underlie so much of mathematical finance. In order to construct their models, the quants had to postulate a planet where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximize profits; where they never stopped trading; where markets were continuous, frictionless, and completely liquid. Financial markets on this planet followed a “random walk,” meaning that each day’s prices were quite unrelated to the previous day’s, but reflected no more and no less than all the relevant information currently available. The returns on this planet’s stock market were normally distributed along the bell curve, with most years clustered closely around the mean, and two-thirds of them within one standard deviation of the mean. On such a planet, a “six standard deviation” sell-off would be about as common as a person shorter than one foot in our world. It would happen only once in four million years of trading.
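
The “four million years” figure follows directly from the bell curve: a six-standard-deviation daily sell-off has a probability of roughly one in a billion under a normal distribution, and at roughly 250 trading days a year that works out to about once every four million years. A small check of that arithmetic (the 250-day trading year is my assumption):

```python
from scipy.stats import norm

p_down_6_sigma = norm.cdf(-6)        # P(daily return < mean minus 6 standard deviations)
days_between = 1 / p_down_6_sigma    # expected trading days between such sell-offs
years_between = days_between / 250   # assuming ~250 trading days per year
print(f"p = {p_down_6_sigma:.1e}, i.e. roughly once every {years_between / 1e6:.1f} million years")
```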

But Long-Term was not located on Planet Finance. It was based in Greenwich, Connecticut, on Planet Earth, a place inhabited by emotional human beings, always capable of flipping suddenly and en masse from greed to fear. In the case of Long-Term, the herding problem was acute, because many other firms had begun trying to copy Long-Term’s strategies in the hope of replicating its stellar performance. When things began to go wrong, there was a truly bovine stampede for the exits. The result was a massive, synchronized downturn in virtually all asset markets. Diversification was no defense in such a crisis. As one leading London hedge-fund manager later put it to Meriwether, “John, you were the correlation.”

There was, however, another reason why Long-Term failed. The quants’ Value at Risk models had implied that the loss the firm suffered in August 1998 was so unlikely that it ought never to have happened in the entire life of the universe. But that was because the models were working with just five years of data. If they had gone back even 11 years, they would have captured the 1987 stock-market crash. If they had gone back 80 years they would have captured the last great Russian default, after the 1917 revolution. Meriwether himself, born in 1947, ruefully observed, “If I had lived through the Depression, I would have been in a better position to understand events.” To put it bluntly, the Nobel Prize winners knew plenty of mathematics but not enough history.

These episodes should remind us of the fragility of our current situation.  Indeed, as one of many potential future scenarios, Ferguson candidly discusses the prospects for a serious breakdown in relations between China and the west, akin to the breakdown of relations that precipitated the First World War.

In summary: I recommend this book, not only because it is full of intriguing anecdotes, but also because it will help to raise awareness of the complex impacts of financial systems. It will help boost general literacy about all aspects of money – and should, therefore, help us to be more effective in how we collectively manage financial innovation.

Note: There are two editions of this book: one released in 2008, and one released in 2009.  The latter has a fuller account of the recent global financial crisis, and for that reason, is the better one to read.

1 October 2008

The student syndrome

Filed under: Agile, critical chain, Essay contest, predictability — David Wood @ 5:13 pm

Entries for Symbian’s 2008 Student Essay Contest have just closed. The deadline for submission of entries was midnight (GMT) on 30 September 2008.

The contest has been advertised since June. What proportion of all the entries do you suppose were submitted in the final six hours before the deadline expired? (Bear in mind that, out of a total competition duration of more than three months, six hours is about 1/400 of the available time.)

I’ll give the answer at the end of this article. It surprised me – though I ought to have anticipated the outcome. After all, for many years I’ve been telling people about “The Student Syndrome”.

I became familiar with the concept of the student syndrome some years ago, while reading Eliyahu Goldratt’s fine business-oriented novel “Critical Chain”.

Like all Goldratt’s novels, Critical Chain mixes human interest with some intriguing ways of analysing business-critical topics. The ideas in these books had a big influence on the evolution of my own views about how to incorporate responsiveness and agility into large software projects where customers are heavily reliant on the software being delivered at pre-agreed dates.

Here’s what I said on the topic of “variable task estimates” in the chapter “Managing plans and change” in my own 2005 book “Symbian for software leaders”:

A smartphone project plan is made up from a large number of estimates for how long it will take to complete individual tasks. If the task involves novel work, or novel circumstances, or a novel integration environment, you can have a wide range of estimates for the length of time required.

It’s similar to estimating how long you will take to complete an unfamiliar journey in a busy city with potentially unreliable transport infrastructure. Let’s say that, if you are lucky, you might complete the journey in just 20 minutes. Perhaps 30 minutes is the most likely time duration. But in view of potential traffic hold-ups or train delays, you could take as long as one hour, or (in case of underground train derailments) even two hours or longer. So there’s a range of estimates, with the distribution curve having a long tail on the right hand side: there’s a non-negligible probability that the task will take at least twice as long as the individual most likely outcome.

It’s often the same with estimating the length of time for a task within a project plan.
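
To make the shape of that distribution concrete, here is a small simulation of a right-skewed (lognormal) task time whose most likely value is about 30 minutes. The parameter choices are purely illustrative assumptions of mine, not figures from the book.

```python
import numpy as np

rng = np.random.default_rng(0)

# A lognormal whose mode (most likely value) is about 30 minutes
sigma = 0.5
mu = np.log(30) + sigma ** 2          # mode of a lognormal = exp(mu - sigma^2)
minutes = rng.lognormal(mu, sigma, size=100_000)

print(f"most likely (mode)   ~ {np.exp(mu - sigma**2):.0f} min")
print(f"median               ~ {np.median(minutes):.0f} min")
print(f"95th percentile      ~ {np.percentile(minutes, 95):.0f} min")
print(f"P(takes over 60 min) ~ {np.mean(minutes > 60):.0%}")
```

Roughly one run in five takes at least twice as long as the most likely value – exactly the long right-hand tail described above.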

Now imagine that the company culture puts a strong emphasis on fulfilling commitments, and never missing deadlines. If developers are asked to state a length of time in which they have (say) 95% confidence they will finish the task, they are likely to give an answer that is at least twice as long as the individual most likely outcome. They do so because:

  • Customers may make large financial decisions dependent on the estimate – on the assumption that it will be met;
  • Bonus payments to developers may depend on hitting the target;
  • The developers have to plan on unforeseen task interference (and other changes);
  • Any estimate the developers provide may get squashed down by aggressive senior managers (so they’d better pad their estimate in advance, making it even longer).

Ironically, even though such estimates are designed to be fulfilled around 95% of the time, they typically end up being fulfilled only around 50% of the time. This fact deserves some careful reflection. Even though the estimates were generous, it seems (at first sight) that they were not generous enough. In fact, here’s what happens:

  • In fulfilment of “Parkinson’s Law”, tasks expand to fill the available time. Developers can always find ways to improve and optimise their solutions – adding extra test cases, considering alternative algorithms and generalisations, and so forth;
  • Because there’s a perception (in at least the beginning of the time period) of there being ample time, developers often put off becoming fully involved in their tasks. This is sometimes called “the student syndrome”, from the observation that most students do most of the preparation for an exam in the time period just before the exam. The time lost in this way can never be regained;
  • Because there’s a perception of there being ample time, developers can become involved in other activities at the same time. However, these other activities often last longer than intended. So the developer ends up multi-tasking between two (or more) activities. But multi-tasking involves significant task setup time – time to become deeply involved in each different task (time to enter “flow mode” for the task). So yet more time is wasted;
  • Critically, even when a task is ready to finish earlier than expected, the project plan can rarely take advantage of this fact. The people who were scheduled for the next task probably aren’t ready to start it earlier than anticipated. So an early finish by one task rarely translates into an early start by the next task. On the other hand, a late finish by one task inevitably means a late start for the next task. This task asymmetry drives the whole schedule later (see the simulation sketch after this list).
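
As a rough illustration of how that asymmetry plays out over a whole chain of tasks, here is a toy Monte Carlo sketch. The numbers (twenty 10-day tasks, each varying symmetrically by up to ±30% around plan) are illustrative assumptions of mine, not Symbian data: even though each individual task is as likely to finish early as late, the chain as a whole is nearly always late, because early finishes are wasted while late finishes propagate.

```python
import random

random.seed(42)

PLANNED = 10        # planned duration of each task, in days (illustrative)
N_TASKS = 20        # length of the task chain
N_RUNS = 10_000     # Monte Carlo runs

def run_chain():
    finish = 0.0
    for i in range(N_TASKS):
        scheduled_start = i * PLANNED
        start = max(scheduled_start, finish)                  # an early predecessor is not exploited...
        finish = start + random.uniform(0.7, 1.3) * PLANNED   # ...but a late one pushes this task back
    return finish

planned_end = N_TASKS * PLANNED
slips = [run_chain() - planned_end for _ in range(N_RUNS)]
print(f"average slip: {sum(slips) / N_RUNS:.1f} days")
print(f"runs finishing on or before plan: {sum(s <= 0 for s in slips) / N_RUNS:.0%}")
```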

In conclusion, in a company whose culture puts a strong emphasis upon fulfilling commitments and never missing deadlines, the agreed schedules are built from estimates up to twice as long as the individual most likely outcomes, and even so, they often miss these extended deadlines…

This line of analysis is one I’ve run through scores of times, in discussions with people over the last four or five years. It feeds into the argument that the best way to ensure customer satisfaction and predictable delivery is, counter-intuitively, to focus more on software quality, interim customer feedback, agile project management, self-motivated teams, and general principles of excellence in software development, than on schedule management itself.

It’s in line with what Steve McConnell says:

  • IBM discovered 20 years ago that projects that focused on attaining the shortest schedules had high frequencies of cost and schedule overruns;
  • Projects that focused on achieving high quality had the best schedules and the highest productivities.

Symbian’s experience over many years bears out the same conclusion. The more we’ve focused on achieving high quality, the better we’ve become with both schedule management and internal developer productivity.

As for the results of the student syndrome applied to the Symbian Essay Contest:

  • 54% of the essays submitted to the competition were received in the final six hours (approximately the final 1/400 of the time available)
  • Indeed, 16% of the essays submitted were received in the final 60 minutes.

That’s an impressively asymmetric distribution! (It also means that the competition judges will have to work harder than they had been expecting, right up to the penultimate day of the contest…)
