dw2

14 January 2012

Speaking of angels – visions of a world beyond

Filed under: books, irrationality, magic, paranormal, psychology — David Wood @ 1:03 am

How open-minded are you?

  • Suppose someone you’ve never met before takes a look at the palm of your hand, and shortly afterwards tells you surprising things about yourself – for example, about private issues experienced by your family, that no one else knows about.  What would your reaction be?
  • Or consider the case of people apparently leaving their bodies, whilst near death, and travelling around the neighbourhood in an out-of-body experience, observing hidden details that could only be noticed by someone high up in the sky.  Isn’t that thought-provoking?
  • Or what about reliable, trustworthy witnesses who return from spiritualist seances reporting materialisations and apparitions that the best conjurors of the day realise they could not possibly duplicate?
  • What about a president of the United States (Abraham Lincoln) who dreamed the details of his own death, in a precognition, several weeks ahead of that dreadful event?
  • What about someone who can cause the pages of a bible in another part of the room to turn over?  Or pencils to rotate?  Or solid steel spoons to bend and break?
  • Finally, what about a dog which springs to the window, seemingly knowing in advance that its owner has set off from work to return home, and will shortly be arriving at the house?

All these phenomena, and a lot more like them, are described in Professor Richard Wiseman’s recent book, “Paranormality: Why we see what isn’t there”.

At face value, these phenomena testify to the presence of powers far beyond the present understanding of science.  They suggest the existence of some kind of angelic realm, in which information can travel telepathically, from one brain to another, and even backwards in time.

One common reaction to this kind of report is to cough in embarrassment, or make a joke, and move on to another topic.

Another reaction is to become a debunker.  Indeed, Wiseman’s book contains some splendid debunking.  I won’t spoil the fun by sharing these details here, but you can bear in mind the apparently miraculous feats demonstrated right in front of spectators’ eyes by magicians like Derren Brown or “Dynamo”.  (As noted on his website, Wiseman “started his working life as a professional magician, and is a Member of the Inner Magic Circle”.)

However, “Paranormality” goes far beyond debunking.  Although some of the apparently paranormal events do have mundane explanations, for others, the explanation is more wonderful.  These explanations reveal fascinating details about the way the human mind operates – details that have only come to be understood within recent years.

These explanations don’t involve any actual transfer of disembodied thought, or any transcendent angelic realm.  Instead, they shed light on topics such as:

  • Circumstances when the mind can become convinced that it is located outside the body
  • Ways to pick up subliminal cues, by which people “leak” information to one another via subtle movements
  • The sometimes spectacular unreliability of human memory
  • Cognitive dissonance – how people react when, on the surface, prophetic statements have proven false
  • The functioning of dreams, linked to sleep paralysis
  • Circumstances when people feel that there’s a ghostly presence
  • Purposeful movements made by the body, without the awareness of the conscious mind
  • Limitations in the mind’s concept that it has free will.

The book also retells some dramatic historical episodes.  Some of these episodes were already familiar to me, from my days doing postgraduate research in the philosophy of science, when I looked hard and long at the history of research into the paranormal.  Others were, I confess, new to me – including an account of Michael Faraday’s investigation of the mechanics behind table-turning at seances.

The book has many practical tips too:

  • How to develop the habit of “lucid dreams” (when you’re aware that you’re dreaming)
  • How to convince people that you can (apparently) read their mind and discern hidden depths of their character
  • How to distract an audience, so that they fail to notice what’s right in front of them
  • How to organise a group of people around a table, so that the table apparently starts moving of its own volition
  • How to avoid losing control of your mind in circumstances when powerful persuasive influences operate.

In other words, rather than dismissing instances of apparent paranormal occurrences as being inevitably misguided, Wiseman suggests there’s a lot to learn from them.

I expect to hear more of the same theme later today, at the “Centre for Inquiry UK” event “Beyond the Veil – a closer look at spirits, mediums and ghosts”.  This is being held at London’s Conway Hall (one of my own favourite London venues).  Richard Wiseman is one of the speakers there.  The full agenda is as follows:

10.30: Registration (tickets will be available at the door)

11.00: Spirits on the brain: Insights from psychology and neuroscience – Chris French, Professor of Psychology and Head of the Anomalistic Psychology Research Unit at Goldsmiths, University of London

12.00: ‘Is there anybody there?’ – Hayley Stevens, a ghost hunter who doesn’t hunt for ghosts, and who has been researching paranormal reports since 2005

13.00: Lunch break

13.30: Mediums at Large – Paul Zenon, a professional trickster for almost thirty years, during which period he has appeared countless times as performer, presenter and pundit on numerous TV shows

14.00: Paranormality – Richard Wiseman, Professor for the Public Understanding of Psychology at the University of Hertfordshire

15.00: You Are The Magic – Ian Rowland, writer and entertainer with an interest in various aspects of how the mind works or sometimes doesn’t, who taught FBI agents how to be persuasive, and taught Derren Brown how to read fortunes

16.00: End

Postscript: Wiseman’s book contains a number of 2D barcodes.  The book suggests that readers point their smartphones at these barcodes; the phones then open short related videos on a companion website, such as this one.  It was a pleasant surprise to be reminded of the utility of smartphones while my mind was engrossed in reflections on psychology.

1 January 2012

Planning for optimal ‘flow’ in an uncertain world

Filed under: Agile, books, critical chain, flow, lean, predictability — David Wood @ 1:44 pm

In a world with enormous uncertainty, what is the best planning methodology?

I’ve long been sceptical about elaborate planning – hence my enthusiasm for what’s often called ‘agile’ and ‘lean’ development processes.  Indeed, I devoted a significant chunk of my book “Symbian for software leaders – principles of successful smartphone development projects” to comparing and contrasting the “plan is king” approach with an agile approach.

But the passage of time accumulates deeper insight.  Key thinkers in this field now refer to “second generation lean product development”.  Perhaps paramount among these thinkers is the veteran analyst of best practice in new product development, Donald Reinertsen.  I’ve been influenced by his ideas more than once in my career already:

  • In the early 1990s, while I was a software engineering manager at Psion, my boss at the time recommended I read Reinertsen’s “Developing Products in Half the Time”. It was great advice!
  • In the early 2000s, while I was EVP at Symbian, I remember enjoying insights from Reinertsen’s “Managing the Design Factory”.

I was recently pleased to discover Reinertsen has put pen to paper again.  The result is “The Principles of Product Development Flow: Second Generation Lean Product Development”.

The following Amazon.com review of the book, by Maurice Hagar, persuaded me to purchase it:

This new standard on lean product and software development challenges orthodox thinking on every side and is required reading. It’s fairly technical and not an easy read but well worth the effort.

For the traditionalist, add to cart if you want to learn:

  • Why prioritizing work “on the basis of project profitability measures like return on investment (ROI)” is a mistake
  • Why we should manage queues instead of timelines
  • Why “trying to estimate the amount of work in queue” is a waste of time
  • Why our focus on efficiency, capacity utilization, and preventing and correcting deviations from the plan “are fundamentally wrong”
  • Why “systematic top-down design of the entire system” is risky
  • Why bottom-up estimating is flawed
  • Why reducing defects may be costing us money
  • Why we should “watch the work product, not the worker”
  • Why rewarding specialization is a bad idea
  • Why centralizing control in project management offices and information systems is dangerous
  • Why a bad decision made rapidly “is far better” than the right decision made late and “one of the biggest mistakes a leader could make is to stifle initiative”
  • Why communicating failures is more important than communicating successes

For the Agilist, add to cart if you want to learn:

  • Why command-and-control is essential to prevent misalignment, local optimization, chaos, even disaster
  • Why traditional conformance to a plan and strong change control and risk management is sometimes preferable to adaptive management
  • Why the economies of scale from centralized, shared resources are sometimes preferable to dedicated teams
  • Why clear roles and boundaries are sometimes preferable to swarming “the way five-year-olds approach soccer”
  • Why predictable behavior is more important than shared values for building trust and teamwork
  • Why even professionals should have synchronized coffee breaks…

Even in the first few pages, I’ve found some cracking good quotes.

Here’s one on economics and “the cost of late changes”:

Our central premise is that we do product development to make money.  This economic goal permits us to use economic thinking and allows us to see many issues with a fresh point of view.  It illuminates the grave problems with the current orthodoxy.

The current orthodoxy does not focus on understanding deeper economic relationships.  Instead, it is, at best, based on observing correlations between pairs of proxy variables.  For example, it observes that late design changes have higher costs than early design changes, and prescribes front-loading problem solving.  This ignores the fact that late changes can also create enormous economic value.  The economic effect of a late change can only be evaluated by considering its complete economic impact.
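To make that concrete, here’s a back-of-the-envelope sketch in Python (the figures are entirely invented for illustration; Reinertsen’s point is the structure of the calculation, not any particular numbers):

```python
# Back-of-the-envelope economics of a late design change.
# All figures are invented for illustration.

cost_of_change  = 120_000  # late engineering rework ($)
value_of_change = 400_000  # extra lifecycle profit the change enables ($)
delay_weeks     = 2        # schedule slip caused by the change
cost_of_delay   = 30_000   # profit lost per week of slip ($/week)

net_impact = value_of_change - cost_of_change - delay_weeks * cost_of_delay
print(f"net economic impact: ${net_impact:,}")  # positive => make the change
```

A proxy rule such as “minimise late changes” would reject this change out of hand; evaluating its complete economic impact accepts it.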

And on “worship of conformance”:

In addition to deeply misunderstanding variability, today’s product developers have deep-rooted misconceptions on how to react to this variability.  They believe that they should always strive to make actual performance conform to the original plan.  They assume that the benefit of correcting a deviation from the plan will always exceed the cost of doing so.  This places completely unwarranted trust in the original plan, and it blocks companies from exploiting emergent opportunities.  Such behaviour makes no economic sense.

We live in an uncertain world.  We must recognise that our original plan was based on noisy data, viewed from a long time-horizon…  Emergent information completely changes the economics of our original choice.  In such cases, blindly insisting on conformance to the original plan destroys economic value.

To manage product development effectively, we must recognise that valuable new information is constantly arriving throughout the development cycle.  Rather than remaining frozen in time, locked to the original plan, we must learn to make good economic choices using this emerging information.

Conformance to the original plan has become another obstacle blocking our ability to make good economic choices.  Once again, we have a case of a proxy variable, conformance, obscuring the real issue, which is making good economic decisions…

Next, on flow control and the sequencing of tasks:

We are interested in finding economically optimum sequences for tasks.  Current practices use fairly crude approaches to sequencing.

For example, current practice suggests that if subsystem B depends on subsystem A, it would be better to sequence the design of A first.  This logic optimises efficiency as a proxy variable.  When we consider overall economics, as we do in this book, we often reach different conclusions.  For example, it may be better to develop both A and B simultaneously, despite the risk of inefficient rework, because parallel development can save cycle time.

In this book, our model for flow control will not be manufacturing systems, since these systems primarily deal with predictable and homogeneous flows.  Instead, we will look at lessons that can be learned from telecommunications networks and computer operating systems.  Both of these domains have decades of experience dealing with non-homogeneous and highly variable flows.
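The sequencing example above lends itself to the same style of calculation.  Here’s a minimal sketch (again with invented numbers, not figures from the book) comparing expected cycle time for sequential versus parallel development of the two subsystems:

```python
# Expected cycle time: sequential vs parallel design of dependent subsystems.
# All durations, probabilities, and costs are invented for illustration.

dur_a = 10.0          # weeks to design subsystem A
dur_b = 8.0           # weeks to design subsystem B
p_rework = 0.3        # chance that A's final design forces rework on B
rework_b = 4.0        # weeks of rework on B in that case
cost_of_delay = 50.0  # value lost per week of cycle time ($k/week)

seq_cycle = dur_a + dur_b                            # B waits for A: no rework risk
par_cycle = max(dur_a, dur_b) + p_rework * rework_b  # overlap, plus expected rework

print(f"sequential: {seq_cycle:.1f} weeks")
print(f"parallel:   {par_cycle:.1f} weeks (expected)")
print(f"expected value of going parallel: {(seq_cycle - par_cycle) * cost_of_delay:.0f} $k")
```

Even with a 30% chance of four weeks of rework, the parallel plan wins on expected cycle time – illustrating why optimising efficiency as a proxy variable can point the wrong way.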

Finally, on fast feedback:

Developers rely on feedback to influence subsequent choices.  Or, at least, they should.  Unfortunately, our current orthodoxy views feedback as an element of an undesirable rework loop.  It asserts that we should prevent the need for rework by having engineers design things right the first time.

We will present a radically different view, suggesting that feedback is what permits us to operate our product development process effectively in a very noisy environment.  Feedback allows us to efficiently adapt to unpredictability.

To be clear, Reinertsen’s book doesn’t just point out issues with what he calls “current practice” or “orthodoxy”.  He also points out shortcomings in various first generation lean models, such as Eliyahu Goldratt’s “Critical Chain” methodology (an application of Goldratt’s Theory of Constraints), and kanban.  For example, in discussing the minimisation of Work In Process (WIP) inventory, Reinertsen says the following:

WIP constraints are a powerful way to gain control over cycle time in the presence of variability.  This is particularly important where variability accumulates, such as in product development…

We will discuss two common methods of constraining WIP: the kanban system and Goldratt’s Theory of Constraints.  These methods are relatively static.  We will also examine how telecommunications networks use WIP constraints in a much more dynamic way.  Once again, telecommunications networks are interesting to us as product developers, because they deal successfully with inherently high variability.

Hopefully that’s a good set of tasters for what will follow!

30 December 2011

2012 resolution resolution

Filed under: books, psychology — David Wood @ 6:46 pm

It’s the season for new year’s resolutions.  But before composing a new year’s resolution list, some questions:

  • How important is resolve?
  • Should we prioritise self-control?
  • Does willpower matter?

In their recent book “Willpower – rediscovering the greatest human strength”, pioneering psychology researcher Roy F. Baumeister and New York Times science writer John Tierney have a great many positive things to say about willpower and self-control.  Their analysis provides a timely counterbalance in a world that is generally suspicious of thrift and self-denial, and that tends, instead, to value “self-esteem”, “anything goes”, and “if it feels good, do it”.

I consider this to be a very practical book, on a topic that has been overlooked for too long.

Early in the book, the authors provide this summary of recent changed opinions within social science research:

Baumeister and his colleagues around the world have found that improving willpower is the surest way to a better life.

They’ve come to realise that most major problems, personal and societal, centre on failure of self-control: compulsive spending and borrowing, impulsive violence, underachievement in school, procrastination at work, alcohol and drug abuse, unhealthy diet, lack of exercise, chronic anxiety, explosive anger.

Poor self-control correlates with just about every kind of individual trauma: losing friends, being fired, getting divorced, winding up in prison.  It can cost you the US Open, as Serena Williams’s tantrum in 2009 demonstrated; it can destroy your career, as adulterous politicians keep discovering.  It contributed to the epidemic of risky loans and investments that devastated the financial system, and to the shaky prospects for so many people who failed (along with their political leaders) to set aside enough money for their old age…

People feel overwhelmed because there are more temptations than ever.  Your body may have dutifully reported to work on time, but your mind can escape at any instant through the click of a mouse or a phone.  You can put off any job by checking email or Facebook, surfing gossip sites, or playing a video game…  You can do enough damage in a ten-minute online shopping spree to wreck your budget for the rest of the year.  Temptations never cease…

The book contains very interesting reports of how well-known people nurtured stronger willpower – such as the magician and “endurance artist” David Blaine, the 19th century explorer Henry Morton Stanley, personal effectiveness pioneer Benjamin Franklin, and recovering alcoholics such as guitarist Eric Clapton.  It also summarises the results of numerous psychology experiments.  There’s lots of practical advice:

  1. Willpower gets depleted over time; however, supplies of willpower can be replenished by food and rest
  2. Self-control exercised in one region of our life (e.g. to resist eating tempting food) depletes the immediate store of willpower we have for other regions of our life (e.g. not to lose our temper); we don’t have separate supplies of different kinds of willpower
  3. The same observation has a positive side to it too: exercising willpower in some areas of life, and building greater stamina there (over time) – for example, through sustained piano practice, or a discipline of meditation or prayer – typically builds better willpower in other areas too
  4. Temporary reserves of willpower can be reinstated by eating foods that provide a quick release of sugar – though a more sustainable longer term approach is to eat healthily on a regular basis
  5. Willpower can also be augmented when we have better feedback on what we are doing – for example, when we see ourselves in a mirror, or when we record aspects of our health daily (such as our weight), or when a trusted friend or colleague is aware of our goals and discusses our progress with us
  6. Willpower can also be augmented when we see our efforts as fitting into a larger framework or community, which can be seen as a “higher power” – such as a religious, political, or humanitarian cause
  7. The best use of willpower is to design our lives to minimise the impact of potential distractions and temptations.  This includes the above advice on healthy eating and adequate rest, as well as having a less cluttered life.

To elaborate the final point, here’s a summary of some research described in the final chapter of the book:

Researchers were surprised to find that people with strong self-control spent less time resisting desires than other people did…  Self-control is supposedly for resisting desires, so why are the people who have more self-control not using it more often…?

But then an explanation emerged.  These people have less need to use willpower because they’re beset by fewer temptations and inner conflicts.  They’re better at arranging their lives so that they avoid problem situations…

People with good self-control mainly use it not for rescue in emergencies but rather to develop effective habits and routines in school and in work…  They use their self-control not to get through crises but to avoid them.  They give themselves enough time to finish a project; they take the car to the shop before it breaks down; they stay away from all-you-can-eat buffets.  They play offense instead of defence…

The advice on having a less cluttered life applies to the set of goals we set ourselves.  Baumeister and Tierney are not keen on lengthy lists of new year’s resolutions.  Worrying about goal number 4 on the list, for example, is likely to limit our ability to concentrate on goal number 2 on the list:

The first step in self-control is to set a clear goal.  Self-control without goals or other standards would be nothing more than aimless changes, like trying to diet without any idea of which foods are fattening.

For most of us, though, the problem is not a lack of goals but rather too many of them.  We make daily to-do lists that couldn’t be accomplished even if there were no interruptions during the day, which there always are.  By the time the weekend arrives, there are more unfinished tasks than ever, but we keep deferring them and expecting to get through them with miraculous speed.  That’s why, as productivity experts have found, an executive’s daily to-do list for Monday often contains more work than could be done the entire week.

Worse, there are often latent conflicts between different goals.  With too many goals:

  • People worry too much – the more competing demands someone faces, the more time they spend contemplating these demands
  • People get less done – they replace action with rumination
  • People’s health suffers, physically as well as mentally; they pay a high price for too much brooding.

For this reason, even before I get to my own list of new year’s resolutions, I know that the underlying principle is going to be:

  • Do less, in order to make a better job of the things that matter most.

That’s my 2012 “resolution resolution”.

27 July 2011

Eclectic guidance for big life choices

Filed under: books, challenge, Economics, evolution, leadership, market failure, psychology, risks, strategy — David Wood @ 10:34 pm

“If you’re too busy to write your normal blog posts, at least tell us what books you’ve liked reading recently.”

That’s a request I’ve heard in several forms over the last month or so, as I’ve been travelling widely on work-related assignments.  On these travels, I’ve met several people who were kind enough to mention that they enjoyed reading my blog posts – especially those postings recommending books to read.

In response to this suggestion, let me highlight four excellent books that I’ve read recently, which have each struck me as having something profound to say on the Big Topic of how to make major life choices.

Adapt: Why Success Always Starts with Failure, by Tim Harford

Adapt: Why Success Always Starts with Failure draws out all sorts of surprising “aha!” connections between different areas of life, work, and society.  The analysis ranges across the wars in Iraq, the comparative strengths and weaknesses of Soviet-style centrally planned economies, the unorthodox way the development of the Spitfire fighter airplane was funded, the “Innovator’s Dilemma” whereby one-time successful companies are often blindsided by emerging new technologies, different approaches to measuring the effectiveness of charitable aid donations, the risk of inadvertently encouraging perverse behaviours when setting grand overriding incentives, the overbearing complexity of modern technology, the causes of the great financial crash of 2008-2009, reasons why safety systems break down, approaches to tackling climate change, and the judicious use of prizes to encourage successful breakthrough innovation.  Yes, this is a real intellectual roller-coaster, with some unexpected twists along the way – revelations that had me mouthing “wow, wow” under my breath.

And as well as heroes, there are villains.  (Donald Rumsfeld comes out particularly badly in these pages – even though he’s clearly in some ways a very bright person.  That’s an awful warning to the others among us who rejoice in above-average IQs.)

The author, Tim Harford, is an economist, but this book is grounded in observations about Darwinian evolution.  Three pieces of advice pervade the analysis – advice that Harford dubs “Palchinsky Principles”, in honour of Peter Palchinsky, a Russian mining engineer who was incarcerated and executed by Stalin’s government in 1929 after many years of dissent against the human cost of the Soviet top-down command and control approach to industrialisation.  These principles are designed to encourage stronger innovation, better leadership, and more effective policies, in the face of complexity and unknowns.  The principles can be summarised as follows:

  1. Variation – seek out new ideas and try new things
  2. Survivability – when trying something new, do it on a scale where failure is survivable
  3. Selection – seek out feedback and learn from mistakes as you go along, avoiding an instinctive reaction of denial.
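
For readers with a programming bent, the three principles map neatly onto an evolutionary search loop.  Here’s a minimal sketch of that reading (entirely my own illustration, with an invented, noisy payoff function; it is not an example from Harford’s book):

```python
# The Palchinsky principles read as an evolutionary search loop.
# Toy illustration with an invented, noisy payoff; not from Harford's book.
import random

rng = random.Random(0)

def payoff(idea):
    """A landscape we can only learn about by experimenting (noisy feedback)."""
    return -(idea - 3.0) ** 2 + rng.gauss(0, 0.5)

best_idea, best_score = 0.0, payoff(0.0)
budget = 100.0                                 # total resources for experiments
for _ in range(50):
    candidate = best_idea + rng.gauss(0, 1.0)  # 1. variation: try something new
    stake = min(2.0, 0.05 * budget)            # 2. survivability: small bets only
    budget -= stake                            #    every experiment costs its stake
    score = payoff(candidate)
    if score > best_score:                     # 3. selection: keep what feedback favours
        best_idea, best_score = candidate, score
        budget += 3.0 * stake                  #    successes more than repay the bet
    if budget <= 0:
        break

print(f"best idea found: {best_idea:.2f} (true optimum is 3.0)")
```

The key line is the survivability constraint: no single experiment is allowed to stake more than a small fraction of the remaining budget, so the search survives its inevitable failures.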

Harford illustrates these principles again and again, in the context of the weighty topics already listed, including major personal life choices as well as choices for national economies and international relations.  The illustrations are full of eye-openers.  The book’s subtitle is a succinct summary: “success always starts with failure”.  The notion that it’s always possible to “get it right the first time” is a profound obstacle to surviving the major crises that lie ahead of us.  We all need a greater degree of openness to smart experimentation and unexpected feedback.

The Moral Landscape: How Science Can Determine Human Values, by Sam Harris

That thought provides a strong link to the second book I wish to mention: The Moral Landscape: How Science Can Determine Human Values.  It’s written by Sam Harris, who I first came to respect when I devoured his barnstorming The End of Faith: Religion, Terror, and the Future of Reason a few years ago.

In some ways, the newer book is even more audacious.  It considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion.  The correct answers are revealed via prophets and/or holy books.  The answers are already clear, to those with the eye of faith.  It is a divine being that tells us, directly or indirectly, the difference between good and evil.  There’s no need for experimental investigations here.

A second response is that the main field to study these questions is that of philosophy.  It is by reason that we can determine the difference between good and evil.

But Sam Harris, instead, primarily advocates the use of the scientific method.  Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being.  Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil.  It’s no defence of an action that it makes sense within an archaic, pre-scientific view of the world – a view in which misfortunes are often caused by witches’ spells, angry demons, or spiteful disembodied minds.

Here, “science” means more than the findings of any one branch of science, whether that is physics, biology, psychology, or sociology.  Instead, it is the general disciplined outlook on life that seeks to determine objective facts and connections, and which is open to making hypotheses, gathering data in support of these hypotheses, and refining hypotheses in the light of experimental findings.  As science finds out more about the causes of human well-being in a wide variety of circumstances, we can speak with greater confidence about matters which, formerly, caused people to defer to either religion or philosophy.

Unsurprisingly, the book has stirred up a hornets’ nest of criticism.  Harris addresses most of these criticisms inside the book itself (which suggests that many reviewers were failing to pay attention) and picks up the discussion again on his blog. He summarises his view as follows:

Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena… fully constrained by the laws of Nature (whatever these turn out to be in the end). Therefore, there must be right and wrong answers to questions of morality and values that potentially fall within the purview of science. On this view, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.

As Harris makes clear, this is far from being an abstract, other-worldly discussion.  Cultures are clashing all the time, with lots of dramatic consequences for human well-being.  Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)?  And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections (investigations such as those proposed, indeed,  by Tim Harford in “Adapt”)?  In the light of these questions, here are some arguments that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics, imperfect though it is, survives in an as-yet-incomplete state.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.”  (I especially recommend Harris’s excoriating demolition of the spurious arguments given by Francis Collins in his surprisingly widely respected book “The Language of God: A Scientist Presents Evidence for Belief”.)

Mindsight: The New Science of Personal Transformation, by Daniel Siegel

The next book on my list serves as a vivid practical illustration of the kind of scientifically-informed insight that Harris talks about – new insight about connections between the brain and mental well-being.  Mindsight: The New Science of Personal Transformation contains numerous case histories of people who:

  • Started off lacking one or more elements of mental well-being
  • Became a patient of the author, Dr Daniel Siegel – a Harvard-trained physician
  • Followed one or other program of mindfulness – awareness and monitoring of the patterns of energy and information flowing in the brain
  • Became more integrated and fulfilled as a result.

To quote from the book’s website:

“Mindsight” [is] the potent skill that is the basis for both emotional and social intelligence. Mindsight allows you to make positive changes in your brain–and in your life.

  • Is there a memory that torments you, or an irrational fear you can’t shake?
  • Do you sometimes become unreasonably angry or upset and find it hard to calm down?
  • Do you ever wonder why you can’t stop behaving the way you do, no matter how hard you try?
  • Are you and your child (or parent, partner, or boss) locked in a seemingly inevitable pattern of conflict?

What if you could escape traps like these and live a fuller, richer, happier life?  This isn’t mere speculation but the result of twenty-five years of careful hands-on clinical work by Daniel J. Siegel, M.D… one of the revolutionary global innovators in the integration of brain science into the practice of psychotherapy. Using case histories from his practice, he shows how, by following the proper steps, nearly everyone can learn how to focus their attention on the internal world of the mind in a way that will literally change the wiring and architecture of their brain.

Siegel is, of course, aware that drugs can often play a role in addressing mental issues.  However, his preference in many cases is for patients to learn and practice various skills in mental introspection.  His belief – which he backs up by reference to contemporary scientific findings – is that practices such as meditation can change the physical structure of the brain in significant ways.  (And there are times when it can relieve recurring back pain too, as in one of the case histories covered.)

Siegel defines the mind as “an embodied and relational process that regulates the flow of energy and information”.  He goes on to say:

So how would you regulate the mind?  By developing the ability to see mental activity with more clarity and then modify it with more effectiveness… there’s something about being able to see and influence your internal world that creates more health.

Out of the many books on psychotherapy that I’ve read over the years, this is one of the very best.  The case studies are described in sufficient depth to make them absorbing.  They’re varied, as well as unpredictable.  The neuroscience in the book is no doubt simplified at times, but gels well with what I’ve picked up elsewhere.  And the repeated emphasis on “integration” provides a powerful unifying theme:

[Integration is] a process by which separate elements are linked together into a working whole…  For example, integration is at the heart of how we connect to one another in healthy ways, honoring one another’s differences while keeping our lines of communication wide open. Linking separate entities to one another—integration—is also important for releasing the creativity that emerges when the left and right sides of the brain are functioning together.

Integration enables us to be flexible and free; the lack of such connections promotes a life that is either rigid or chaotic, stuck and dull on the one hand or explosive and unpredictable on the other. With the connecting freedom of integration comes a sense of vitality and the ease of well-being. Without integration we can become imprisoned in behavioral ruts—anxiety and depression, greed, obsession, and addiction.

By acquiring mindsight skills, we can alter the way the mind functions and move our lives toward integration, away from these extremes of chaos or rigidity. With mindsight we are able to focus our mind in ways that literally integrate the brain and move it toward resilience and health.

The sections in the book on meditation are particularly interesting.  As Siegel has become aware, the techniques he recommends have considerable alignment with venerable practices from various eastern traditions – such as the practice of “mindfulness”.  However, the attraction of these techniques isn’t that they are venerable.  It is that there’s a credible scientific explanation of why they work – an explanation that is bolstered by contemporary clinical experience.

Good Strategy Bad Strategy: The Difference and Why It Matters, by Richard Rumelt

From a great book on psychotherapy, let me finish by turning to a great book on strategy – perhaps the best book on strategy that I’ve ever read: Good Strategy Bad Strategy: The Difference and Why It Matters.  The author, Richard Rumelt, Professor of Business and Society at UCLA Anderson School of Management, is a veteran analyst of strategy, who gained his first degree as long ago as 1963 (in Electrical Engineering from the University of California, Berkeley).  He speaks with an accumulated lifetime of wisdom, having observed countless incidents of both “bad strategy” and “good strategy” over five decades of active participation in industry.

“Strategy” is the word companies often use when justifying their longer term actions.  They do various things, they say, in pursuit of their strategic objectives.  Here, “strategy” goes beyond “business case”.  Strategy is a reason for choosing between different possible business cases – and can provide reasons for undertaking projects even in the absence of a strong business case.  By the way, it’s not just companies that talk about strategy.  Countries can have strategies too, as can departments within governments.  And the same applies to individuals: someone’s personal strategy can be an explicit reason for them choosing between different possible courses of action.

It’s therefore a far from ideal situation that much of what people think of as a strategy is instead, in Rumelt’s words, “fluff” or “wishful thinking”:

It’s easy to tell a bad [strategy] from a good one. A bad one is full of fluff: fancy language covering up the lack of content. Enron’s so-called strategy was littered with meaningless buzzwords explaining its aim to evolve to a state of “sophisticated value extraction”. But in reality its chief strategies could be summed up as having an electronic trading platform, being an over-the-counter broker and acting as an information provider. These are not strategies, they are just names, like butcher, baker and candlestick maker…

Bad strategy is long on goals and short on policy or action.  It assumes that goals are all you need.  It puts forward strategic objectives that are incoherent and, sometimes, totally impractical.  It uses high-sounding words and phrases to hide these failings…

The core of [good] strategy work is always the same: discovering the critical factors in a situation and designing a way of coordinating and focusing actions to deal with those factors…

Bad strategy tends to skip over pesky details such as problems.  It ignores the power of choice and focus, trying instead to accommodate a multitude of conflicting demands and interests.  Like a quarterback whose only advice to teammates is “Let’s win”, bad strategy covers up its failure to guide by embracing the language of broad goals, ambition, vision, and values.  Each of these elements is, of course, an important part of human life.  But, by themselves, they are not substitutes for the hard work of strategy…

If you fail to identify and analyse the obstacles, you don’t have a strategy.  Instead, you have either a stretch goal, a budget, or a list of things you wish would happen.

The mention of a specific company above – Enron – is an example of a striking pattern Rumelt follows throughout his book: he names guilty parties.  Other “guilty parties” identified in the midst of fascinating narratives include CEOs of Lehman Brothers, International Harvester, Ford Motor Company, DEC, Telecom Italia, and metal box manufacturer Crown Cork & Seal.

Individuals who are highlighted, in contrast, as examples of good strategy include titans from military history – General Norman Schwarzkopf, Admiral Nelson, Hannibal, and Hebrew shepherd boy David (in his confrontation with Goliath) – as well as industry figures such as Sam Walton, Steve Jobs, Intel’s Andy Grove, IBM’s Lou Gerstner, and a range of senior managers at Cisco.  The tales recounted are in many ways already well known, but in each case Rumelt draws out surprising insight.  (Rumelt’s extended account of Hannibal’s victory over the Roman army at Cannae in 216 BC reveals many unexpected implications.)

Why do so many companies, government departments, and individuals have “bad strategy”?  Rumelt identifies four underlying reasons:

  • A psychological unwillingness or inability to make choices (this can be linked with an organisation being too decentralised)
  • A growing tide of “template style” strategic planning, which gives too much attention to vision, mission, and values, rather than to hard analysis of a company’s situation
  • An over-emphasis on charismatic qualities in leaders
  • The superficially appealing “positive thinking” movement.

Rumelt’s treatment of “positive thinking” is particularly illuminating – especially for a reader like me who harbours many sympathies for the idea that it’s important to maintain a positive, upbeat attitude.  Rumelt traces the evolution of this idea over more than a century:

This fascination with positive thinking, and its deep connection to inspirational and spiritual thought, was invented around 150 years ago in New England as a mutation of Protestant Christian individualism…

The amazing thing about [the ideology of positive thinking] is that it is always presented as if it were new!  And no matter how many times the same ideas are repeated, they are received by many listeners with fresh nods of affirmation.  These ritual recitations obviously tap into a deep human capacity to believe that intensely focused desire is magically rewarded…

I do not know whether meditation and other inward journeys perfect the human soul.  But I do know that believing … that by thinking only of success you can become a success, is a form of psychosis and cannot be recommended as an approach to management or strategy.  All [good] analysis starts with the consideration of what might happen, including unwelcome events.  I would not care to fly in an aircraft designed by people who focused only on an image of a flying machine and never considered modes of failure…

The doctrine that one can impose one’s visions and desires on the world by thought alone retains a powerful appeal to many people.  Its acceptance displaces critical thinking and good strategy.

As well as pointing out flaws in bad strategy, Rumelt provides wide-ranging clear advice on what good strategy contains:

A good strategy works by harnessing power and applying it where it will have the greatest effect.  In the short term, this may mean attacking a problem or rival with adroit combinations of policy, actions, and resources.  In the longer term, it may involve cleverly using policies or resource commitments to develop capabilities that will be of value in future contests.  In either case, a “good strategy” is an approach that magnifies the effectiveness of actions by finding and using sources of power…

Strategic leverage arises from a mixture of anticipation, insight into what is most pivotal or critical in a situation, and making a concentrated application of effort…

A much more effective way to compete is the discovery of hidden power in the situation.

Later chapters amplify these ideas by providing many illuminating suggestions for how to build an effective strategy.  Topics covered include proximate objectives, chain-link systems, design, focus (“pivot points”), competitive advantage, anticipation and exploitation of industry trends (“dynamics”), and inertia and entropy.  Here are just a few illustrative snippets from these later chapters:

In building sustained strategic advantage, talented leaders seek to create constellations of activities that are chain-linked.  This adds extra effectiveness to the strategy and makes competitive imitation difficult…

Many effective strategies are more designs than decisions – are more constructed than chosen…

When faced with a corporate success story, many people ask, “How much of the success was skill and how much was luck?”  The saga of Cisco Systems vividly illustrates that the mix of forces is richer than just skill and luck.  Absent the powerful waves of change sweeping through computing and telecommunications, Cisco would have remained a small niche player.  Cisco’s managers and technologists were very skillful at identifying and exploiting these waves of change…

An organisation’s greatest challenge may not be external threats or opportunities, but instead the effects of entropy and inertia.  In such a situation, organisational renewal becomes a priority.  Transforming a complex organisation is an intensely strategic challenge.  Leaders must diagnose the causes and effects of entropy and inertia, create a sensible guiding policy for effecting change, and design a set of coherent actions designed to alter routines, culture, and the structure of power and influence.

You can read more on the book’s website.

The book is addressed to people working within organisations, with responsibility for strategy in these organisations.  However, most of the advice is highly valid for individuals too.  Are the big personal goals we set ourselves merely “wishful thinking”, or are they grounded in a real analysis of our own personal situation?  Do they properly take account of our personal trends, inertia, entropy, and sources of competitive power?

7 May 2011

Workers beware: the robots are coming

Filed under: books, challenge, disruption, Economics, futurist, robots — David Wood @ 9:07 pm

What’s your reaction to the suggestion that, at some stage in the next 10-30 years, you will lose your job to a robot?

Here, by the word “robot”, I’m using shorthand for “automation” – a mixture of improvements in hardware and software. The suggestion is that automation will continue to improve until it reaches the stage when it is cheaper for your employer to use computers and/or robots to do your job, than it is to continue employing you. This change has happened in the past with all manner of manual and/or repetitive work. Could it happen to you?

People typically have one of three reactions to this suggestion:

  1. “My job is too complex, too difficult, too human-intense, etc, for a robot to be able to do it in the foreseeable future. I don’t need to worry.”
  2. “My present job may indeed be outsourced to robots, but over the same time period, new kinds of job will be created, and I’ll be able to do one of these instead. I don’t need to worry.”
  3. “When the time comes that robots can do all the kinds of work that I can do, better than me, we’ll be living in an economy of plenty. I won’t actually need to work – I’ll be happy to enjoy lots more leisure time. I don’t need to worry.”

Don’t need to worry? Think again. That’s effectively the message in Martin Ford’s 2009 book “The lights in the tunnel”. (If you haven’t heard of that book, perhaps it’s because the title is a touch obscure. After all, who wants to read about “lights in a tunnel”?)

The subtitle gives a better flavour of the content: “Automation, accelerating technology, and the economy of the future”. And right at the top of the front cover, there’s yet another subtitle: “A journey to the economic landscape of the coming decades”. But neither of these subtitles conveys the challenge which the book actually addresses. This is a book that points out real problems with increasing automation:

  • Automation will cause increasing numbers of people to lose their current jobs
  • Accelerating automation will mean that robots can quickly become able to do more jobs – their ability to improve and learn will far outpace that of human workers – so the proportion of people who are unemployed will grow and grow
  • Without proper employment, a large proportion of consumers will be deprived of income, and will therefore lack the spending power which is necessary for the continuing vibrancy of the economy
  • Even as technology improves, the economy will stagnate, with disastrous consequences
  • This is likely to happen long before technologies such as nanotech have reached their full potential – so that any ideas of us existing at that time in an economy of plenty are flawed.

Although the author could have chosen a better title for his book, the contents are well argued, and easy to read. They deserve a much wider hearing.  They underscore the important theme that the process of ongoing technological improvement is far from being an inevitable positive.

There are essentially two core threads to the book:

  • A statement of the problem – this effectively highlights issues with each of the reactions 1-3 listed earlier;
  • Some tentative ideas for a possible solution.

The book looks backwards in history, as well as forwards to the future. For example, it includes interesting short commentaries on both Marx and Keynes. One of the most significant backward glances considers the case of the Luddites – the early 19th century manufacturing workers in the UK who feared that their livelihoods would be displaced by factory automation. Doesn’t history show us that such fears are groundless? Didn’t the Luddites (and their descendants) in due course find new kinds of employment? Didn’t automation create new kinds of work, at the same time as it destroyed some existing kinds of work? And won’t that continue to happen?

Well, it’s a matter of pace.  One of the most striking pictures in the book is a rough sketch of the variation over time of the comparative ability of computers and humans to perform routine jobs.

As Martin Ford explains:

I’ve chosen an arbitrary point on the graph to indicate the year 1812. After that year, we can reasonably assume that human capability continued to rise quite steeply until we reach modern times. The steep part of the graph reflects dramatic improvements to our overall living conditions in the world’s more advanced countries:

  • Vastly improved nutrition, public health, and environmental regulations have allowed us to remain relatively free from disease and reach our full biological potential
  • Investment in literacy and in primary and secondary education, as well as access to college and advanced education for some workers, has greatly increased overall capability
  • A generally richer and more varied existence, including easy access to books, media, new technologies and the ability to travel long distances, has probably had a positive impact on our ability to comprehend and deal with complex issues.

A free download of the entire book is available from the author’s website.  I’ll leave it to you to evaluate the author’s arguments for why the two curves in this sketch have the shape that they do.  To my mind, these arguments have a lot of merit.
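
For anyone who wants to play with the idea, here’s a toy rendering of the two curves (the functional forms and every parameter are my own invention; Ford presents only the qualitative shape):

```python
# Toy rendering of the two-curves sketch. Functional forms and parameters
# are invented for illustration; the book gives only the qualitative shape.
import math

def human_capability(year):
    # Saturating (logistic) growth: steep gains after 1812, flattening today.
    return 100.0 / (1.0 + math.exp(-(year - 1900) / 40.0))

def machine_capability(year, doubling_years=7):
    # Compounding growth from a tiny base in 1950 (doubling time assumed).
    return 0.001 * 2.0 ** ((year - 1950) / doubling_years)

for year in range(2010, 2101, 10):
    h, m = human_capability(year), machine_capability(year)
    note = "  <-- machines ahead" if m >= h else ""
    print(f"{year}: human {h:6.1f}   machine {m:8.1f}{note}")
```

The crossover year is acutely sensitive to the assumed doubling time, shifting by decades for small changes in it – a good reason to read the sketch qualitatively rather than as a forecast.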

The point where these two curves cross – potentially a few decades into the future – will represent a new kind of transition point for the economy – perhaps the mother of all economic disruptions.  Yes, there will still be some new jobs created.  Indeed, in a blogpost last year, “Accelerating automation and the future of work”, I listed 20 new occupations that people could be doing in the next 20 years:

  1. Body part maker
  2. Nano-medic
  3. Pharmer of genetically engineered crops and livestock
  4. Old age wellness manager/consultant
  5. Memory augmentation surgeon
  6. ‘New science’ ethicist
  7. Space pilots, tour guides and architects
  8. Vertical farmers
  9. Climate change reversal specialist
  10. Quarantine enforcer
  11. Weather modification police
  12. Virtual lawyer
  13. Avatar manager / devotees / virtual teachers
  14. Alternative vehicle developers
  15. Narrowcasters
  16. Waste data handler
  17. Virtual clutter organiser
  18. Time broker / Time bank trader
  19. Social ‘networking’ worker
  20. Personal branders

However, the lifetimes of these jobs (before they too can be handled by improved robots) will shrink and shrink.  For a less esoteric example, consider the likely fate of a relatively new profession, radiology.  As Martin Ford explains:

A radiologist is a medical doctor who specializes in interpreting images generated by various medical scanning technologies. Before the advent of modern computer technology, radiologists focused exclusively on X-rays. This has now been expanded to include all types of medical imaging, including CT scans, PET scans, mammograms, etc.

To become a radiologist you need to attend college for four years, and then medical school for another four. That is followed by another five years of internship and residency, and often even more specialized training after that. Radiology is one of the most popular specialties for newly minted doctors because it offers relatively high pay and regular work hours; radiologists generally don’t need to work weekends or handle emergencies.

In spite of the radiologist’s training requirement of at least thirteen additional years beyond high school, it is conceptually quite easy to envision this job being automated. The primary focus of the job is to analyze and evaluate visual images. Furthermore, the parameters of each image are highly defined since they are often coming directly from a computerized scanning device. Visual pattern recognition software is a rapidly developing field that has already produced significant results…

Radiology is already subject to significant offshoring to India and other places. It is a simple matter to transmit digital scans to an overseas location for analysis. Indian doctors earn as little as 10 percent of what American radiologists are paid… Automation will often come rapidly on the heels of offshoring, especially if the job focuses purely on technical analysis with little need for human interaction. Currently, U.S. demand for radiologists continues to expand because of the increase in use of diagnostic scans such as mammograms. However, this seems likely to slow as automation and offshoring advance and become bigger players in the future. The graduating medical students who are now rushing into radiology for its high pay and relative freedom from the annoyances of dealing with actual patients may eventually come to question the wisdom of their decision…

Radiologists are far from being the only “high-skill” occupation that is under risk from this trend.  Jobs which involve a high degree of “expert system” knowledge will come under threat from increasingly expert AI systems.  Jobs which involve listening to human speech will come under threat from increasingly accurate voice recognition systems.  And so on.

This leaves two questions:

  1. Can we look forward, as some singularitarians and radical futurists assert, to incorporating increasing technological smarts within our own human nature, allowing us in a sense to merge with the robots of the future?  In that case, a scenario of “the robots will take all our jobs” might change to “substantially enhanced humans will undertake new types of work”
  2. Alternatively, if robots do much more of the work needed within society, how will the transition be handled, to a society in which humans have much more leisure time?

I’ll return to the first of these questions in a subsequent blogpost.  Martin Ford’s book has a lot to say about the second of these questions.  And he recommends a series of ideas for consideration:

  • Without large numbers of well-paid consumers able to purchase goods, the global economy risks going into decline, at the same time as technology has radically improved
  • With fewer people working, there will be much less income tax available to governments.  Taxation will need to switch towards corporation tax and consumption taxes
  • With more people receiving handouts from the state, there’s a risk of losing many aspects of economic structure which have previously been thought essential
  • We need to give more thought, now, to ideas for differential state subsidy of different kinds of non-work activity – to incentivise certain kinds of activity.  That way, we’ll be ready for the increasing disturbances placed on our economy by the rise of the robots.

For further coverage of these and related ideas, see Martin Ford’s blog on the subject, http://econfuture.wordpress.com/.

28 December 2010

Some suggested books for year-end reading

Looking for suggestions on books to read, perhaps over the year-end period of reflection and resolution for renewal?

Here are my comments on five books I’ve finished over the last few months, each of which has given me a lot to think about.

Switch: How to change things when change is hard – by Chip & Dan Heath

I had two reasons for expecting I would like this book.

I was not disappointed.  The book is full of advice that seems highly practical – advice that can be used to overcome all kinds of obstacles that people encounter when trying to change something for the better.  The book helpfully lists some of these obstacles in a summary chapter near its end.  They include:

  • “People here don’t see the need for change”
  • “People resist my idea because they say, ‘We’ve never done it like that before'”
  • “We should be doing something, but we’re getting bogged down in analysis”
  • “The environment has shifted, and we need to overcome our old patterns of behaviour”
  • “People here simply aren’t motivated to change”
  • “People here keep saying ‘It will never work'”
  • “I know what I should be doing, but I’m not doing it”
  • “I’ll change tomorrow”…

Each chapter offers profound insights.  I particularly liked the observation that, from the right perspective, the steps to create a solution are often easier than the problem itself.  This is a pleasant antidote to the oft-repeated assertion that solutions need to be more profound, more complex, or more sophisticated than the problems they address.  On the contrary, change efforts frequently fail because they focus on the wrong part of the big picture.  You can try to influence either the “rider”, the “elephant”, or the “path” down which the elephant moves.  Spend your time trying to influence the wrong part of this combo, and you can waste a great deal of energy.  But get the analysis right, and even people who appear to hate change can embrace a significant transformation.  It all depends on the circumstance.

The book offers nine practical steps – three each for the three different parts of this model:

  • Direct the rider: Find the bright spots; Script the critical moves; Point to the destination
  • Motivate the elephant: Find the feeling; Shrink the change; Grow your people
  • Shape the path: Tweak the environment; Build habits; Rally the herd.

These steps may sound trite, but each of these simple phrases summarises a series of inspirational examples of real-world change.

The happiness advantage: The seven principles of positive psychology that fuel success and performance at work – by Shawn Achor

“The happiness advantage” shares with “Switch” the fact that it is rooted in the important emerging discipline of positive psychology.  But whereas “Switch” addresses the particular area of change management, “The happiness advantage” has a broader sweep.  It seeks to show how a range of recent findings from positive psychology can be usefully applied in a work setting, to boost productivity and performance.  The author, Shawn Achor, describes many of these findings in the context of the 10 years he spent at Harvard.  These findings include:

  • Rather than the model in which people work hard and then achieve success and then become happy, the causation goes the other way round: people with a happy outlook are more creative, more resilient, and more productive, are able to work both harder and smarter, and are therefore more likely to achieve success in their work (Achor compares this reversal of causation to the “Copernican revolution” which saw the sun as the centre of the solar system, rather than the earth)
  • Our character (including our degree of predisposition to a happy outlook) is not fixed, but can be changed by activity – this is an example of neural plasticity
  • “The Tetris effect”: once you train your brain to spot positive developments (things that merit genuine praise), that attitude increasingly becomes second nature, with lots of attendant benefits
  • Rather than a vibrant social support network being a distraction from our core activities, it can provide us with the enthusiasm and the community to make greater progress
  • “Falling up”: the right mental attitude can gain lots of advantage from creative responses to situations of short-term failure
  • “The Zorro circle”: rather than focussing on large changes, which could take a long time to accomplish, there’s great merit in restricting attention to a short period of time (perhaps one hour, or perhaps just five minutes), and to a small incremental improvement on the status quo.  Small improvements can accumulate a momentum of their own, and lead on to big wins!
  • Will power is limited – and is easily drained.  So, follow the “20 second rule”: take the time to rearrange your environment – such as your desk, or your office – so that the behaviour you’d like to happen is the easiest (“the default”).  When you’re running on auto-pilot, anything that requires a detour of more than 20 seconds is much less likely to happen.  (Achor gives the example of taking the batteries out of his TV remote control, to make it less likely he would sink into his sofa on returning home and inadvertently watch TV, rather than practice the guitar as he planned.  And – you guessed it – he made sure the guitar was within easy reach.)

You might worry that this is “just another book about the power of positive thinking”.  However, I see it as a definite step beyond that genre.  This is not a book that seeks to paint on a happy face, or to pretend that problems don’t exist.  As Achor says, “Happiness is not the belief that we don’t need to change.  It is the realization that we can”.

Nonsense on stilts: how to tell science from bunk – by Massimo Pigliucci

Many daft, dangerous ideas are couched in language that sounds scientific.  Being able to distinguish good science from “pseudoscience” is sometimes called the search for a “demarcation principle”.

The author of this book, evolutionary biologist Massimo Pigliucci, has strong views about the importance of distinguishing science from pseudoscience.  To set the scene, he gives disturbing examples such as people who use scientific-sounding language to deny the connection between HIV and AIDS (and who often advocate horrific, bizarre treatments for AIDS), or who frighten parents away from vaccinating their children by quoting spurious statistics about links between vaccination and autism.  This makes it clear that the subject is far from being an academic one, just for armchair philosophising.  On the other hand, attempts by philosophers of science such as Karl Popper to identify a clear, watertight demarcation principle all seem to fail.  Science is too varied an enterprise to be capable of a simple definition.  As a result, it can take lots of effort to distinguish good science from bad science.  Nevertheless, this effort is worth it.  And this book provides a sweeping, up-to-date survey of the issues that arise.

The book brought me back to my own postgraduate studies from 1982-1986.  My research at that time covered the philosophy of mind, the characterisation of pseudo-science, creationism vs. Darwinism, and the shocking implications of quantum mechanics.  All four of these areas were covered in this book – and more besides.

It’s a book with many opinions.  I think it gets them about 85% right.  I particularly liked:

  • His careful analysis of why “Intelligent Design” is bad science
  • His emphasis on how pseudoscience is intellectually infertile, producing no new predictions
  • His explanation of the problems of parapsychology (studies of extrasensory perception)
  • The challenges he lays down to various fields which appear grounded in mainstream science, but which are risking divergence away from scientific principles – fields such as superstring theory and SETI (the search for extraterrestrial intelligence).

Along the way, Pigliucci shares lots of fascinating anecdotes about the history of science, and about the history of philosophy of science.  He’s a great story-teller.

The master switch: the rise and fall of information empires – by Tim Wu

Whereas “Nonsense on stilts” surveys the history of science, and draws out lessons about the most productive ways to continue to find out deeper truths about the world, “The master switch” surveys many aspects of the modern history of business, and draws out lessons about the most productive ways to organise society so that information can be shared in the most effective way.

The author, Tim Wu, is a professor at Columbia Law School, and (if anything) is an even better story-teller than Pigliucci.  He gives riveting accounts of many of the key episodes in various information businesses, such as those based on the telephone, radio, TV, cinema, cable TV, the personal computer, and the Internet.  Lots of larger-than-life figures stride across the pages.  The accounts fit together as constituents of an over-arching narrative:

  • Control over information technologies is particularly important for the well-being of society
  • There are many arguments in favour of centralised control, which avoids wasteful inefficiencies of competition
  • Equally, there are many arguments in favour of decentralised control, with open access to the various parts of the system
  • Many information industries went through one (or more) phases of decentralised control, with numerous innovators working independently, before centralisation took place (or re-emerged)
  • Government regulation sometimes works to protect centralised infrastructure, and sometimes to ensure that adequate competition takes place
  • Opening up an industry to greater competition often introduces a period of relative chaos and increased prices for consumers, before the greater benefits of richer innovation have a chance to emerge (often in unexpected ways)
  • The Internet is by no means the first information industry for which commentators had high, idealistic hopes: similar near-utopian visions also accompanied the emergence of broadcast radio and of cable television
  • A major drawback of centralised control is that too much power is vested in just one place – in what can be called a “master switch” – allowing vested interests to drastically interfere with the flow of information.

AT&T – the company founded by Bell – features prominently in this book, both as a hero, and as a villain.  Wu describes how AT&T suppressed various breakthrough technologies (including magnetic disk recording, usable in answering machines) for many years, out of a fear that they would damage the company’s main business.  Similarly, RCA suppressed FM radio for many years, and also delayed the adoption of electronic television.  Legal delays were often a primary means to delay and frustrate competitors, whose finances lacked such deep pockets.

Wu often highlights ways in which business history could have taken different directions.  The outcome that actually transpired was often a close-run thing, compared to what seemed more likely at the time.  This emphasises the contingent nature of much of history, rather than events being inevitable.  (I know this from my own experiences at Symbian.  Recent articles in The Register emphasise how Symbian nearly died at birth, well before powering more than a quarter of a billion smartphones.  Other stories, as yet untold, could emphasise how the eventual relative decline of Symbian was by no means a foregone conclusion either.)

But the biggest implications Wu highlights are when the stories come up to date, in what he sees as a huge conflict between powers that want to control modern information technology resources, and those that prefer greater degrees of openness.  As Wu clarifies, it’s a complex landscape, but Apple’s iPhone approach aims at greater centralised design control, whereas Google’s Android approach aims at enabling a much wider number of connections – connections where many benefits arise, without the need to negotiate and maintain formal partnerships.

Compared to previous information technologies, the Internet has greater elements of decentralisation built into it.  However, the lessons of the previous chapters in “The master switch” are that even this decentralisation is vulnerable to powerful interests seizing control and changing its nature.  That gives greater poignancy to present-day debates over “network neutrality” – a term that was coined by Wu in a paper he wrote in 2002.

Sex at dawn: the prehistoric origins of modern sexuality – by Christopher Ryan and Cacilda Jetha

(Sensitive readers should probably stop reading now…)

In terms of historical sweep, this last book outdoes all the others on my list.  It traces the origins of several modern human characteristics far into prehistory – to the time before agriculture, when humans existed as nomadic hunter-gatherers, with little sense of personal exclusive ownership.

This book reminds me of this oft-told story:

It is said that when the theory of evolution was first announced it was received by the wife of the Canon of Worcester Cathedral with the remark, “Descended from the apes! My dear, we will hope it is not true. But if it is, let us pray that it may not become generally known.”

I’ve read a lot on evolution over the years, and I think the evidence husband and wife authors Christopher Ryan and Cacilda Jetha accumulate chapter after chapter, in “Sex at dawn”, is reasonably convincing – even though elements of present day “polite society” may well prefer this evidence not to become “generally known”.  The authors tell a story with many jaw-dropping episodes.

Among other things, the book systematically challenges the famous phrase from Thomas Hobbes in Leviathan that, absent a government, people would lead lives that were “solitary, poor, nasty, brutish, and short”.  On the contrary, the book marshals evidence, direct and indirect, that pre-agricultural people could enjoy relatively long lives, with ample food, and a strong sense of community.  Key to this mode of existence was “fierce sharing”, in which everyone felt a strong obligation to share food within the group … and not only food.  The X-rated claim in the book is that the sharing extended to “parallel multi-male, multi-female sexual relationships”, which bolstered powerful community identities.  Monogamy is, therefore, far from being exclusively “natural”.  Evidence in support of this conclusion includes:

  • Comparisons to behaviour in bonobos and chimps – the apes which are our closest evolutionary cousins
  • The practice in several contemporary nomadic tribes, in which children are viewed as having many fathers
  • Various human anatomical features, copulatory behaviour, aspects of sperm wars, etc.

In this analysis, human sexual nature developed under one set of circumstances for several million years, until dramatic changes in relatively recent times with the advent of agriculture, cities, and widespread exclusive ownership.  Social philosophies (including religions) have sought to change the norms of behaviour, with mixed success.

I’ll leave the last words to Ryan and Jetha, from their online FAQ:

We’re not recommending anything other than knowledge, introspection, and honesty. In fact, as we say in the book, we’re not really sure what to do with this information ourselves.

19 September 2010

Our own entrenched enemies of reason

Filed under: books, deception, evolution, intelligence, irrationality, psychology — David Wood @ 3:39 pm

I’m a pretty normal, observant guy.  If there was something as large as an elephant in that room, then I would have seen it – sure as eggs are eggs.  I don’t miss something as large as that.  So someone who says, afterwards, that there was an elephant there, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

Here’s another version of the same, faulty, line of reasoning:

I’m a pretty good police detective.  Over the years, I’ve developed the knack of knowing when people are telling the truth.  That’s what my experience has taught me.  I know when a confession is for real.  I don’t get things like that wrong.  So someone who says, afterwards, that the confession was forced, or that the criminal should get off on a technicality, must have some kind of screw loose, or some kind of twisted ulterior motivation.  Gosh, what kind of person are they?

And another:

I’m basically a moral person.  I don’t knowingly cause serious harm to my fellow human beings.  I don’t get things as badly wrong as that.  I’m not that kind of person.  So if undeniable evidence subsequently emerges that I really did seriously harm a group of people, well, these people must have deserved it.  They were part of a bad crowd.  I was actually doing society a favour.  Gosh, don’t you know, I’m one of the good guys.

Finally, consider this one:

I’m basically a savvy, intelligent person.  I don’t make major errors in reasoning.  If I take the time to investigate a religion and believe in it, I must be right.  All that investment of time and belief can’t have been wrong.  Perish the thought.  If that religion makes a prophecy – such as the end of the world on a certain date – then I must be right to believe it.  If the world subsequently appears not to have ended on that date, then it must have been our faith, and our actions, that saved the world after all.  Or maybe the world ended in an invisible, but more important way.  The kingdom of heaven has been established within. Either way, how right we were!

It can sometimes be fun to observe the self-delusions of the over-confident.  Psychologists talk about “cognitive dissonance”, when someone’s deeply held beliefs appear to be contradicted by straightforward evidence.  That person is forced to hold two incompatible viewpoints in mind at the same time: I deeply believe X, but I seem to observe not-X.  Most people are troubled by this kind of dissonance.  It’s psychologically uncomfortable.  And because it can be hard for them to give up their underlying self-belief that “If I deeply believe X, I must have good reasons to do so”, it can lead them to jump through outlandish hoops and make illogical leaps to deny the straightforward evidence.  For them, rather than “seeing is believing”, the saying becomes inverted: “believing is seeing”.

As I said, it can be fun to see the daft things people have done to resolve their cognitive dissonance in favour of maintaining belief in their own essential soundness, morality, judgement, and/or reasoning.  It can be especially entertaining to observe the mental gymnastics of people with fundamentalist religious and/or political faith, who refuse to accept plain facts that contradict their certainty.  The same goes for believers in alien abduction, for fan boys of particular mobile operating systems, and for lots more besides.

But this can also be a deadly serious topic:

  • It can result in wrongful imprisonments, with the prosecutors unwilling to face up to the idea that their over-confidence was misplaced.  As a result, people spend many years of their life unjustly incarcerated.
  • It can result in families being shattered under the pressures of false “repressed memories” of childhood abuse, seemingly “recovered” by hypnotists and subsequently passionately believed by the apparent victims.
  • It can split up previously happy couples, who end up being besotted, not with each other, but with dreadful ideas about each other (even though “there’s always two sides to a story”).
  • Perhaps worst of all, it can result in generations-long feuds and wars – such as the disastrous entrenched enmity of the Middle East – with each side staunchly holding onto the view “we’re the good guys, and anything we did to these other guys was justified”.

Above, I’ve retold some of the thoughts that occurred to me as I recently listened to the book “Mistakes Were Made (But Not by Me): Why We Justify Foolish Beliefs, Bad Decisions, and Hurtful Acts”, by veteran social psychologists Carol Tavris and Elliot Aronson.  (See here for this book’s website.)  At first, I found the book to be a very pleasant intellectual voyage.  It described, time and again, experimental research that should undermine anyone’s over-confidence about their abilities to observe, remember, and reason.  (I’ll come back to that research in a moment).  It reviewed real-life examples of cognitive dissonance – both personal examples and well-known historical examples.  So far, so good.  But later chapters made me more and more serious – and, frankly, more and more angry – as they explored horrific examples of miscarriages of justice (the miscarriage being subsequently demonstrated by the likes of DNA evidence), family breakups, and escalating conflicts and internecine violence.  All of this stemmed from faulty reasoning, brought on by self-justification (I’m not the kind of person who could make that kind of mistake) and by over-confidence in our own thinking skills.

Some of the same ground is covered in another recent book, “The invisible gorilla – and other ways our intuition deceives us”, by Christopher Chabris and Daniel Simons.  (See here for the website accompanying this book.)  The gorilla in the title refers to the celebrated experiment where viewers are asked to concentrate on one set of activity – counting the number of passes made by a group of basketball players – and often totally fail to notice someone in a gorilla suit wandering through the crowd of players.  Gorilla?  What gorilla?  Don’t be stupid!  If there had been a gorilla there, I would have seen it, sure as eggs are eggs.

Chapter by chapter, “The invisible gorilla” reviews evidence that we tend to be over-confident in our own abilities to observe, remember, and reason.  The chapters cover:

  • Our bias to think we would surely observe anything large and important that happened
  • Our bias to think our memories are reliable
  • Our bias to think that people who express themselves confidently are more likely to be trustworthy
  • Our bias to think that we would give equal weight to evidence that contradicts our beliefs, as to evidence that supports our beliefs (the reality is that we search high and low for confirming evidence, and quickly jump to reasons to justify ignoring disconfirming evidence)
  • Our bias to think that correlation implies causation: that if event A is often followed by event B, then A will be the cause of B
  • Our bias to think there are quick fixes that will allow significant improvements in our thinking power – such as playing classical music to babies (an effect that has been systematically discredited)
  • Our bias to think we can do many things simultaneously (“multi-task”) without any individual task being affected detrimentally.

These biases were probably all useful to Homo sapiens at an early phase of our evolutionary history.  But in the complex society of the present day, these biases do us more harm than good.

Added together, the two books provide sobering material about our cognitive biases, and about the damage that all too often follows from us being unaware of these biases.

“Mistakes were made (but not by me)” adds the further insight that we tend to descend gradually into a state of gross over-confidence.  The book frequently refers to the metaphor of a pyramid.  Before we make a strong commitment, we are often open-minded.  We could go in several different directions.  But once we start down any of the faces of the pyramid, it becomes harder and harder to retract – and we move further away from people who, initially, were in the very same undecided state as us.  The more we follow a course of action, the greater our urge to defend all the time and energy we’ve committed down that path.  I can’t have taken a wrong decision, because if I had, I would have wasted all that time and energy, and that’s not the kind of person I am.  So people invest even more time and energy, walking yet further down that pyramid of over-confidence, in order to maintain their own self-image.

At root, what’s going wrong here is what psychologists call self-justification.  Once upon a time, the word pride would have been used.  We can’t bear to realise that our own self-image is at fault, so we continue to take actions – often harmful actions – in support of our self-image.

The final chapters of both books offer hope.  They give examples of people who are able to break out of this spiral of self-justification.  It isn’t easy.

An important conclusion is that we should put greater focus on educating people about cognitive biases.  Knowing about a cognitive bias doesn’t make us immune to it, but it does help – especially when we are still only a few rungs down the face of the pyramid.  As stated in the conclusion of “The invisible gorilla”:

One of our messages in this book is indeed negative: Be wary of your intuitions, especially intuitions about how your own mind works.  Our mental systems for rapid cognition excel at solving the problems they evolved to solve, but our cultures, societies, and technologies today are much more complex than those of our ancestors.  In many cases, intuition is poorly adapted to solving problems in the modern world.  Think twice before you decide to trust intuition over rational analysis, especially in important matters, and watch out for people who tell you intuition can be a panacea for decision-making ills…

But we also have an affirmative message to leave you with.  You can make better decisions, and maybe even get a better life, if you do your best to look for the invisible gorillas in the world around you…  There may be important things right in front of you that you aren’t noticing due to the illusion of attention.  Now that you know about this illusion, you’ll be less apt to assume you’re seeing everything there is to see.  You may think you remember some things much better than you really do, because of the illusion of memory.  Now that you understand this illusion, you’ll trust your own memories, and those of others, a bit less, and you’ll try to corroborate your memory in important situations.  You’ll recognise that the confidence people express often reflects their personalities rather than their knowledge, memory, or abilities…  You’ll be skeptical of claims that simple tricks can unleash the untapped potential in your mind, but you’ll be aware that you can develop phenomenal levels of expertise if you study and practice the right way.

Similarly, we should also take more care to widely explain the benefits of the scientific approach, which searches for disconfirming evidence as much as it searches for confirming evidence.

That’s the pro-reason approach to encouraging better reasoning.  But reason, by itself, often isn’t enough.  If we are going to face up to the fact that we’ve made grave errors of judgement, which have caused pain, injustice, and sometimes even death and destruction, we frequently need powerful emotional support.  To enable us to admit to ourselves that we’ve made major mistakes, it greatly helps if we can find another image of ourselves, which sees us as making better contributions in the future.  That’s the pro-hope approach to encouraging better reasoning.  The two books have examples of each approach.  Both books are well worth reading.  At the very least, you may get some new insight as to why discussions on Internet forums often descend into people seemingly talking past each other, or why formerly friendly colleagues can get stuck in an unhelpful rut of deeply disliking each other.

11 September 2010

No escape from technology

Filed under: books, evolution, Kurzweil, UKH+ — David Wood @ 1:51 am

We can never escape the bio-technological nexus and get “back to nature” – because we have never lived in nature.

That sentence, from the final chapter of Timothy Taylor’s “The Artificial Ape: How technology changed the course of human evolution“, sums up one of my key takeaways from this fine book.

It’s a book that’s not afraid to criticise giants.  Aspects of Charles Darwin’s thinking are examined and found wanting.  Modern day technology visionary Ray Kurzweil also comes under criticism:

The claims of Ray Kurzweil (that we are approaching a critical moment when biology will be overtaken by artificial constructs) … lack a critical historical – and prehistoric – perspective…

Kurzweil argues that the age of machines is upon us …  and that technology is reaching a point where it can innovate itself, producing ever more complex forms of artificial intelligence.  My argument in this book is that, scary or not, none of this is new.  Not only have we invented technology, from the stone tools to the wheeled wagon, from spectacles to genetic engineering, but that technology, within a framework of some 2 to 3 million years, has, physically and mentally, made us.

Taylor’s book portrays the emergence of humanity as a grand puzzle.  From a narrow evolutionary perspective, humans should not have come into existence.  Our heads are too large. In many cases, they’re too large to pass through the narrow gap in their mother’s pelvis.  Theory suggests, and fossils confirm, that the prehistoric change from walking on all fours to walking upright had the effect of narrowing this gap in the pelvis.  The resulting evolutionary pressures should have resulted in smaller brains.  Yet, after several eons, the brain, instead, became larger and larger.

That’s just the start of the paradox.  The human baby is astonishingly vulnerable.  Worse, it makes its mother increasingly vulnerable too.  How could “survival of the fittest” select this ridiculously unfit outcome?

Of course, a larger brain has survival upsides as well as survival downsides.  It enables greater sociality, and the creation of sophisticated tools, including weapons.  But Taylor marshals evidence suggesting that the first use of tools by pre-humans long pre-dated the growth in head size.  This leads to the suggestion that two tools, in particular, played vital roles in enabling the emergence of the larger brain:

  • The invention of the sling, made from fur, which enabled mothers to carry their infants hands-free
  • The invention of cooking, with fire, that made it easier for nourishment to be quickly obtained from food.

To briefly elaborate the second point: walking upright means the digestive gut becomes compressed.  It becomes shorter.  There’s less time for nourishment to be extracted from food.  Moreover, a larger head increases the requirements for fast delivery of nourishment.  Again, from a narrow evolutionary point of view, the emergence of big-brained humans makes little sense.  But cooking comes to the rescue.  Cooking and the child-carrying sling are two examples of technology that enabled the emergence of humans.

The resulting creatures – us – are weaker, in a purely biological sense, than our evolutionary forebears.  Without our technological aids, we would fare poorly in any contest of survival with other apes.  It is only the combination of technology-plus-nature that makes us stronger.

We’re used to thinking that the development of tools took place in parallel with increasing pre-human intelligence.  Taylor’s argument is that, in a significant way, the former preceded the latter.  Without the technology, the pre-human brain could not expand.

The book uses this kind of thinking to address various other puzzles:

  • For example, the technology-impoverished natives of the tip of South America, whom Darwin met on his voyage of discovery on the Beagle, had eyesight that was far better than even the keenest-eyed sailor on the ship.  Technological progress went hand-in-hand with a weakening of biological power.
  • Taylor considers the case of the aborigines of Tasmania, who were technologically backward compared to those of mainland Australia: they lacked all clothing, and apparently could not make fire for themselves.  The archeological record indicates that the Tasmanian aborigines actually lost the use of various technologies over the course of several millennia.  Taylor reaches a different conclusion from popular writer Jared Diamond, who seems to take it for granted that this loss of technology made the aborigines weaker.  Taylor suggests that, in many ways, these aborigines became stronger and fitter, in their given environment, as they abandoned their clothing and their fishing tools.

There are many other examples – but I’ll leave it to you to read the book to find out more.  The book also has some fascinating examples of ancient tools.

I think that Taylor’s modifications of Darwin’s ideas are probably right.  What of his modifications of Kurzweil’s ideas?  Is the technological spurt of the present day really “nothing new”?  Well, yes and no.  I believe Kurzweil is correct to point out that the kinds of changes that are likely to be enabled by technology in the relatively near future – perhaps in the lifetime of many people who are already alive – are qualitatively different from anything that has gone before:

  • Technology might extend our lifespans, not just by a percentage, but by orders of magnitude (perhaps indefinitely)
  • Technology might create artificial intelligences that are orders of magnitude more powerful than any intelligence that has existed on this planet so far.

As I’ve already mentioned in my previous blogpost – which I wrote before starting to read Taylor’s book – Timothy Taylor is the guest speaker at the September meeting of the UK chapter of Humanity+.  People who attend will have the chance to hear more details of these provocative theories, and to query them direct with the author.  There will also be an opportunity to purchase signed copies of his book.  I hope to see some of you there!

I’ll give the last words to Dr Taylor:

Technology, especially the baby-carrying sling, allowed us to push back our biological limits, trading in our physical strength for an increasingly retained infantile early helplessness that allowed our brains to expand, forming themselves under increasingly complex artificial conditions…  In terms of brain growth, the high-water mark was passed some 40,000 years ago.  The pressure on that organ has been off ever since we started outsourcing intelligence in the form of external symbolic storage.  That is now so sophisticated through the new world information networking systems that what will emerge in future may no longer be controlled by our own volition…

[Technology] could also destroy our planet.  But there is no back-to-nature solution.  There never has been for the artificial ape.

1 May 2010

Costs of complexity: in healthcare, and in the mobile industry

Filed under: books, business model, disruption, healthcare, innovation, modularity, simplicity — David Wood @ 11:56 am

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

That sentence comes from page 92 of “The Innovator’s Prescription: A disruptive solution for health care“, co-authored by Clayton Christensen, Jerome Grossman, and Jason Hwang.  Like all the books authored (or co-authored) by Christensen, the book is full of implications for fields outside the particular industry being discussed.

In the case of this book, the subject matter is critically important in its own right: how can we find ways to allow technological breakthroughs to reduce the spiralling costs of healthcare?

In the book, the authors brilliantly extend and apply Christensen’s well-known ideas on disruptive change to the field of healthcare.  But the book should be recommended reading for anyone interested in either strategy or operational effectiveness in any hi-tech industry.  (It’s also valuable for anyone interested in the future of medicine – which probably includes all of us, since most of us can anticipate spending increasing amounts of time in hospitals or doctors’ surgeries as we become older.)

I’m still less than half way through reading this book, but the section I’ve just read seems to speak loudly to issues in the mobile industry, as well as to the healthcare industry.

It describes a manufacturing plant which was struggling with overhead costs.  At this plant, 6.2 dollars were spent in overhead expenses for every dollar spent on direct labour:

These overhead costs included not just utilities and depreciation, but the costs of scheduling, expediting, quality control, repair and rework, scrap maintenance, materials handling, accounting, computer systems, and so on.  Overhead comprised all costs that were not directly spent in making products.

The quality of products made at that plant was also causing concern:

About 15 percent of all overhead costs were created by the need to repair and rework products that failed in the field, or had been discovered by inspectors as faulty before shipment.

However, it didn’t appear to the manager that any money was being wasted:

The plant hadn’t been painted inside or out in 20 years.  The landscaping was now overrun by weeds.  The receptionist in the bare-bones lobby had been replaced long ago with a paper directory and a phone.  The manager had no secretarial assistance, and her gray World War II vintage steel desk was dented by a kick from some frustrated predecessor.

Nevertheless, this particular plant had considerably higher overhead burden rates than the other plants from the same company.  What was the difference?

The difference was in the complexity.  This particular plant was set up to cope with large numbers of different product designs, whereas the other plants (which had been created later) had been able to optimise for particular design families.

The original plant essentially had the value proposition,

We’ll make any product that anyone designs

In contrast, the newer plants had the following kind of value proposition:

If you need a product that can be made through one of these two sequences of operations and activities, we’ll do it for you at the lowest possible cost and the highest possible quality.

Further analysis, across a number of different plants, reached the following results:

Each time the scale of a plant doubled, holding the degree of pathway complexity constant, the overhead rate could be expected to fall by 15 percent.  So, for example, a plant that made two families and generated $40 million in sales would be expected to have an overhead burden ratio of about 2.85, while the burden rate for a plant making two families with $80 million in sales would be 15% lower (2.85 x 0.85 = 2.42).  But every time the number of families produced in a plant of a given scale doubled, the overhead burden rate soared 27 percent.  So if a two-pathway, $40 million plant accepted products that required two additional pathways, but that did not increase its sales volume, its overhead burden rate would increase by 2.85 x 1.27, to 3.62…
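
Those two factors can be combined into a toy model.  Below is a minimal Python sketch – my own illustration, not from the book – which assumes the 15 percent fall and the 27 percent rise each compound once per doubling:

```python
import math

def overhead_burden_rate(sales_m, families, base_rate=2.85,
                         base_sales_m=40.0, base_families=2):
    """Toy model of a plant's overhead burden rate.

    Assumes (per the quoted analysis) that the rate falls 15% with each
    doubling of sales volume, and rises 27% with each doubling of the
    number of product families, from a baseline two-family, $40M plant.
    """
    scale_doublings = math.log2(sales_m / base_sales_m)
    family_doublings = math.log2(families / base_families)
    return base_rate * (0.85 ** scale_doublings) * (1.27 ** family_doublings)

# The two examples from the quotation:
print(round(overhead_burden_rate(80, 2), 2))  # 2.42: scale doubled, families fixed
print(round(overhead_burden_rate(40, 4), 2))  # 3.62: families doubled, scale fixed
```

The asymmetry is the point: complexity compounds faster (27 percent per doubling) than scale economises (15 percent per doubling), so adding product pathways quickly swamps the benefits of growth.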

This is just one aspect of a long and fascinating analysis.  Modern day general purpose hospitals support huge numbers of different patient care pathways, so high overhead rates are inevitable.  The solution is to allow the formation of separate specialist units, where practitioners can then focus on iteratively optimising particular lines of healthcare.  We can already see this in firms that specialise in laser eye surgery, in hernia treatment, and so on.  Without these new units separating and removing some of the complexity of the original unit, it becomes harder and harder for innovation to take place.  The innovation becomes stifled under conflicting business models.  (I’m simplifying the argument here: please take a look at the book for the full picture.)

In short: reducing overhead costs isn’t just a matter of “eliminating obvious inefficiencies, spending less time on paperwork, etc”.  It often requires initially painful structural changes, in which overly complex multi-function units are simplified by the removal and separation of business lines and product pathways.  Only with the new, simplified set up – often involving new companies, and sometimes involving “creative destruction” – can disruptive innovations flourish.

Rising organisational complexity impacts the mobile industry too.  I’ve written about this before.  For example, in May last year I wrote an article “Platform strategy failure modes“:

The first failure mode is when a device manufacturer fails to have a strategy towards mobile software platforms.  In this case, the adage holds true that a failure to strategise is a strategy to fail.  A device manufacturer that simply “follows the wind” – picking platform P1 for device D1 because customer C1 expressed a preference for P1, picking platform P2 for device D2 because customer C2 expressed a preference for P2, etc – is going to find that the effort of interacting successfully with all these different platforms far exceeds their expectations.  Mobile software platforms require substantial investment from manufacturers, before the manufacturer can reap commercial rewards from these platforms.  (Getting a device ready to demo is one thing.  That can be relatively easy.  Getting a device approved to ship onto real networks – a device that is sufficiently differentiated to stand out from a crowd of lookalike devices – can take a lot longer.)

The second failure mode is similar to the first one.  It’s when a device manufacturer spreads itself  too thinly across multiple platforms.  In the previous case, the manufacturer ended up working with multiple platforms, without consciously planning that outcome.  In this case, the manufacturer knows what they are doing.  They reason to themselves as follows:

  • We are a highly competent company;
  • We can manage to work with (say) three significant mobile software platforms;
  • Other companies couldn’t cope with this diversification, but we are different.

But the outcome is the same as the previous case, even though different thinking gets the manufacturer into that predicament.  The root failure is, again, a failure to appreciate the scale and complexity of mobile software platforms.  These platforms can deliver tremendous value, but require significant ongoing skill and investment to yield that kind of result.

The third failure mode is when a manufacturer seeks re-use across several different mobile software platforms.  The idea is that components (whether at the application or system level) are developed in a platform-agnostic way, so they can fit into each platform equally well.

To be clear, this is a fine goal.  Done right, there are big dividends.  But my observation is that this strategy is hard to get right.  The strategy typically involves some kind of additional “platform independent layer”, that isolates the software in the component from the particular programming interfaces of the underlying platform.  However, this additional layer often introduces its own complications…

Seeking clever economies of scale is commendable.  But there often comes a time when growing scale is bedevilled by growing complexity.  It’s as mentioned at the beginning of this article:

While indeed there are economies of scale, there are countervailing costs of complexity – the more product families produced in a plant, the higher the overhead burden rates.

Even more than a drive to scale, companies in the mobile space need a drive towards simplicity. That means organisational simplicity as well as product simplicity.

As I stated in my article “Simplicity, simplicity, simplicity“:

The inherent complexity of present-day smartphones risks all kinds of bad outcomes:

  • Smartphone device creation projects may become time-consuming and delay-prone, and the smartphones themselves may compromise on quality in order to try to hit a fast-receding market window;
  • Smartphone application development may become difficult, as developers need to juggle different programming interfaces and optimisation methods;
  • Smartphone users may fail to find the functionality they believe is contained (somewhere!) within their handset, and having found that functionality, they may struggle to learn how to use it.

In short, smartphone system complexity risks impacting manufacturability, developability, and usability.  The number one issue for the mobile industry, arguably, is to constantly find better ways to tame this complexity.

The companies that are successfully addressing the complexity issue seem, on the whole, to be the ones on the rise in the mobile space.

Footnote: It’s a big claim, but it may well be true that of all the books on the subject of innovation in the last 20 years, Clayton Christensen’s writings are the most consistently important.  The subtitle of his first book, “The innovator’s dilemma”, is a reminder why: “When new technologies cause great firms to fail“.

5 April 2010

The ascent of money: huge opportunities and huge risks

Filed under: books, Economics, predictability — David Wood @ 9:36 pm

The turning point of the American Civil War.  The defeat of Napoleon.  The lead-up to the French Revolution.  The decline of Imperial Spain.  These chapters of history all have intriguing back stories – according to Harvard professor Niall Ferguson, in his book “The Ascent of Money: A Financial History of the World“.

The back stories, each time, refer to the strengths and weaknesses of evolving financial systems.

Appreciating these back stories isn’t just an intellectual curiosity.  It provides rich context for the view that financial systems are sophisticated and complex entities that deserve much wider understanding.  Without this understanding, it’s all too easy for people to hold one or other overly simplistic view of financial systems, such as:

  • Financial systems are all fundamentally flawed;
  • Financial systems are all fundamentally beneficial;
  • There are “sure thing” investments which people can learn about;
  • Financial systems should be picked apart – the world would be better off without them;
  • Markets are inherently insane;
  • Markets are inherently sane;
  • Bankers (and their ilk) deserve our scorn;
  • Bankers (and their ilk) deserve our deep gratitude.

As the book progresses, Ferguson sweeps forwards and backwards throughout history, gradually building up a fuller picture of evolving financial systems:

  • The banking system;
  • Government bonds;
  • Stock markets;
  • Insurance and securities;
  • The housing market;
  • Hedge funds;
  • Globalisation;
  • The growing role of China in financial systems.

Like me, Ferguson was born in Scotland.  I was struck by the number of Scots-born heroes and villains the book introduces, including an infamous Glaswegian loan shark, the creators of the first true insurance company, officers of the companies involved in the Anglo-China “Opium Wars”, and John Law – instigator in France of one of history’s first great stock market bubbles.  Of course, many non-Scots have starring roles too – including Shakespeare’s Shylock, the Medicis, the Rothschilds, George Soros, the managers of Enron, Milton Friedman, and John Maynard Keynes.

Time and again, Ferguson highlights lessons for the present day.  Yes, new financial systems can liberate great amounts of creativity.  Innovation in financial systems can provide significant benefits for society.  But, at the same time, financial systems can be mis-managed, with dreadful consequences.  One major contributory cause of mis-managing these systems is when people lack a proper historical perspective – for example, when the experience of leading financiers is just of times of growth, rather than times of savage decline.

Among many fascinating episodes covered in the book, I found two to be particularly chilling:

  • The astonishing (in retrospect) over-confidence of observers in the period leading up to the First World War, that any such war could not possibly happen;
  • The astonishing (in retrospect) over-confidence of the managers of the Long Term Capital Management (LTCM) hedge fund, that their fund could not possibly fail.

Veteran journalist Hamish McRae describes some of the pre-WWI thinking in his review of Ferguson’s book in The Independent:

The 19th-century globalisation ended with the catastrophe of the First World War. It is really scary to realise how unaware people were of the fragility of those times. In 1910, the British journalist Norman Angell published The Great Illusion, in which he argued that war between the great powers had become an economic impossibility because of “the delicate interdependence of international finance”.

In spring 1914 an international commission reported on the Balkan Wars of 1912-13. The British member of the commission, Henry Noel Brailsford, wrote: “In Europe the epoch of conquest is over and save in the Balkans perhaps on the fringes of the Austrian and Russian empires, it is as certain as anything in politics that the frontiers of our national states are finally drawn. My own belief is that there will be no more war among the six powers.”

And Ferguson re-tells the story of LTCM in his online article “Wall Street Lays Another Egg” (which also covers many of the other themes from his book):

…how exactly do you price a derivative? What precisely is an option worth? The answers to those questions required a revolution in financial theory. From an academic point of view, what this revolution achieved was highly impressive. But the events of the 1990s, as the rise of quantitative finance replaced preppies with quants (quantitative analysts) all along Wall Street, revealed a new truth: those whom the gods want to destroy they first teach math.

Working closely with Fischer Black, of the consulting firm Arthur D. Little, M.I.T.’s Myron Scholes invented a groundbreaking new theory of pricing options, to which his colleague Robert Merton also contributed. (Scholes and Merton would share the 1997 Nobel Prize in economics.) They reasoned that a call option’s value depended on six variables: the current market price of the stock (S), the agreed future price at which the stock could be bought (L), the time until the expiration date of the option (t), the risk-free rate of return in the economy as a whole (r), the probability that the option will be exercised (N), and—the crucial variable—the expected volatility of the stock, i.e., the likely fluctuations of its price between the time of purchase and the expiration date (s). With wonderful mathematical wizardry, the quants reduced the price of a call option to this formula (the Black-Scholes formula).
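
For reference – the formula itself is not reproduced in the excerpt above – the standard Black-Scholes call price, written with the variable names Ferguson lists, and with N(·) denoting the cumulative standard normal distribution, is:

$$C = S\,N(d_1) - L\,e^{-rt}\,N(d_2), \qquad d_1 = \frac{\ln(S/L) + (r + s^2/2)\,t}{s\sqrt{t}}, \qquad d_2 = d_1 - s\sqrt{t}$$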

Feeling a bit baffled? Can’t follow the algebra? That was just fine by the quants. To make money from this magic formula, they needed markets to be full of people who didn’t have a clue about how to price options but relied instead on their (seldom accurate) gut instincts. They also needed a great deal of computing power, a force which had been transforming the financial markets since the early 1980s. Their final requirement was a partner with some market savvy in order to make the leap from the faculty club to the trading floor. Black, who would soon be struck down by cancer, could not be that partner. But John Meriwether could. The former head of the bond-arbitrage group at Salomon Brothers, Meriwether had made his first fortune in the wake of the S&L meltdown of the late 1980s. The hedge fund he created with Scholes and Merton in 1994 was called Long-Term Capital Management.

In its brief, four-year life, Long-Term was the brightest star in the hedge-fund firmament, generating mind-blowing returns for its elite club of investors and even more money for its founders. Needless to say, the firm did more than just trade options, though selling puts on the stock market became such a big part of its business that it was nicknamed “the central bank of volatility” by banks buying insurance against a big stock-market sell-off. In fact, the partners were simultaneously pursuing multiple trading strategies, about 100 of them, with a total of 7,600 positions. This conformed to a second key rule of the new mathematical finance: the virtue of diversification, a principle that had been formalized by Harry M. Markowitz, of the Rand Corporation. Diversification was all about having a multitude of uncorrelated positions. One might go wrong, or even two. But thousands just could not go wrong simultaneously.
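
A standard back-of-envelope result – my addition, not part of Ferguson’s article – shows both the appeal and the limit of that diversification logic.  For n equally sized positions, each with variance \(\sigma^2\) and average pairwise correlation \(\rho\), the portfolio variance is:

$$\sigma_p^2 = \frac{\sigma^2}{n} + \frac{n-1}{n}\,\rho\,\sigma^2$$

With \(\rho \approx 0\), the first term dominates and risk shrinks towards zero as n grows.  But if correlations jump towards 1 in a panic, the irreducible floor \(\rho\sigma^2\) remains – and 7,600 positions behave like one, which is exactly the punchline of the quotation that follows.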

The mathematics were reassuring. According to the firm’s “Value at Risk” models, it would take a 10-σ (in other words, 10-standard-deviation) event to cause the firm to lose all its capital in a single year. But the probability of such an event, according to the quants, was 1 in 10^24—or effectively zero. Indeed, the models said the most Long-Term was likely to lose in a single day was $45 million. For that reason, the partners felt no compunction about leveraging their trades. At the end of August 1997, the fund’s capital was $6.7 billion, but the debt-financed assets on its balance sheet amounted to $126 billion, a ratio of assets to capital of 19 to 1.

There is no need to rehearse here the story of Long-Term’s downfall, which was precipitated by a Russian debt default. Suffice it to say that on Friday, August 21, 1998, the firm lost $550 million—15 percent of its entire capital, and vastly more than its mathematical models had said was possible. The key point is to appreciate why the quants were so wrong.

The problem lay with the assumptions that underlie so much of mathematical finance. In order to construct their models, the quants had to postulate a planet where the inhabitants were omniscient and perfectly rational; where they instantly absorbed all new information and used it to maximize profits; where they never stopped trading; where markets were continuous, frictionless, and completely liquid. Financial markets on this planet followed a “random walk,” meaning that each day’s prices were quite unrelated to the previous day’s, but reflected no more and no less than all the relevant information currently available. The returns on this planet’s stock market were normally distributed along the bell curve, with most years clustered closely around the mean, and two-thirds of them within one standard deviation of the mean. On such a planet, a “six standard deviation” sell-off would be about as common as a person shorter than one foot in our world. It would happen only once in four million years of trading.

But Long-Term was not located on Planet Finance. It was based in Greenwich, Connecticut, on Planet Earth, a place inhabited by emotional human beings, always capable of flipping suddenly and en masse from greed to fear. In the case of Long-Term, the herding problem was acute, because many other firms had begun trying to copy Long-Term’s strategies in the hope of replicating its stellar performance. When things began to go wrong, there was a truly bovine stampede for the exits. The result was a massive, synchronized downturn in virtually all asset markets. Diversification was no defense in such a crisis. As one leading London hedge-fund manager later put it to Meriwether, “John, you were the correlation.”

There was, however, another reason why Long-Term failed. The quants’ Value at Risk models had implied that the loss the firm suffered in August 1998 was so unlikely that it ought never to have happened in the entire life of the universe. But that was because the models were working with just five years of data. If they had gone back even 11 years, they would have captured the 1987 stock-market crash. If they had gone back 80 years they would have captured the last great Russian default, after the 1917 revolution. Meriwether himself, born in 1947, ruefully observed, “If I had lived through the Depression, I would have been in a better position to understand events.” To put it bluntly, the Nobel Prize winners knew plenty of mathematics but not enough history.
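
The quoted odds are easy to check.  Here is a short Python sketch – assuming, as the models did, normally distributed daily returns, plus a conventional 252 trading days per year – which reproduces both the “four million years” figure and the 1-in-10^24 estimate:

```python
import math

def normal_tail(sigmas):
    """One-sided tail probability of a daily move beyond `sigmas`
    standard deviations, under the normal ("Planet Finance") assumption."""
    return 0.5 * math.erfc(sigmas / math.sqrt(2))

TRADING_DAYS_PER_YEAR = 252  # assumption: a typical trading calendar

for sigmas in (6, 10):
    p = normal_tail(sigmas)
    years = 1 / (p * TRADING_DAYS_PER_YEAR)
    print(f"{sigmas}-sigma daily move: p = {p:.1e}, "
          f"expected once every {years:.1e} years of trading")

# 6-sigma:  p = 9.9e-10 -> once in roughly four million years, as quoted.
# 10-sigma: p = 7.6e-24 -> Ferguson's "1 in 10^24".
```

Real markets have fat tails, of course, and produce such moves far more often – which is the whole point of the story.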

These episodes should remind us of the fragility of our current situation.  Indeed, as one of many potential future scenarios, Ferguson candidly discusses the prospects for a serious breakdown in relations between China and the west, akin to the breakdown of relations that precipitated the First World War.

In summary: I recommend this book, not only because it is full of intriguing anecdotes, but because it will help to raise awareness of the complex impacts of financial systems.  It will help boost general literacy about all aspects of money – and should, therefore, help us to be more effective in how we collectively manage financial innovation.

Note: There are two editions of this book: one released in 2008, and one released in 2009.  The latter has a fuller account of the recent global financial crisis, and for that reason, is the better one to read.
