dw2

10 February 2013

Fixing bugs in minds and bugs in societies

Suppose we notice what appear to be bugs in our thinking processes. Should we try to fix these bugs?

Or how about bugs in the way society works? Should we try to fix these bugs too?

As examples of bugs of the first kind, I return to a book I reviewed some time ago, “Kluge: The Haphazard Construction of the Human Mind”. I entitled my review “The human mind as a flawed creation of nature”, and I still stick by that description. In that review, I pulled out the following quote from near the end of the book:

In this book, we’ve discussed several bugs in our cognitive makeup: confirmation bias, mental contamination, anchoring, framing, inadequate self-control, the ruminative cycle, the focussing illusion, motivated reasoning, and false memory, not to mention absent-mindedness, an ambiguous linguistic system, and vulnerability to mental disorders. Our memory, contextually driven as it is, is ill suited to many of the demands of modern life, and our self-control mechanisms are almost hopelessly split. Our ancestral mechanisms were shaped in a different world, and our more modern deliberative mechanisms can’t shake the influence of that past. In every domain we have considered, from memory to belief, choice, language, and pleasure, we have seen that a mind built largely through the progressive overlay of technologies is far from perfect…

These bugs in our mental makeup are far from being harmless quirks or curiosities. They can lead us:

  • to overly trust people who have the visual trappings of authority,
  • to fail to make adequate provision for our own futures,
  • to keep throwing money into bad investments,
  • and to jump to all kinds of dangerous premature conclusions.

But should we try to fix these bugs?

The field that made the term ‘bug’ famous in this sense of a mistake, software engineering, provides many cautionary tales of bug fixing going wrong:

  • Sometimes what appears to be a ‘bug’ in a piece of software turns out to be a useful ‘feature’, with a good purpose after all.
  • Sometimes a fix introduces unexpected side-effects that are worse than the bug which was fixed.

I shared an example of the second kind in the “Managing defects” chapter of the book I wrote in 2004-5, “Symbian for software leaders: principles of successful smartphone development projects”:

An embarrassing moment with defects

The first million-selling product that I helped to build was the Psion Series 3a handheld computer. This was designed as a distinct evolutionary step-up from its predecessor, the original Series 3 (often called the “Psion 3 classic” in retrospect)…

At last the day came (several weeks late, as it happened) to ship the software to Japan, where it would be flashed into large numbers of chips ready to assemble into production Series 3a devices. It was ROM version 3.20. No sooner was it sent than panic set in among the development team. Two of us had independently noticed a new defect in the agenda application. If a user set an alarm on a repeating entry, and then adjusted the time of this entry, in some circumstances the alarm would fail to ring. We reasoned that this was a really bad defect – after all, two of us had independently found it.

The engineer who had written the engine for the application – the part dealing with all data manipulation algorithms, including calculating alarm times – studied his code, and came up with a fix. We were hesitant, since it was complex code. So we performed a mass code review: lots of the best brains in the team talked through the details of the fix. After twenty-four hours, we decided the fix was good. So we recalled 3.20, and released 3.21 in its place. To our relief, no chips were lost in the process: the flashing had not yet started.

Following standard practice, we upgraded the prototype devices of everyone in the development team, to run 3.21. As we waited for the chips to return, we kept using our devices – continuing (in the jargon of the team) to “eat our own dog food”. Strangely, there were a few new puzzling problems with alarms on entries. Actually, it soon became clear these problems were a lot worse than the problem that had just been fixed. As we diagnosed these new problems, a sinking feeling grew. Despite our intense care (but probably because of the intense pressure) we had failed to fully consider all the routes through the agenda engine code; the change made for 3.21 was actually a regression on previous behaviour.

Once again, we made a phone call to Japan. This time, we were too late to prevent some tens of thousands of wasted chips. We put the agenda engine code back to its previous state, and decided that was good enough! (Because of some other minor changes, the shipping version number was incremented to 3.22.) We decided to live with this one defect, in order not to hold up production any longer.

We were expecting to hear more news about this particular defect from the Psion technical support teams, but the call never came. This defect never featured on the list of defects reported by end users. In retrospect, we had been misled by the fact that two of us had independently found this defect during the final test phase: this distorted our priority call…

That was an expensive mistake, which seared a cautionary attitude into my own brain, regarding the dangers of last-minute changes to complex software. All seasoned software engineers have similar tales they can tell, from their own experience.
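To make the shape of that tale concrete, here is a small Python sketch of the same bug class. To be clear, this is my own invented reconstruction, not Psion’s actual code: every name and detail here is hypothetical.

```python
# Hypothetical reconstruction of the recurring-alarm bug class.
# Not Psion's actual code: every name and detail here is invented.

from datetime import datetime, timedelta

class RepeatingEntry:
    def __init__(self, start: datetime, period_days: int,
                 alarm_offset: timedelta):
        self.start = start
        self.period = timedelta(days=period_days)
        self.alarm_offset = alarm_offset
        self.next_occurrence = start            # cached for speed

    def next_alarm(self, now: datetime) -> datetime:
        # Advance the cached occurrence until its alarm lies in the future.
        while self.next_occurrence - self.alarm_offset <= now:
            self.next_occurrence += self.period
        return self.next_occurrence - self.alarm_offset

    def move_entry(self, new_start: datetime) -> None:
        # The 'fix': shift the cached occurrence by the same delta as the
        # start time.  This cures the reported defect (an alarm failing to
        # ring after an edit) on the code paths that were reviewed.  But if
        # the shift drags an occurrence's alarm time into the past, the
        # loop in next_alarm() silently steps over it, and that alarm
        # never rings: a regression on a path nobody re-examined.
        delta = new_start - self.start
        self.start = new_start
        self.next_occurrence += delta
```

The moral matches the story: the fix is perfectly sensible on the path that was reviewed, and quietly wrong on a path that wasn’t.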

If attempts to fix defects in software are often counter-productive, how much more dangerous are attempts to fix defects in our thinking processes – or defects in how our societies operate! At least in the first case, we generally still have access to the source code, and to the design intention of the original software authors. For the other examples, the long evolutionary history that led to particular designs is something at which we can only guess. It would be like trying to fix a software bug that somehow results from the combination of many millions of lines of source code, written decades ago by people who left no documentation and who are not available for consultation.

What I’ve just stated is a version of an argument that conservative-minded thinkers often give, against attempts to conduct “social engineering” or “improve on nature”. Tinkering with ages-old thinking processes – or with structures within societies – carries the risk that we fail to appreciate many hidden connections. Therefore (the argument runs) we should desist from any such experimentation.

Versions of this argument appeared, from two different commentators, in responses to my previous blogpost. One put it like this:

The trouble is that ‘cognitive biases and engrained mistakes’ may appear dysfunctional but they are, in fact, evolutionarily successful adaptations of humanity to its highly complex environment. These, including prejudice, provide highly effective means for the resolution of really existing problems in human capacity…

Rational policies to deal with human and social complexity have almost invariably been proved to be inhumane and brutal, fine for the theoretician in the British Library, but dreadful in the field.

Another continued the theme:

I have much sympathy for [the] point about “cognitive biases and engrained mistakes”. The belief that one has identified cognitive bias in another or has liberated oneself from such can be a “Fatal Conceit,” to borrow a phrase from Hayek, and has indeed not infrequently given rise to inhumane treatment even of whole populations. One of my favourite sayings is David Hume’s “the rules of morality are not conclusions of our reason,” which is at the heart of Hayek’s Fatal Conceit argument.

But the conclusion I draw is different. I don’t conclude, “Never try to fix bugs”. After all, the very next sentence from my chapter on “Managing defects” stated, “We eventually produced a proper fix several months later”. Indeed, many bugs do demand urgent fixes. Instead, my conclusion is that bug fixing in complex systems needs a great deal of careful thought, including cautious experimentation, data analysis, and peer review.

The analogy can be taken one more step. Suppose that a software engineer has a bad track record in his or her defect fixes. Despite claiming, each time, to be exercising care and attention, the results speak differently: the fixes usually make things worse. Suppose, further, that this software engineer comes from a particular company, and that fixes from that company have the same poor track record. (To make this more vivid, the name of this company might be “Technocratic solutions” or “Socialista” or “Utopia software”. You can probably see where this argument is going…) That would be a reason for especial discomfort whenever someone from that company submits new code changes in an attempt to fix a given bug.
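The track-record intuition can even be made quantitative. Here is a minimal Beta-Bernoulli sketch (my own illustration, with invented numbers): each past fix either helped or hurt, and the probability we should assign to the next fix helping falls as the record worsens.

```python
# Minimal Beta-Bernoulli sketch of the 'track record' intuition.
# My own illustration; all the numbers are invented.

def prob_next_fix_helps(helped: int, hurt: int,
                        prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of a Beta(prior_a, prior_b) belief after observing
    'helped' good fixes and 'hurt' bad ones (Laplace's rule of succession
    when the prior is uniform)."""
    return (prior_a + helped) / (prior_a + prior_b + helped + hurt)

# A newcomer gets the uniform prior: a 50% chance their fix helps.
print(prob_next_fix_helps(0, 0))    # 0.5
# An engineer whose last 12 fixes included 9 regressions:
print(prob_next_fix_helps(3, 9))    # ~0.29 - scrutinise their patches harder
```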

Well, something similar happens in the field of social change. History has shown, in many cases, that attempts at mental engineering and social engineering were counter-productive. For that reason, many conservatives support various “precautionary principles”. They are especially fearful of any social changes proposed by people they can tar with labels such as “technocratic” or “socialist” or “utopian”.

These precautionary principles presuppose that the ‘cure’ will be worse than the ‘disease’. However, I personally have greater confidence in the fast-improving power of new fields of science, including the fields that study our mind and brain. These improvements are placing ever greater understanding in our hands – and hence, ever greater power to fix bugs without introducing nasty side-effects.

For these reasons, I do look forward (as I said in my previous posting) to these improvements:

helping individuals and societies rise above cognitive biases and engrained mistakes in reasoning… and accelerating a reformation of the political and economic environment, so that the outcomes that are rationally best are pursued, instead of those which are expedient and profitable for the people who currently possess the most power and influence.

Finally, let me offer some thoughts on the observation that “the rules of morality are not conclusions of our reason”. That observation is vividly supported by the disturbing “moral dumbfounding” examples discussed by Jonathan Haidt in his excellent book “The Happiness Hypothesis: Finding Modern Truth in Ancient Wisdom” (which I briefly reviewed here). But does that observation mean that we should stop trying to reason with people about moral choices?

Here, I’ll adapt comments from my review of “The Moral Landscape: How Science Can Determine Human Values”, by Sam Harris.

That book considers how we might go about finding answers to big questions such as “how should I live?” and “what makes some ways of life more moral than others?”  As some specific examples, how should we respond to:

  • The Taliban’s insistence that the education of girls is an abomination?
  • The stance by Jehovah’s Witnesses against blood transfusion?
  • The prohibition by the Catholic Church of the use of condoms?
  • The legalisation of same-sex relationships?
  • The use of embryonic stem cells in the search for cures of diseases such as Alzheimer’s and Parkinson’s?
  • A would-be Islamist suicide bomber who is convinced that his intended actions will propel him into a paradise of abundant mental well-being?

One response is that such questions are the province of religion. The correct answers are revealed via prophets and/or holy books.  The answers are already clear, to those with the eye of faith. It is a divine being that tells us, directly or indirectly, the difference between good and evil. There’s no need for experimental investigations here.

A second response is that the main field to study these questions is that of philosophy. It is by abstract reason that we can determine the difference between good and evil.

But Sam Harris, by contrast, primarily advocates the use of the scientific method. Science enters the equation because it is increasingly able to identify:

  • Neural correlates (or other physical or social underpinnings) of sentient well-being
  • Cause-and-effect mechanisms whereby particular actions typically bring about particular changes in these neural correlates.

With the help of steadily improving scientific understanding, we can compare different actions based on their likely effects on sentient well-being. Actions which are likely to magnify sentient well-being are good, and those which are likely to diminish it are evil. That’s how we can evaluate, for example, the Taliban’s views on girls’ education.
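In effect, Harris is proposing an expected-value comparison over outcomes. As a deliberately toy illustration (mine, not Harris’s, with invented probabilities and scores), the calculus looks like this:

```python
# Toy expected-value comparison of actions by their likely effect on
# well-being.  Mine, not Harris's; probabilities and scores are invented.

def expected_wellbeing(outcomes):
    """outcomes: list of (probability, change_in_wellbeing) pairs."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9
    return sum(p * delta for p, delta in outcomes)

educate_girls = [(0.9, +10.0), (0.1, -1.0)]   # broad flourishing, some backlash
forbid_schooling = [(1.0, -8.0)]              # systematic loss of opportunity

print(expected_wellbeing(educate_girls))      # 8.9
print(expected_wellbeing(forbid_schooling))   # -8.0
```

The hard scientific work, of course, lies in replacing the invented numbers with real measurements of well-being and real cause-and-effect estimates.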

As Harris makes clear, this is far from being an abstract, other-worldly discussion. Cultures are clashing all the time, with lots of dramatic consequences for human well-being. Seeing these clashes, are we to be moral relativists (saying “different cultures are best for different peoples, and there’s no way to objectively compare them”) or are we to be moral realists (saying “some cultures promote significantly more human flourishing than others, and are to be objectively preferred as a result”)? And if we are to be moral realists, do we resolve our moral arguments by deference to religious tradition, or by open-minded investigation of real-world connections?

In the light of these questions, here are some arguments from Harris’s book that deserve thought:

  • There’s a useful comparison between the science of human values (the project espoused by Harris), and a science of diets (what we should eat, in order to enjoy good health).  In both cases, we’re currently far from having all the facts.  And in both cases, there are frequently several right answers.  But not all diets are equally good.  Similarly, not all cultures are equally good.  And what makes one diet better than another will be determined by facts about the physical world – such as the likely effects (direct and indirect) of different kinds of fats and proteins and sugars and vitamins on our bodies and minds.  While people still legitimately disagree about diets, that’s not a reason to say that science can never answer such questions.  Likewise, present-day disagreements about specific causes of happiness, mental flourishing, and general sentient well-being, do not mean these causes fail to exist, or that we can never know them.
  • Likewise with the science of economics.  We’re still far from having a complete understanding of how different monetary and financial policies impact the long-term health of the economy.  But that doesn’t mean we should throw up our hands and stop searching for insight about likely cause and effect.  The discipline of economics remains useful, imperfect and incomplete though it is.  The same goes for political science too.  And, likewise, for the science of the moral landscape.
  • Attempts to reserve some special area of “moral insight” for religion are indefensible.  As Harris says, “How is it that most Jews, Christians, and Muslims are opposed to slavery? You don’t get this moral insight from scripture, because the God of Abraham expects us to keep slaves. Consequently, even religious fundamentalists draw many of their moral positions from a wider conversation about human values that is not, in principle, religious.” That’s the conversation we need to progress.

PS I’ve written more about cognitive biases and cognitive dissonance – and how we can transcend these mistakes – in my blogpost “Our own entrenched enemies of reason”.

26 March 2012

Short-cuts to sharper thinking?

Filed under: bias, futurist, intelligence, nootropics — David Wood @ 11:15 pm

What are the best methods to get our minds working well? Are there ways to significantly improve our powers of concentration, memory, analysis, and insight?

Some methods for cognitive enhancement are well known:

  • Get plenty of sleep
  • Avoid distracting environments
  • Practice concentration, to build up mental stamina
  • Augment our biological memories with external records, whether in paper or electronic format, that we can consult again afterwards
  • Beware the sway of emotion – “when your heart’s on fire, smoke gets in your eyes”
  • Learn about cognitive fallacies and biases – and how to avoid them
  • Share our thinking with trusted friends and colleagues, who can provide constructive criticism
  • Listen to music which has the power both to soothe the mind and to stimulate it
  • Practice selected yoga techniques, which can provide a surge of mental energy
  • Get in touch with our “inner why”, to rekindle our motivation and focus.

Then there are lots of ideas about food and drink to partake of, or to avoid. Caffeine provides at least a transient boost to concentration. Alcohol encourages creativity but weakens accurate discernment. Sugar can provide a short-term buzz, though (perhaps) at the cost of longer-term sluggishness. Claims have been made for ginseng, ginkgo biloba, ginger, dark chocolate, Red Bull, and many other foods and supplements.

But potentially the most dramatic effects could result from new compounds – compounds that are being specially engineered in the light of recent findings about the operation of the brain. The phrase “smart drugs” refers to something that could dramatically boost our mental powers.

Think of the character Eddie in the film Limitless, and of the mental superpowers he acquired from NZT, a designer pharmaceutical.

If a real-world version of NZT were offered to you, would you take it?

(Note: NZT has its own real-world website – which is a leftover part of a sophisticated marketing campaign for Limitless.)

I foresee four kinds of answer:

  1. No such drug could be created. This is just fiction.
  2. If such a drug existed, there would be risks of horrible side-effects (as indeed – spoiler alert! – happened in Limitless). It would be foolish to experiment.
  3. If such a drug existed, it would be immoral and/or inappropriate to take it. It’s unfair to short-circuit the effort required to actually make ourselves mentally sharper.
  4. Sure, bring it to me! – especially for mission-critical situations like major exams, job interviews, client bid preparation, project delivery deadlines, and for those social occasions when it’s particularly important to make a good impression.

My own answer: even though nothing as remarkable as NZT exists today, drugs with notable mental effects are going to become increasingly available over the next decade or so.  As well as becoming more widely available, these drugs will improve in quality and reliability too.

So we’re likely to be hearing more and more of the phrases “cognitive enhancers”, “smart drugs”, and “nootropics”.  We’re all going to have to come to terms with weighing up the pros and cons of taking these enhancers.  And we’ll probably need to appreciate many variations and special cases.

Yes, there will be risks of side effects.  But it’s the same with other drugs and dietary supplements.  We need to collect and sift the evidence as it applies to each of us.

For example: on the advice of my doctors, I take a small dose of aspirin every evening, and a statin.  These drugs are known to have side-effects in some cases.  So my GP ensured that I had a blood test after I’d been taking the statin for a while, to check there were no signs of the most prevalent side-effect.  In due course, genomic sequencing might identify which people are more susceptible to particular side-effects.

Similarly with nootropics: the best effects are likely to arise from tailoring doses to the special circumstances of individual people, and from monitoring for unusual side effects.
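One concrete way to collect and sift that evidence is an N-of-1 self-experiment: alternate periods on and off a supplement, score some outcome each day, and check whether the observed difference could plausibly be chance. Here’s a minimal sketch, with invented data:

```python
# Minimal N-of-1 self-experiment analysis: a permutation test on the
# difference in mean daily scores between 'on' and 'off' days.
# The data below are invented for illustration.

import random
from statistics import mean

on_days  = [7.1, 6.8, 7.4, 7.0, 6.9, 7.3]   # e.g. self-rated focus, 0-10
off_days = [6.2, 6.5, 6.1, 6.8, 6.3, 6.4]

observed = mean(on_days) - mean(off_days)

pooled = on_days + off_days
random.seed(0)
count = 0
trials = 10_000
for _ in range(trials):
    # Shuffle the pooled scores and re-split: how often does chance
    # alone produce a difference at least as large as the one observed?
    random.shuffle(pooled)
    fake = mean(pooled[:len(on_days)]) - mean(pooled[len(on_days):])
    if fake >= observed:
        count += 1

print(f"observed difference: {observed:.2f}")
print(f"one-sided p-value:   {count / trials:.4f}")
```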

There’s already lots of information online about various nootropics.  For example, see this Nootropics FAQ.  That’s a lot to take in!

Personally, for the next few years, I expect to continue to focus my own cognitive enhancement project on the methods I listed at the start of this article.  But I want to keep myself closely informed about developments in nootropics.  If the evidence of substantive beneficial effect becomes clearer, I’ll be ready to take full advantage.

Hmm, the likelihood is that I’m going to need to become smarter, in order to figure out when it’s wise to try to make myself smarter again by taking one or more nootropics.  But that first-stage mental enhancement can happen by immersing myself in a bunch of other smart people…

That’s one reason I’m looking forward to the London Futurist Meetup on the subject of nootropics that is taking place this Thursday (29th March), from 7pm, in the Lord Wargrave pub at 42 Brendon Street, London W1H 5HE.  It’s going to be a semi-informal discussion, with attendees being encouraged to talk about their own experiences, expectations, hopes, and fears about nootropics.  Hopefully, the outcome will be improved collective wisdom!

20 July 2008

Rationally considering the end of the world

Filed under: bias, prediction markets, risks — David Wood @ 8:38 pm

My day job at Symbian is, in effect, to ensure that my colleagues in the management team don’t wake up to some surprising news one morning and say, “Why didn’t we see this coming?”. That is, I have to anticipate so-called “Predictable surprises”. Drawing on insight from both inside and outside the company, I try to keep my eye on emerging disruptive trends in technology, markets, and society, in case these trends have the potential to reach some kind of tipping point that will significantly impact Symbian’s success (for good, or for ill). And once I’ve reached the view that a particular trend deserves closer attention, it’s my job to ensure that the company does devote sufficient energy to it – in sufficient time to avoid being “taken by surprise”.

For the last few days, I’ve pursued my interest in disruptive trends some way outside the field of smartphones. I booked a holiday from work in order to attend the conference on Global Catastrophic Risks that’s been held at Oxford University’s James Martin 21st Century School.

Instead of just thinking about trends that could destabilise smartphone technology and smartphone markets, I’ve been immersed in discussions about trends that could destabilise human technology and markets as a whole – perhaps even to the extent of ending human civilisation. As well as the more “obvious” global catastrophic risks like nuclear war, nuclear terrorism, global pandemics, and runaway climate change, the conference also discussed threats from meteor and comet impacts, gamma ray bursts, bioterrorism, nanoscale manufacturing, and super-AI.

Interesting (and unnerving) as these individual discussions were, what was even more thought-provoking was the discussion on general obstacles to clear thinking about these risks. We all suffer from biases in our thinking that operate at both individual and group levels. These biases can kick into overdrive when we begin to contemplate global catastrophes. No wonder some people get really hot and bothered when these topics are discussed, or else suffer strong embarrassment and seek to change the topic. Eliezer Yudkowsky considered one set of biases in his presentation “Rationally considering the end of the world”. James Hughes covered another set in “Avoiding Millennialist Cognitive Biases”, as did Jonathan Wiener in “The Tragedy of the Uncommons” and Steve Rayner in “Culture and the Credibility of Catastrophe”. There were also practical examples of how people (and corporations) often misjudge risks, in both “Insurance and catastrophes” by Peter Taylor and “Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes” by Toby Ord and co-workers.

So what can we do to set aside biases and get a better handle on the evaluation and prioritisation of these existential risks? Perhaps the most innovative suggestion came in the presentation by Robin Hanson, “Catastrophe, Social Collapse, and Human Extinction”. Robin is one of the pioneers of the notion of “Prediction markets”, so perhaps it is no surprise that he floated the idea of markets in tickets to safe refuges, where occupants would have a chance of escaping particular global catastrophes. Some audience members appeared to find the idea distasteful, asking “How can you gamble on mass death?” and “Isn’t it unjust to exclude other people from the refuge?” But the idea is that these markets would allow a Wisdom of Crowds effect to signal to observers which existential risks were growing in danger. I suspect the idea of these tickets to safe refuges will prove impractical, but anything that helps us escape from our collective biases on these literally earth-shattering topics will be welcome.
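For readers unfamiliar with the mechanics behind such markets, here is a minimal sketch of a logarithmic market scoring rule (LMSR) market maker, the device Hanson proposed for exactly this kind of signalling. The code is my own illustration, not anything presented at the conference:

```python
# Minimal logarithmic market scoring rule (LMSR) market maker, the
# mechanism Hanson proposed for prediction markets.  Illustrative only.

from math import exp, log

class LMSRMarket:
    def __init__(self, n_outcomes: int, b: float = 100.0):
        self.q = [0.0] * n_outcomes   # shares sold of each outcome
        self.b = b                    # liquidity parameter

    def cost(self, q) -> float:
        return self.b * log(sum(exp(qi / self.b) for qi in q))

    def price(self, i: int) -> float:
        """Current probability the market assigns to outcome i."""
        total = sum(exp(qi / self.b) for qi in self.q)
        return exp(self.q[i] / self.b) / total

    def buy(self, i: int, shares: float) -> float:
        """Buy shares of outcome i; returns the trader's cost."""
        new_q = self.q[:]
        new_q[i] += shares
        fee = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return fee

# Two outcomes: a 'refuge ticket' pays out iff the catastrophe occurs.
market = LMSRMarket(2)
print(market.price(0))            # 0.5 before any trading
market.buy(0, 30.0)               # traders buy 'catastrophe' shares...
print(round(market.price(0), 3))  # ...and the signalled risk rises to ~0.574
```

The liquidity parameter b governs how quickly prices move in response to trades; prices can be read directly as the crowd’s aggregated probability estimate.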

(Aside: Robin and Eliezer jointly run a fast throughput blog called “Overcoming bias” that is dedicated to the question “How can we obtain beliefs closer to reality?”)

Robin’s talk also contained the memorable image that the problem with slipping on a staircase isn’t that of falling down one step, but of initiating an escalation effect of tumbling down the whole staircase. Likewise, the biggest danger of the risks covered in the conference isn’t that any one of them occurs in isolation, but that one might trigger a series of inter-related collapses. On a connected point, Peter Taylor mentioned that the worldwide re-insurance industry would have collapsed altogether if a New Orleans-scale weather-induced disaster had followed hot on the heels of the 9-11 tragedies – the system would have had no time to recover. It was a sobering reminder of the potential fragility of much of what we take for granted.
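Robin’s staircase image lends itself to a toy simulation (again, entirely my own invention, with made-up parameters): model each failed component as knocking out each component that depends on it with some probability, and ask how often a single slip becomes a full tumble.

```python
# Toy cascading-failure model of the staircase image: my own invention,
# with invented parameters.  Each failed component knocks out each
# component that depends on it with probability p.

import random

def cascade_size(dependents, first_failure, p=0.4, rng=random):
    """dependents[i] = list of components that rely on component i."""
    failed = {first_failure}
    frontier = [first_failure]
    while frontier:
        current = frontier.pop()
        for nxt in dependents[current]:
            if nxt not in failed and rng.random() < p:
                failed.add(nxt)
                frontier.append(nxt)
    return len(failed)

# A 'staircase': each step depends on the one above it.
steps = 10
dependents = {i: [i + 1] for i in range(steps - 1)}
dependents[steps - 1] = []

random.seed(0)
sizes = [cascade_size(dependents, 0) for _ in range(10_000)]
full_tumble = sum(s == steps for s in sizes) / len(sizes)
print(f"chance a single slip becomes a full tumble: {full_tumble:.4f}")
```

Even with these toy numbers, the point survives: the expected damage is dominated not by the single slip but by the tail where everything connected goes down together.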

Footnote: For other coverage of this conference, see Ronald Bailey’s comments in Reason. There’s also a 500+ page book co-edited by Nick Bostrom and Milan Cirkovic that contains chapter versions of many of the presentations from the conference (plus some additional material).
